
Software for Strangers

[Image: a pair of worn work gloves resting on a steel workbench]

I spent two years building software for people I'd never met. This week I finally saw them use it.

The manufacturing floor in Houston has a big screen TV right when you walk in. On it: the digital kanban board we built. Guys with iPads drag cards from station to station, and the TV updates in real time, a live operations view of the entire process.

I stood there watching them work, and something clicked into place that I'd known intellectually but never felt. All those tickets, all those features, all that code. It was always for someone. I just couldn't see them.

Remote from What

Before the trip, I had some anxiety. A lot of these guys I'd never met, not even on a call. They were layers removed from the managers I talked to in virtual meetings. Two years of work, and they existed for me mainly as assumptions baked into acceptance criteria.

Here's the thing: I'm actually reasonably good at imagining users. It's part of why I'm decent at this job. I knew they were people who worked with their hands, results-oriented, under pressure to get stuff done. I knew they didn't want software that was laborious or clunky. If the system crashed and was unrecoverable for months, they wouldn't stop working. They'd get back on paper. The TV kanban would become a dry-erase kanban.

So I wasn't totally wrong about who they were. They matched my expectations in the broad strokes. But there's a difference between imagining a user and meeting one.

What surprised me: they didn't have complaints. I expected a laundry list of things they wanted changed. Instead, they were grateful. "We really appreciate the work. It's working so much better than it has in the past. We appreciate that you guys take the time to look at our feedback and make changes."

That landed differently than I expected. It was satisfying in a way that felt almost cathartic. For two years, there was always a relational dimension to this work. I just couldn't see it because distance isn't just geographic. It's relational.

Relational Debt

Everyone talks about technical debt. Code shortcuts that compound over time. But there's another kind of debt that kills systems faster: relational debt.

Technical debt is about the "how." Relational debt is about the "who." A system accumulates relational debt when the people it serves become abstractions. When nobody is watching, nobody is caring, nobody is asking "is this still right?" The code keeps running. But the system is already dead. It just hasn't realized it yet.

Every system serves someone. Every API has a consumer. When that relationship is active, the system stays healthy. When the relationship breaks down, the system drifts.

I've seen this pattern over and over. Some API is pulling data, and downstream from that there's analytics, and then there's Jeff from accounting looking at a report. If the vendor changes their pattern and the API breaks, Jeff notices. He makes a phone call. "This isn't right anymore." Few jobs reward accepting everything uncritically; that would defeat the purpose of having a human there at all. People notice when things aren't looking the way they should.

But the systems that people don't look at? The ones nobody interacts with regularly? Those break down and stay broken.

We had a process that was supposed to run async and do some data transformations. For about a month, it just wasn't working. Typos in deployment secret values. Nobody noticed because nobody was looking at that data during that window. Holiday season, people focused elsewhere. One day someone finally used it and discovered: we don't have any data for the last month.

The diagnosis is simple: no relationship, no health check. A system with no one watching it is already dead. The code just hasn't realized it yet.

We backfilled the data. I wrote a script so we could do it again if needed. Eventually built a much more robust process. But the real fix wasn't technical. It was making sure someone was always in relationship with that data.
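The "no relationship, no health check" diagnosis suggests an obvious mechanical stand-in: a freshness check that fires when nobody has looked at the data in too long. This is a hypothetical sketch, not the real system's code; `check_freshness`, `alert_if_stale`, and the 24-hour threshold are all illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(latest_record_time, max_age=timedelta(hours=24)):
    """Return True if the newest record is recent enough to trust."""
    age = datetime.now(timezone.utc) - latest_record_time
    return age <= max_age

def alert_if_stale(latest_record_time, notify):
    """Stand in for the human who would have noticed: page someone
    when the transform output goes quiet."""
    if not check_freshness(latest_record_time):
        notify("Transform output is stale: no new records in 24 hours.")
```

A check like this wouldn't have fixed the typo in the deployment secrets, but it would have turned a month-long silent gap into a same-day page.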

Design Is the Distribution of Friction

The Houston team had this IT manager who was manually correcting 10 to 15 transfer records every single day. Transfers are for moving inventory between locations, from the field to repairs, whatever. And users just kept doing them wrong.

This guy was losing his mind over it. He had high blood pressure, diabetes. This was health-affecting stress. We joked about him needing a rage room. The frustrating part: users could have been doing the right thing. They just weren't.

Here's a rule of design: if a user is able to do something, they will do it. 10 out of 10 times. It doesn't matter if it doesn't make sense to you. It'll make sense to them in the moment for some reason. If you can't wrap your head around that, you're going to struggle in this business.

Design is the distribution of friction.

You put friction where you don't want people to go. You remove friction from the path you want them to take. That's it. That's the whole job.

So we rebuilt transfers with guardrails. Users couldn't transfer things to wrong locations. They couldn't transfer things that didn't exist. Things couldn't be in two places at once. We added autocomplete dropdowns where you type a few letters and it finds what you're looking for. We made entering correct data dead simple and entering wrong data nearly impossible.
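The guardrail idea above can be sketched as a validation pass that runs before a transfer is accepted. This is a minimal illustration, not the production schema; `VALID_LOCATIONS`, `inventory`, and the item IDs are invented for the example.

```python
# Locations a transfer is allowed to target (illustrative).
VALID_LOCATIONS = {"field", "repairs", "manufacturing", "warehouse"}

# Current location of each item; an item exists in exactly one place.
inventory = {"pump-001": "field", "valve-007": "warehouse"}

def validate_transfer(item_id, source, destination):
    """Return a list of errors; an empty list means the transfer is allowed."""
    errors = []
    if item_id not in inventory:
        # Can't transfer something that doesn't exist.
        errors.append(f"Unknown item: {item_id}")
    elif inventory[item_id] != source:
        # Can't be in two places at once.
        errors.append(f"{item_id} is not at {source}")
    if destination not in VALID_LOCATIONS:
        errors.append(f"Invalid destination: {destination}")
    if source == destination:
        errors.append("Source and destination are the same")
    return errors
```

The point isn't the specific checks; it's that wrong data never gets a chance to become a correction on someone's desk the next morning.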

Now it's maybe one correction a month. Maybe less. I don't even remember the last time the IT manager mentioned it.

Low friction to do the right thing. High friction to do the wrong thing. Then people just do the right thing. Because they want it to be right too. They're not spitefully messing up your software. They have a job to do, and they just want to get it done.

Models of Desire

The developer we replaced was a smart guy. A visionary guru type. "I know how to make this so they can use it."

His specific failure: he built features based on how he thought the process should work, not how it actually worked. The system had no messaging around when things went wrong. People would work a lot and then find out nothing had been saved. Giant swaths of the actual process had no interaction with the software at all. He'd built a pseudo sales feature that wasn't working the way salesmen actually worked. Smart, but not useful. Those are different things.

Our product owner, Andrew, dragged all the painful feedback out of users. He talked to everybody. We built a ticketing system that captures the whole operation from start to finish. Salesman makes a sale, into repair. Salesman makes an order, into manufacturing, then repair. End-to-end visibility. It was a game changer.

Here's what the guru missed: we don't even know what to build without models of desire.

Users don't just want "features." They want what their peers value. What their bosses value. What makes them look competent. The foreman values speed because he's measured on throughput. The manager values visibility because he's measured on reporting. The C-suite values data integrity because they're measured on decisions.

These desires conflict. The foreman wants fewer clicks. The manager wants more fields captured. The software becomes a site of negotiation between competing models of desire.

The developer's job is to reconcile these competing desires into a single coherent interface. You can't do that by imagining users. You have to talk to them. All of them. At every level.

You can have picture-perfect requirements that are internally coherent, and it still doesn't solve the problem of the guy on the ground. Because the requirements encoded one model of desire, and the guy on the ground is operating from another. Now you have adoption problems.

There are two kinds of managers. One comes up through the process: shop floor, then foreman, then manager. The other comes in from outside.

The ones who came up through it usually understand the current state but sometimes lag behind on innovations happening on the floor. They can be anchored so hard to current state they can't see future state.

The ones brought in from outside are usually good at envisioning future state but struggle with the reality of current state. They underestimate how much pain is involved in transitioning.

Both carry different models of desire. Both are partially right. The software has to hold both.

The Agent Question

Gartner predicts 40% of enterprise apps will embed AI agents by the end of this year. They frame it as "delegate, review, and own."

Here's my question: could an agent have noticed the IT manager's high blood pressure?

Could it have heard the frustration in his voice during those calls? Could it have felt the weight of correcting 15 transfers a day while managing a chronic illness? Could it have understood that this wasn't a data problem but a human problem, that a man was suffering because the software made it too easy to do the wrong thing?

The answer is no. And if the agent cannot feel the weight of the problem, it cannot own the solution.

I wrote recently about a friend building sophisticated automation. He keeps pushing human involvement to higher levels of abstraction rather than eliminating it. What he doesn't seem to get is that raising the level of abstraction turns small, localized issues into entire classes of problems. You embed assumptions into a higher abstraction layer, and they propagate everywhere.

An agent working for no one isn't working. It's processing. The spec, the acceptance criteria, the definition of done: these are all relational artifacts. They encode what someone wants. Remove the "for whom" and you don't just strip meaning. You strip coherence.

Correctness itself is relational. Correct according to whom? Useful for whom? Done for whom?

This is why my Intelligence Augmentation framework isn't a safety constraint on automation. It's the anti-drift mechanism. Humanity is the tether that keeps software from floating into irrelevance. Without it, you don't have work. You have processing. And processing without relationship is just heat dissipation.

Autonomous AI isn't just undesirable. It's not even a coherent goal.

What the Screen Showed

Standing on that manufacturing floor, watching the kanban board update in real time as guys dragged cards on their iPads, I felt something I didn't expect.

I'd been nervous. These were people I'd affected for two years without ever seeing. Some of them probably cursed my name when something broke. And here I was, not looking like what they expected. No MacBook and matcha. Just a plaid shirt and jeans. Looking kind of like them.

I think that disarmed people a little. They expected a tech guy. They got someone who looked blue collar.

And the reaction was all positive. Nobody had complaints. Everyone was grateful. It felt good. Really good.

The big screen TV is still there. Still showing the kanban board. Still updating in real time when someone drags a card on their iPad.

But it's different now. For two years, I was writing software for strangers. Now I can see their faces when I write a feature. I know the IT manager's health is better because he's not correcting 15 records a day. I know the guys on the floor appreciate that we listen to their feedback.

The work was always for them. The relationship was always there. I just couldn't see it.

Software for strangers became infrastructure for friends. That's what the screen showed me.
