Humility in the Age of AI Gurus and Hot Takes

[Image: A pair of open hands cupped together holding a soft, luminescent question mark]

Somebody from Anthropic said "Software engineering is done." We wouldn't need to look at the generated code anymore, he claimed. Just like we don't look at compiler output.

I read that and my first thought was: what a silly, silly take.

Here's the thing. A compiler is deterministic. Every single time I send the same source code through the same compiler with the same flags, I get the exact same machine code out. People argue with me about compiler optimizations and decisions the compiler makes. Yes, those exist. But they're entirely rule-based. There's no randomness. It's a direct, deterministic process.
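
To make that concrete, here's a minimal sketch, a toy illustration of my own in Python, that compiles the same source twice and checks that the output is byte-for-byte identical:

    # Compile the same source twice. A compiler is a pure function of its
    # inputs: same source in, same bytecode out, every run.
    source = "def area(r):\n    return 3.14159 * r * r\n"

    code_a = compile(source, "<demo>", "exec")
    code_b = compile(source, "<demo>", "exec")

    # Code objects compare by value in CPython, so identical input
    # yields identical output, byte for byte.
    assert code_a.co_code == code_b.co_code
    assert code_a.co_consts == code_b.co_consts
    print("identical:", code_a.co_code == code_b.co_code)

Run it as many times as you like. The assertions never fail.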

An LLM is the opposite. It's probabilistic by nature. That's how it becomes effective. If the model only ever outputs the highest-probability token, it becomes rote and brittle. The temperature setting exists precisely to introduce flexibility, which means non-determinism. The idea that we're going to stop checking generated code the way we stopped checking compiled code is just nonsensical.
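
Here's a rough sketch of what temperature does during sampling. This is a toy illustration, not any real model's decoding loop:

    import math
    import random

    # Pick a token index from raw scores (logits), softened by temperature.
    def sample_token(logits, temperature):
        if temperature == 0:
            # Greedy decoding: always the single most likely token.
            return max(range(len(logits)), key=lambda i: logits[i])
        # Softmax with temperature: higher values flatten the distribution.
        scaled = [l / temperature for l in logits]
        peak = max(scaled)
        weights = [math.exp(s - peak) for s in scaled]
        return random.choices(range(len(logits)), weights=weights)[0]

    logits = [2.0, 1.5, 0.3]  # toy scores for three candidate tokens

    print([sample_token(logits, 0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]
    print([sample_token(logits, 1.0) for _ in range(5)])  # varies run to run

At temperature zero the output is fixed. Above zero, the same input gives different tokens on different runs. That randomness is the point, and it's exactly why generated code can't be treated like compiler output.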

And here's what really gets me: if it were possible to just take business requirements and a prompt and generate working code directly, why haven't we already done that? If all we needed was the technological know-how, why didn't we do it before?

Because going from business requirements to code is a process of resolving ambiguity. That's what you're doing. Defining things. Firming things up. Navigating the nuance. That's the actual work.

The Confidence Industrial Complex

LinkedIn has become a prediction factory. For two years now, people have been saying with absolute certainty that they know exactly what's going to happen with AI.

Engineers are obsolete. Engineers will never be replaced. Junior developers are dead. Junior developers are best positioned to ride the wave. Pick your flavor.

I get it. Projecting confidence makes sense on a business platform. Nobody wants to hire the consultant who says "I'm not sure." But there's a difference between professional confidence and making claims about the future that nobody can actually know.

I found a post recently that started with a good take. The guy said he'd never felt more behind in engineering because of how quickly things are moving. I thought: yes, here's someone being honest. Then he went on to make a bunch of wild predictions and I lost him.

Two years ago they were saying AI was coming for your job. It's now early 2026. The jobs are still here. The goalposts keep moving.

Another One Bites the Dust

About 70% of 2024's AI predictions aged poorly. Even Geoffrey Hinton publicly admitted his 2016 radiologist prediction was overconfident.

When I see high-profile people quietly walk back their certainties, my reaction is basically: another one bites the dust.

This is why I back away from bold predictions. Here's what I know for sure: given how the current technology works, I do not see how anyone could truly, completely automate important work with it. It doesn't work in that capacity without a human in the loop.

But I'm willing to admit I could be wrong. People have done really impressive engineering things that seemed impossible. They could find ways to use AI more deterministically. The technology could shift direction entirely.

What I won't do is pretend I know things I don't.

When Absurdity Becomes a Feature

Here's one that made me laugh. Someone wrote a post about why "dumb AI answers are actually correct." You've seen the screenshots. Google suggests adding glue to pizza sauce. Neural networks recommend eating small rocks daily.

The post argued these answers are "discursively correct." The logic: when we ask AI how many rocks we should eat, we're not actually searching for truth. We already know the answer. Our real goal is to get something to joke about, repost, and convert into social capital. By giving absurd recommendations, AI perfectly satisfies that hidden need.

This is the kind of mental gymnastics we're doing to avoid facing the fact that we just don't understand these systems very well.

And the people who create them say the same thing. They know how things are trained. They understand the algorithms, the weights, the neural networks, how inferences happen at a high level. But the actual details of why any particular response comes out for any particular prompt? They have no idea. Nobody does.

Specific Problems for Specific People

There was a project in the oil and gas lease payment space that we didn't get. The client wanted to automate a bunch of processes with AI. I was just straight up with him: I don't think AI can do what you want it to do. Not the way you're describing it.

Another time, my boss wanted to build an AI to analyze our timekeeping data. The idea was to gather insights about billing, find people who weren't billing correctly, flag descriptions that might be problematic if a customer asked about them six months later. He wanted a system that would automatically notify people their time entries weren't right.

I told him: that just does not work. There's no way to do this without a human in the loop validating the responses.

People don't love hearing "I don't know" from the person who's supposed to be the expert. But the key is having a direction in mind to figure things out. People are okay with not knowing exactly where we are, as long as someone has a good idea of where we're going.

That's the difference between humility and helplessness. Humility says: I don't have all the answers, but I'm going to work on this specific problem until I understand it. Helplessness says: I don't know anything, good luck.

AI Is Just the Latest Thing

AI is just the latest in a long series of things where we think we've changed how the world works. Changed the nature of reality. Made ourselves into God, reforming nature in our own image.

Ecclesiastes says there's nothing new under the sun. Everything is vanity. This is just the human experience. As much as things change, they also stay the same.

There's a transcendental aspect to this. At some point you have to step back and realize: I'm not the creator of the universe. I don't know the future and never can.

When I do make predictions, I'm careful. I don't make one unless I'm pretty certain of it, or I say as much: this is what I'm thinking right now, but I don't have a lot of confidence in it.

Advice for Junior Developers (With Appropriate Uncertainty)

A lot of junior developers are hearing wildly contradictory advice. Some say they're doomed. Others say they're best positioned to ride the wave.

Here's my read, and take it with a grain of salt because I don't know the future.

If you really enjoy technology, if code speaks to you on a visceral level, if you understand it intuitively and enjoy spending time with it, then yes, this is good. Keep going.

If you got into it for the money, or because it sounded cool, and it's not making as much sense to you as you expected, I would get out while the getting's good. It's only going to get harder.

These tools make you more of what you already are. If you're sharp, self-directed, naturally curious, your output grows immensely. I've watched it happen with my colleagues who are really good. Their ability to get stuff done has exploded.

But if you need someone holding your hand the whole time, if you're just trying to get the paycheck, the differentiation is happening. That comes through even with these amazingly powerful tools.

I think about my cousin who interned at our company. His first summer, he had basically no experience beyond academics. He took directions well, built out an example application from scratch in an unfamiliar technology with some coaching. I was impressed. What really cemented it was when I talked to my boss later about bringing him back. My boss said he'd been so impressed with the work that he'd already paid him for what was supposed to be an unpaid internship. I thought: okay, this guy is going to be fine.

Then I think about a friend I worked with years ago who was kind of in it for the money. It wasn't his calling. He didn't really enjoy the work. Eventually he moved to another city and started living his dream of working in elder care. Much better for him. Had he stayed in tech until now, I think he'd be struggling. He was struggling back then. It's ten times harder now.

Signal in the Noise

The information environment went from a fire hose to a deluge. Some people can suck that up and even enjoy it. Others are getting crushed by it.

How do I handle it? I'm not entirely sure.

I've always had what I think of as pretty high discernment combined with pretty high humility. You can't commit too hard to any of it. You can't have too much pride, because pride prevents you from pivoting. And you have to pivot constantly. You can't fight the current. When it pushes you in a new direction, you go with it.

The skill that matters most right now is seeing the signal in the noise. Taking in the information and identifying the key pieces while ignoring what's non-essential. That's always been something I've been decent at. Not perfect. But I enjoy trying.

So that's what I'm doing. Keep going. Keep learning. Stay humble. Don't pretend to have everything figured out.

In a world where everyone is shouting about how they know exactly what's about to happen, maybe a little humility combined with competence is a better path forward: some earnest work toward figuring out specific problems for specific people, rather than claiming to know everything in general.

That's not as exciting as predicting the future with certainty. But it's honest. And it's what I've got.
