From AI to IA

There are two camps when it comes to AI in software development, and I think they're both asking the wrong question. The evangelists promise AI will replace programmers tomorrow. The skeptics dismiss it as worthless hype. But both perspectives share a flawed premise: they're debating whether AI will replace humans when the real issue is that autonomous AI can't work the way they think it can.
The answer isn't finding middle ground between these positions. The answer is reframing the entire conversation. We're not talking about AI (Artificial Intelligence) replacing humans. We're talking about IA (Intelligence Augmentation): technology that amplifies human capabilities rather than attempting to replace human judgment.
A Note on Terminology
Throughout this piece, I'll use more precise language than the "AI" buzzword allows. When referring to the actual technology, I'll say generative models (GMs): systems that generate text, images, code, video, and other outputs based on patterns in their training data. This includes large language models (LLMs) like ChatGPT and Claude, image generators like Midjourney and DALL-E, and code assistants like GitHub Copilot.
I'll reserve "AI" for discussing the broader narrative and hype. The distinction matters: calling these systems "intelligence" accepts the premise that they think or reason. They don't. They're pattern-matching engines that generate plausible outputs.
The Two Dominant Narratives
Let me explain why both camps are operating from the wrong premise:
The AI Maximalists
These are the people saying AI will do everything. We're never going to need developers again. We won't need any white-collar workers. AI is coming for your job. Just give it user stories and it will give you an app. We're headed for a Skynet scenario where AI becomes fully autonomous.
This narrative is everywhere in tech media, on LinkedIn, in executive boardrooms. It's driving massive investment and creating enormous anxiety.
The AI Skeptics
On the other side are people saying AI is evil, or that it's worthless, that it can't do anything useful, that it's all hype. They dismiss it entirely or focus only on the risks while ignoring the actual capabilities.
This narrative shows up in developer communities, among people threatened by change, and in certain policy circles.
Both positions accept the premise that AI could theoretically replace human expertise. They just disagree on the timeline and desirability. But that premise itself is flawed.
Intelligence Augmentation: What We're Actually Building
Intelligence Augmentation (IA) isn't just a rebranding of "AI used responsibly." It's a fundamentally different framework that recognizes these systems as tools for amplifying human capabilities, not substitutes for human judgment. Here's what that means in practice:
What Generative Models (GMs) Are Good At
GMs have a massive amount of information embedded in their training data. They can generate code quickly. They can help you explore solution spaces. They can be incredible productivity multipliers when used properly within an IA framework.
When I'm building this website, GMs help me:
- Generate boilerplate code faster than I could write it
- Explore different architectural approaches quickly
- Find solutions to problems I haven't encountered before
- Translate between technologies I know and ones I'm learning
For developers with strong fundamentals, working with GMs is like having a highly knowledgeable junior developer who works instantly but needs constant supervision.
The Fundamental Problem: Why Autonomous AI Can't Work
GMs hallucinate constantly, and they don't know when they're wrong. They will very confidently tell you incorrect things and keep doubling down when challenged. You can't work autonomously with systems that behave this way. You need humans validating all output.
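That supervision requirement can be made concrete as a gate: no generated output is accepted until it passes checks a human wrote. Here's a hypothetical Python sketch; the check functions are illustrative stand-ins, not any real tool's API:

```python
from typing import Callable

# Human-authored checks encode the judgment the model lacks.
# These two are deliberately simple, illustrative examples.
def no_eval(src: str) -> bool:
    """Reject code containing an obvious injection sink."""
    return "eval(" not in src

def reviewable_size(src: str) -> bool:
    """Reject output too large for a careful human review."""
    return len(src.splitlines()) <= 500

def validated(generated: str, checks: list[Callable[[str], bool]]) -> bool:
    """Accept GM output only if every human-written check passes.
    The model's confidence plays no role; only the checks decide."""
    return all(check(generated) for check in checks)

checks = [no_eval, reviewable_size]
```

The point of the sketch is the shape, not the checks themselves: the model proposes, but acceptance criteria live entirely on the human side of the loop.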
This isn't a temporary limitation that the next model release will fix; hallucination is inherent to how these systems generate output.
This is why the "AI will replace humans" narrative is fundamentally flawed. You cannot safely deploy fully autonomous systems that hallucinate unpredictably, can't recognize their own errors, and whose decision-making processes are opaque even to their creators.
The Human Advantage
Developers aren't simply writing code. The best ones are taking business-level requirements and translating them into technical requirements while identifying issues and trade-offs. They're making nuanced judgments about:
- Architecture (how should this system be structured?)
- Performance (what are the actual bottlenecks?)
- Maintainability (will we regret this in six months?)
- Business value (is this solving the actual problem?)
- Trade-offs (what are we giving up to get this benefit?)
They're asking the questions that need to be asked before writing a single line of code.
GMs struggle with this because they don't know what they don't know. They can't reliably identify when they're making a bad assumption or missing an important consideration. They can't have the conversation where you realize the client is asking for the wrong thing.
The Real Transformation: Intelligence Augmentation in Practice
What excites me about IA isn't that technology will replace human expertise. It's that Intelligence Augmentation will transform what human experts can accomplish.
The Productivity Multiplier
Developers practicing IA (combining strong fundamentals, good communication skills, and sound judgment with GM and ML tools) will be incredibly productive. They'll be able to:
- Move faster through routine tasks
- Explore more architectural options
- Deliver better solutions because they can iterate more quickly
- Focus their time on the hard problems that require human insight
I practice IA extensively in building this website. Intelligence Augmentation lets me work in technologies I haven't used professionally (Rust, Nuxt, Postgres optimization) and learn them much faster than I could through documentation alone. But I'm validating everything, making the architectural decisions, and catching the mistakes. The GM accelerates my capabilities. It doesn't replace my judgment.
The Rising Bar
The bar is rising. If you were a mid-level developer who was never going to progress beyond that, who turned out middling quality code without asking the big questions, then yes, GMs can probably do your job better than you can.
But if you understand data structures, algorithms, architecture, and trade-offs, if you can communicate effectively and make sound judgments, then GMs make you more valuable, not less.
This is why I tell people entering the field now that they need to understand fundamentals. You can't just know enough to be productive on a web project anymore. You need to understand how things actually work from an architectural and algorithmic perspective.
If all you can do is what GMs can do, you're in trouble. But if you can do what GMs can't (understand context, make nuanced judgments, identify bad assumptions, communicate with stakeholders, think architecturally), then you're incredibly valuable.
The Broader Landscape: GM and ML
Generative models (GMs) have opened the door to other machine learning (ML) technologies being more readily utilized: text and image classification, object detection (bounding boxes), anomaly detection. These things existed before, but they weren't being widely deployed. Now that generative AI is such a buzzword, you can sneak other useful ML technologies in the back door.
We work in manufacturing and oil and gas, and there's massive opportunity for using ML in:
- Quality assurance (detecting defects in production)
- Predictive maintenance (identifying equipment likely to fail)
- Process optimization (finding inefficiencies in complex systems)
But it's always Intelligence Augmentation. GMs and ML technologies amplify human expertise, never replacing it. These systems can identify patterns in massive datasets that humans would miss. But it takes human judgment to know which patterns matter, what to do about them, and how to implement changes without creating new problems. This is IA in action: technology making human experts more capable.
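To make "IA in action" concrete at the code level, here is a deliberately simple anomaly detector (a z-score filter standing in for real ML): it flags unusual sensor readings but decides nothing; flagged items go to a human expert who knows whether the pattern matters. All names and thresholds are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_for_review(readings: list[float], threshold: float = 3.0):
    """Flag readings more than `threshold` standard deviations from
    the mean. The function only surfaces candidates; a human expert
    decides whether each one signals a real defect or failure."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) > threshold * sigma]

# A hypothetical vibration sensor: steady, then one spike.
vibration = [10.0] * 30 + [97.5]
suspects = flag_for_review(vibration)
```

A production system would use a trained model rather than a z-score, but the division of labor is the same: the algorithm narrows thousands of readings down to a short review queue, and the human supplies the judgment about what to do.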
What This Means for Different Audiences
For Companies
Don't believe the hype that AI will replace your entire development team or expert workforce. But do invest in Intelligence Augmentation. The productivity gains are real when you have people with strong fundamentals who can practice IA effectively.
Focus on:
- Training your team to practice IA (using GMs and ML tools while maintaining human judgment)
- Establishing processes for validating GM-generated output
- Identifying tasks where IA provides the biggest productivity multiplier
- Maintaining standards for architecture, quality, and human oversight
For Developers and People Entering the Field
The opportunity is still enormous, but the requirements are higher. The bar is rising. You need:
- Strong fundamentals - Data structures, algorithms, systems design, architecture. You can't just know enough to be productive. You need to understand how things actually work.
- Communication skills - Being able to talk to non-technical stakeholders, understand business requirements, and explain technical trade-offs is increasingly important.
- IA proficiency - Learn to practice Intelligence Augmentation. Figure out where GMs amplify your capabilities and where they get in your way.
- Sound judgment - Knowing when to trust GM output and when to question it. Developing processes for leveraging IA while maintaining quality.
The "code bootcamp graduate who grinds out CRUD apps" path is closing. The "developer who thinks architecturally and communicates well" path is opening wider.
My Approach: Practicing Intelligence Augmentation
This personal website is my laboratory for Intelligence Augmentation in practice. I'm using technologies I haven't used professionally:
- Rust with Actix for the backend
- Nuxt for the frontend
- Postgres with UUIDv7 for the database
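UUIDv7 is worth a quick illustration, because it's exactly the kind of design choice I validate rather than take on faith: unlike random UUIDv4, its leading 48 bits are a millisecond Unix timestamp, so primary keys sort roughly by creation time and index locality improves. A minimal stdlib-Python construction following the RFC 9562 layout (a sketch for illustration, not this site's actual code, which is Rust):

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Build a UUIDv7: a 48-bit millisecond timestamp up front,
    version and variant bits set, the rest random (RFC 9562)."""
    ts_ms = time.time_ns() // 1_000_000
    raw = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70  # set version nibble to 7
    raw[8] = (raw[8] & 0x3F) | 0x80  # set RFC variant bits
    return uuid.UUID(bytes=bytes(raw))
```

Because the timestamp leads, IDs generated later compare greater (ties within the same millisecond fall back to the random tail), which keeps B-tree inserts near the right edge of the index instead of scattering them the way UUIDv4 does.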
I'm practicing IA throughout this project, using Claude to help me learn these technologies and build the site. But I'm also validating everything, making the architectural decisions, and learning deeply about how these systems work.
The result is that I'm learning faster than I would through documentation alone, but I'm also building genuine expertise. The GM accelerates the learning curve; it doesn't replace the learning. My capabilities are augmented, not substituted.
This is Intelligence Augmentation: GMs as powerful tools that amplify skilled human capabilities, not as replacements for human expertise.
The Talks I'm Giving: Spreading the IA Message
I've been evangelizing Intelligence Augmentation to various industries: oil and gas, manufacturing, healthcare. The message is consistent:
Don't believe the hype, but don't dismiss the technology. AI won't replace your experts, but Intelligence Augmentation can make them dramatically more effective if you implement it thoughtfully.
Practice IA, not autonomous automation. The goal isn't to eliminate human judgment. It's to give humans more powerful tools that amplify their capabilities and enable better decision-making.
Invest in your people. The companies that will win are the ones that upskill their workforce to practice IA effectively, not the ones that try to replace their workforce with autonomous AI.
Maintain human oversight and standards. GMs can generate output quickly, but without strong validation processes, architectural standards, and human judgment, you'll end up with a mess that's harder to maintain than what you had before.
Where This Goes Next
Looking forward, I expect:
Short term (1-3 years):
- Intelligence Augmentation becomes standard practice across industries
- Productivity gains become obvious in organizations that implement IA effectively
- The gap widens between professionals who practice IA effectively and those who resist or misunderstand it
- We see high-profile failures from companies that tried to deploy autonomous AI without adequate human oversight
Medium term (3-7 years):
- GMs improve at certain specialized tasks (code generation, refactoring, testing), making IA even more powerful
- But the fundamental limitations (hallucination, inability to reason, not knowing what they don't know) remain inherent to the architecture
- Successful organizations have mastered Intelligence Augmentation. They've figured out the optimal division of labor between GMs and human judgment
- Professional roles evolve to emphasize human strengths while leveraging IA
Long term (7+ years):
- Hard to predict, but I'm skeptical of "AI will do everything" scenarios
- More likely: GMs become one tool among many that skilled professionals use
- The premium on human judgment, communication, and architectural thinking increases
- New specializations emerge around GM integration and validation
The Bottom Line: From AI to IA
Generative models are neither savior nor threat. They're tools, powerful ones that work best when augmenting human capabilities, not attempting to replace human judgment.
The future belongs to people who practice Intelligence Augmentation effectively: those who leverage GMs while maintaining the human capabilities that GMs can't replicate (judgment, creativity, communication, understanding context, asking the right questions).
If that's you, you have nothing to fear and much to gain. If you're trying to compete with GMs on the things GMs do well while ignoring the things only humans can do, or if you're waiting for AI to replace human expertise, you're going to struggle.
The question isn't "Will AI replace me?" The question is "How can I practice Intelligence Augmentation to become dramatically better at the work only I can do?"
That's the question I'm exploring in my own work, and it's the question I'm helping organizations answer in theirs. Not AI replacing humans. IA amplifying human potential.