The Vibepocalypse

Vibe coding went from Andrej Karpathy's February 2025 tweet to Collins Dictionary's Word of the Year in under 12 months. The adoption numbers are staggering: 92% of US developers now use AI coding tools daily, 41% of all code is AI-generated, and 25% of Y Combinator's Winter 2025 batch shipped codebases that were 95% machine-written. Big tech followed suit. Amazon and Google report 30% AI-generated code. Meta is targeting 50%. The market exploded to $4.7 billion with projections of $12.3 billion by 2027.
Non-technical founders discovered they could ship MVPs in hours instead of hiring developers. For a moment, it looked like software development had been democratized overnight.
The hangover arrived faster than anyone expected.
The Reckoning
Veracode's 2025 report found 45% of AI-generated code contains security flaws. When given a choice between secure and insecure methods, LLMs choose the insecure path half the time. Analysts now project $1.5 trillion in technical debt by 2027 from unvetted AI code.
The human cost is already visible. Employment for developers aged 22-25 has dropped nearly 20% since late 2022. CS graduates face a 7.4% unemployment rate, nearly double the national average. A Harvard study confirmed that when companies adopt generative AI, junior employment drops 9-10% within six quarters while senior employment stays flat.
The industry is discovering that vibe coding's speed advantage evaporates the moment you need to debug, maintain, or secure what you've built. Developers are calling it "prompt purgatory."
The Six-Month Wall Is Just Legacy Code on Fast-Forward
The six-month wall that developers keep hitting with vibe-coded projects is the exact same phenomenon as the 20-year-old legacy application. The pattern is identical, just compressed.
You're not focused on architecture. You probably don't even know what good architecture, good modularity, good practices, or code hygiene look like. It becomes the exact same problem. But here's the scary part: it wasn't humans making the decisions along the way. It was pattern matching.
When a human builds a legacy system, they make decisions. Bad decisions, maybe. Shortsighted decisions. But decisions with reasoning behind them. You can ask someone why they structured the authentication that way. You might not like the answer, but there's a thread to pull.
AIs don't make decisions. They pattern match. Someone comes to them with a problem and the AI matches it against training data. It's not even that decisions are being made. It's that patterns are being matched, with randomness baked in. These models are non-deterministic. They take a problem, match the pattern, and apply pseudo-randomness to the output.
Here's a concrete example of what non-deterministic means in practice. You give an AI the same prompt twice for setting up a database connection layer. The first time, it creates a clean repository pattern with dependency injection. The second time, same prompt, it hardcodes connection strings directly into your business logic. Both versions work. Both pass your initial tests. But six months later, one of them scales gracefully and the other one turns into a knot you can't untie. The randomness isn't in whether the code runs. It's in whether the architecture survives contact with reality.
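To make that concrete, here's a minimal TypeScript sketch of the two shapes described above. The table names, the `pg` usage, and the discount rule are all hypothetical; the point is that both versions compile, both run, and only one of them survives.

```typescript
import { Pool } from "pg";

type Order = { id: string; total: number };

// First run: data access behind a repository, with the connection injected.
class OrderRepository {
  // The pool is built once at the composition root and passed in, so business
  // logic never sees connection details and tests can hand in a fake.
  constructor(private readonly pool: Pool) {}

  async findById(id: string): Promise<Order | null> {
    const { rows } = await this.pool.query(
      "SELECT id, total FROM orders WHERE id = $1",
      [id],
    );
    return (rows[0] as Order) ?? null;
  }
}

// Second run, same prompt: connection details hardcoded inside the business rule.
async function applyDiscount(orderId: string): Promise<number> {
  const pool = new Pool({
    connectionString: "postgres://app:secret@prod-db:5432/orders", // baked in
  });
  const { rows } = await pool.query(
    "SELECT total FROM orders WHERE id = $1",
    [orderId],
  );
  return (rows[0].total as number) * 0.9; // data access and pricing fused together
}
```

Both pass a smoke test. Only the first can be unit tested without a live database, and only the first lets you rotate credentials without a code change.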
The real vibepocalypse isn't a developer typing prompts into Claude. It's agentic orchestration. Autonomous agents running in loops, "self-correcting" at 3 AM, making architectural decisions at scale with no human in the loop. The six-month wall becomes the six-week wall when you have agentic swarms pattern-matching their way through your entire codebase. The "turtles all the way down" scenario isn't hypothetical anymore. It's the default configuration for teams that bought the hype.
We specialize in legacy applications at SEQTEK. We've been unwinding Gordian knot codebases for a long time, and we continue to do it. The vibe coding mess is just another chapter in the same book. The stakes are higher, and there's nobody to ask about the decisions that were made.
And when there's nobody to ask, someone still has to answer. That's where the real cost shows up.
The Liability Problem Hidden Inside Every Knot
When architecture gets replaced by pattern matching, the resulting mess isn't just a code problem. It's a liability problem that the market will eventually solve by finding someone to blame.
The Lovable security incident in May 2025, where 170 apps had personal data exposed because of AI-generated vulnerabilities, is a preview. Take the recent Verizon outage that lasted a full day. How do you lose service for a full day? They haven't said what caused it because they're a big company and don't want to open themselves up to even more losses. That could easily have been a vibe-coded mistake cascading through systems nobody fully understood.
We are about to see a tsunami of lawsuits trying to figure out who's responsible. Is the engineer responsible? Kind of. But software isn't like professional engineering. If I have plans for a building, some PE somewhere signed their name on it. Nobody's doing that with software. There isn't that process.
Here's what's accelerating the reckoning: cyber insurance. Insurers are beginning to deny claims for data breaches if "sufficient human architectural oversight" cannot be proven. This is the "why now" for everything that follows. The mob doesn't just want blood. The insurance companies want a reason not to pay. When your breach claim gets denied because you can't demonstrate that a human reviewed the authentication logic your agentic swarm generated at 3 AM, the scapegoat hunt begins in earnest.
The Anthropic engineer claiming AI-generated code will be like compiled code, that AI will be like a compiler taking our prompts and turning them into perfect code, is spouting nonsense. It may seem like that on the surface, but the underlying technology makes it impossible for AI to have the same level of care as a person. Because it has no care. It's not a person. It doesn't think like us.
At the very core, regardless of how good these models get, they will still hallucinate and generate incorrect answers. Period. People do too. But when a person does that, you know who's at fault. When the AI does it, we don't know.
Maybe it ends up being the engineer who clicked accept on the prompt. But imagine when it's turtles all the way down. The AI writes the code, makes the pull request, reviews the pull request, interprets the CI/CD run. Who's at fault when an autonomous agentic swarm chooses an insecure pattern at 3 AM with no human in the loop? The engineer who designed the system is off on an island somewhere.
We're going to find out who's responsible. And unfortunately, it's going to be some poor scapegoat.
The Scapegoat Is Always the Lowest-Value Person the Mob Will Accept
This touches on Girardian mimetic theory. The scapegoat is always the lowest-value person the mob will accept.
It's like in olden times when an army loses. What will satisfy the emperor? Does he need the captain? The major? The general? Who will satisfy him?
If this is a huge deal, it's probably going to have to be somebody big. The mob behind a billion-dollar loss won't accept the junior dev who clicked accept. Maybe. Who knows.
Why will it be unfair? Because scapegoating is always unfair. That's the nature of it. The victim is innocent or somewhat innocent, and the mob is guilty.
That's what's happening right now. We're creating this culture of "get it done, you have AI." But when the knot can't be untied, when the architecture is actually a loop with no beginning and no end, someone still has to pay. The technical debt becomes legal debt. The debugging session becomes a deposition. The six-month wall becomes a courtroom exhibit. The denied insurance claim becomes the discovery process.
The market always clears. The question is who gets cleared out with it.
The Verification Tax
That programmer quote making the rounds captures something real: create 20,000 lines in 20 minutes, spend two years debugging. I don't let projects get to that point. I'm constantly stopping the AI, telling it to redo things, telling it the architecture is wrong, that it's not following good practices.
The good patterns are tucked away in the training data too. There are just a lot more bad ones. It's like a rut that the wheel keeps trying to slip into. But if you keep pushing it back on track, you can get good code out of the AI. I'm doing it consistently.
Where I see the edges of this disaster is with junior to mid-level developers adopting AI. I feel like I'm playing telephone with the LLM. They produce code and create a PR. I review it and give feedback. They put that feedback directly into the AI, update the code uncritically without understanding the feedback, and submit again. So I'm doing a higher level of prompt engineering where I have to figure out how to get the other engineer to prompt the LLM correctly.
This is the Verification Tax: the cost of auditing non-deterministic AI output, a cost that can easily exceed what it would take for a senior developer to write the code from scratch. The industry is paying an unsustainable tax on speed. My cognitive time and energy reviewing their code. Their near-zero investment in understanding it. The tight loop of "it's not working, why isn't it working?" That's exactly how you get 20,000 lines in 20 minutes and two years debugging. And somebody has to pay that tax. Right now, it's seniors like me subsidizing juniors who never learned to read the code they're shipping.
The Junior Developer Crisis
The numbers on junior developer employment are stark. The industry shift suggests a move away from foundational theory in favor of rapid generation. People graduating from coding schools are really, really struggling.
Here's the real problem. There are some diamonds in the rough who actually learned the details of how to make good software from an architectural level, from a structural level. But a lot of them just leaned into AI code generation. It's hard not to use AI. You almost have to make them do it by hand.
The on-the-ground reality is that it's very, very difficult right now. There has to be an adjustment.
What makes sense economically is that new grads have to understand there's going to be much more variation in technology salaries than there has been. Previously, you started out making high five figures, and by the time you were senior you were making low six figures. New grads need to say: I will work for whatever I can get in order to start building experience at a real company and earn the chance to demonstrate value.
Right now there's enormous opportunity for smart people to be very productive. But there's so much noise because it's difficult to discern, without time passing, the difference between a smart person who knows what they're doing and someone who's learned some tips and tricks about using AI.
Unemployment at every level means supply and demand hasn't cleared the market because the price of labor exceeds what buyers will pay. This is the same economic logic that creates scapegoats. When the system produces more liability than value, someone absorbs the loss. Right now, that's junior developers who can't demonstrate they're worth more than a subscription to Claude.
Inside Prompt Purgatory
The real prompt purgatory isn't code you never understood in the first place. It's code you were never capable of understanding.
AI will code above your level. People need to write code that's clear, not clever. It should be understandable by everybody. But AI very often creates code that's difficult to follow.
My mentor, my first tech lead when I got into the business, identified the core problem: the training data. The majority of the code these models are trained on has problems, because problem code gets the most attention. It has blog posts about it, Stack Overflow questions. There's nothing special about good code. It's unremarkable. It doesn't jump out at you. You look at it and think, "Oh yes, this is right."
Bad code? You look at it and think, "This is terrible." And here's the thing: if you look at code and think, "Man, this is super clever, this guy must be 500 IQ," that's probably bad code too. Good code is completely invisible. You don't even notice it. It just does what it should be doing and is completely understandable.
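A tiny, made-up TypeScript example of the difference. Both snippets compute the same list; only one reads like the requirement it implements.

```typescript
type User = { email: string; roles?: string[]; deletedAt?: Date | null };

const users: User[] = [
  { email: "a@example.com", roles: ["admin"] },
  { email: "b@example.com", roles: ["admin"], deletedAt: new Date() },
  { email: "c@example.com" },
];

// "Clever": one dense expression. It works, and every reader has to re-derive it.
const activeAdminEmails = users.reduce<string[]>(
  (acc, u) => (u.roles?.some((r) => r === "admin") && !u.deletedAt ? [...acc, u.email] : acc),
  [],
);

// "Clear": the same result, written so it doesn't jump out at you.
const activeAdminEmails2: string[] = [];
for (const user of users) {
  const isAdmin = user.roles?.includes("admin") ?? false;
  const isActive = user.deletedAt == null;
  if (isAdmin && isActive) {
    activeAdminEmails2.push(user.email);
  }
}
```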
The non-deterministic nature of these models is what makes purgatory inescapable. Give an AI the same authentication prompt on Monday and Friday. Monday's version uses proper token refresh logic with expiration handling. Friday's version stores credentials in local storage with no expiration check. Both work in your demo environment. Both pass the tests you wrote based on Monday's implementation. Friday's version ships to production because you didn't notice the architectural difference buried in 200 lines of plausible-looking code. Six weeks later, you're explaining to your security team why user sessions never expire. Six weeks after that, you're explaining to your insurer why they shouldn't deny your breach claim.
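Sketched in TypeScript, with hypothetical endpoint names and storage keys, the divergence looks something like this. Both versions "work" in a demo; only one has a session that can ever end.

```typescript
// Monday's version: tokens kept in memory, refreshed shortly before they expire.
type Session = { accessToken: string; refreshToken: string; expiresAt: number };

let session: Session | null = null;

async function getAccessToken(): Promise<string> {
  if (!session) throw new Error("not signed in");
  // Refresh one minute before expiry so in-flight requests don't race the cutoff.
  if (Date.now() > session.expiresAt - 60_000) {
    const res = await fetch("/api/token/refresh", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ refreshToken: session.refreshToken }),
    });
    session = (await res.json()) as Session;
  }
  return session.accessToken;
}

// Friday's version, same prompt: raw credentials in localStorage, nothing ever expires.
function login(username: string, password: string): void {
  localStorage.setItem("creds", JSON.stringify({ username, password })); // plaintext, forever
}

async function callApi(path: string): Promise<Response> {
  const { username, password } = JSON.parse(localStorage.getItem("creds") ?? "{}");
  // Re-sends the password on every request: no expiry, no refresh, no revocation.
  return fetch(path, {
    headers: { Authorization: "Basic " + btoa(`${username}:${password}`) },
  });
}
```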
Prompt purgatory looks like this: every prompt you make creates changes you can't track. Everything is interwoven. Any change affects many things. It becomes a knot that's actually a loop. There's no way to untangle it because the ends are the beginnings. It really is the knot that can't be untied.
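The "ends are the beginnings" part isn't just a metaphor. Here's the smallest version of that loop, with hypothetical module names: two modules that each need the other before either can be read, tested, or replaced on its own.

```typescript
// orders.ts (hypothetical module names)
import { loyaltyDiscount } from "./customers";

export const ordersByCustomer: Record<string, number[]> = {}; // stand-in for a real store

export function orderTotal(customerId: string, subtotal: number): number {
  // Pricing an order needs the customer's discount...
  return subtotal * (1 - loyaltyDiscount(customerId));
}

// customers.ts
import { ordersByCustomer } from "./orders";

export function loyaltyDiscount(customerId: string): number {
  // ...and the discount needs the order history, which lives back in orders.
  return (ordersByCustomer[customerId]?.length ?? 0) > 10 ? 0.1 : 0;
}
```

Two modules like this is annoying. A few hundred of them, generated by an agent that never saw the whole graph, is the knot.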
Now multiply that by agentic orchestration. Autonomous agents running correction loops, each iteration adding complexity, each "fix" introducing new patterns that conflict with patterns from three iterations ago. The agent doesn't know the architecture is collapsing. It just keeps pattern-matching, keeps "solving" the immediate problem, keeps making the knot tighter.
What do you do in those situations? Think about the Gordian knot from mythology. In some versions, Alexander cuts the knot in half. That's what you have to do. Cut that code out and start over.
People don't want to do that. They say, "It was working at this level, we tried to add a feature, now it's not working at all. I don't want to cut all this out. I want to start from where it was working and get to this new state."
That's just not going to work. That's what it means to be in purgatory. You can't progress. You're stuck. It's a slow process. You have to cut that stuff out.
A Production Incident
Here's a real example from about six months ago.
We were reworking an edge application that took data from a photo-taking robot and uploaded it to the cloud for validation. Another developer was working on this feature, not me. He thought he had the solution. He got excited. He vibe-coded and extended the solution to other, more critical and slightly nuanced use cases for the same edge tool.
He put it straight into production. It didn't work. He basically broke the process. It turned into a really big mess. It was kind of the last straw for him. He'd been struggling for a while, continually getting sidelined because he couldn't deliver quickly and effectively. This was his opportunity.
He's a nice guy. No condemnation on him. But he couldn't execute on writing good code, even with AI assistance. We had to revert. I stepped in, finished the feature, and got it working. He got let go.
Luckily it wasn't catastrophic. It was a connector piece, so we didn't lose a ton of data or drop a production database. But this is exactly it: you didn't fully understand the nuance of the code, the AI gave you a solution you thought worked, you put it into production uncritically, and you lost your job.
He became the scapegoat for a system that told him to move fast and use AI. The culture said "get it done." The code said "this works." The architecture said nothing because nobody was listening to it. He paid the Verification Tax with his career because nobody else was willing to pay it with their time.
What I Actually Do Differently
The difference between successful and unsuccessful AI-assisted development is simple: I know when it's wrong.
I don't know everything. I'm imperfect. I make mistakes. But I know how things get put together. I've been doing this a long time. I understand how to solve problems and how things should be structured. I've gotten good training from my mentors.
When I use AI, I look at the output and say: this is right, this is wrong, regardless of whether or not it works. "It works" is table stakes. I need code that works and is maintainable and understandable. Code that handles edge cases. Code that demonstrates understanding of nuance.
My process is that I let the AI do what it's going to do, but I pay attention. I'm talking with the AI, it tells me what it's doing, and when I see nonsense, I stop it.
One technique: I clear the context and set up an agent to be the contrarian, like an angry senior developer who doesn't like the code's author. I let it get nitpicky, then dig through the criticism. Some of it doesn't matter. Some reveals real issues.
Another technique: Gemini has a CLI and Claude has a CLI. I bounce them off each other. Claude says this is good code. Gemini says no, it's not, here's what I'd change. I tell Claude what Gemini thinks. Claude says Gemini doesn't know what it's talking about. Which is very healthy.
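Combining those two techniques, the loop looks roughly like this. It's a sketch, not a product: it assumes the `claude` and `gemini` CLIs are on PATH and that both accept a one-shot prompt via `-p`; adjust the flags to whatever your installed versions actually support.

```typescript
// adversarial-review.ts: bounce one model's critique off the other.
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

function ask(cli: "claude" | "gemini", prompt: string): string {
  return execFileSync(cli, ["-p", prompt], {
    encoding: "utf8",
    maxBuffer: 16 * 1024 * 1024, // model output can be long
  });
}

const file = process.argv[2]; // e.g. src/orders/repository.ts
if (!file) throw new Error("usage: adversarial-review <file>");
const code = readFileSync(file, "utf8");

// Round 1: a contrarian persona on a fresh context, told to be nitpicky.
const critique = ask(
  "gemini",
  "You are a grumpy senior engineer who does not like the author of this code. " +
    "Review it for architecture, layering, testability, and coupling. Be specific.\n\n" +
    code,
);

// Round 2: hand the critique to the other model and make it argue or concede.
const rebuttal = ask(
  "claude",
  `Another reviewer said the following about ${file}. For each point, say whether ` +
    "it's a real problem worth fixing now or noise, and why.\n\n" +
    critique,
);

console.log("--- critique ---\n" + critique);
console.log("\n--- rebuttal ---\n" + rebuttal);
```

The output is noisy. That's the point: you read both sides and make the call yourself, which is the part that can't be delegated.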
I've been writing an app to help my company with an internal process. I give Gemini an agentic persona to review the code and make recommendations about architecture. It surfaces things like: we're not using a layered architecture, this piece is not testable, the reason it's not testable is because it's tightly coupled. That's good feedback. I give that to Claude. Claude agrees and refactors. It also surfaces minor stylistic stuff that doesn't matter right now.
You drag the code back out of the rut of bad patterns. The majority of code that gets a lot of attention is probably not great. Using AI iteratively, setting up adversarial review processes: these are real tools. This is how you avoid paying the Verification Tax later. You pay it upfront, in small increments, while the architecture is still salvageable.
The Vibe Engineering Misnomer
I hate this term "vibe engineering" that's being coined now, where experienced seniors use AI to write good code through agents instead of by hand. That's not vibe anything. That's people who know what they're doing using a tool.
Is it vibe hammering if I hold a hammer and hammer in a nail? No. It's just hammering. This is nonsense talk. Virality and clout chasing. People chase terminology.
I'm just as bad. I'm calling this post "The Vibepocalypse."
But the distinction matters. The vibe is feeling your way through without knowing what you're doing. That can't be how it works. People who know what they're doing and use AI as a tool? That's just engineering with better tools.
The danger is when people conflate the two. When they see seniors using agentic orchestration effectively and conclude that the agents are doing the work. The agents are doing the typing. The senior is doing the architecture. The senior is paying the Verification Tax in real-time. The vibe coder is deferring it until the bill comes due with interest.
The Confidence Gap
I had a conversation with my sister yesterday. She had an idea for an app and asked me, "Can we vibe code the app?"
She had this problem domain where she thought AI could just solve it. I told her no, that's not how that works. You need experts in that field to solve that problem. Maybe they can use AI to make it easier, cheaper. But if you just point AI at a complex problem, here's the sad truth: it's going to tell you a solution. Every time. 10 out of 10. Never once will the AI say, "I don't know the answer to that" or "This is not something I can solve." It's always going to say, "Here's what we do."
That's the nature of the technology.
She mentioned a friend who knows a guy who builds apps for people, taking their ideas and building them out. I told her to be careful. AI is at a point where it can build a very realistic prototype demo. But taking that and making something you can actually sell takes experts. It just does.
At the end of our conversation, she said she was going to talk to this other guy. I didn't take it personally. She was skeptical of my contrarian view. But that's the thing with non-technical people right now. Their perception comes from all the AI evangelists telling them that AI can do everything and software engineering will be dead in six months.
What Happens Now
The training data problem isn't going away. My mentor and I have talked about this numerous times. How is it that generated code falls into anti-patterns? Because anti-patterns produce the most noise. They get the most attention. They dominate the training data.
My sister might go build her app with that other guy. Non-technical founders will keep shipping 95% machine-written codebases. The lawsuits will come. The scapegoats will be found. The $1.5 trillion in technical debt will come due. The insurance claims will be denied.
The junior developers facing 7.4% unemployment are the first wave of scapegoats. They absorbed the cost of a system that promised democratization and delivered liability. The next wave will be the startups that shipped fast and broke things they didn't understand. After that, the enterprises that let the agentic swarms run unsupervised, that couldn't prove "sufficient human architectural oversight" when the insurers came asking.
The Verification Tax will be paid. The only question is whether you pay it incrementally, with attention and discipline, or all at once, with your career, your company, or your insurance claim.
But for those of us who actually build software: the fundamentals haven't changed. Architecture matters. Modularity matters. Understanding matters. The tool got faster. The discipline got more important.
The vibepocalypse isn't the end of software development. It's the end of pretending you can skip the hard parts.