The Skill Shift: What Enterprise Developers Need to Learn Now

I spent four years in mechanical engineering school learning how to write problem statements. Not how to solve problems. How to frame them.
At the time, it felt like busywork. I wanted to get to the equations, the calculations, the actual engineering. But my professors kept dragging us back to the same question: "What problem are you actually trying to solve?"
That training has become more valuable in the last two years than anything I learned about thermodynamics or fluid mechanics. And it's becoming the dividing line between developers who thrive with generative models and those who get left behind.
What My Engineering Degree Actually Taught Me
When I work with developers who have traditional CS backgrounds, I notice a pattern. They often start from "here's the problem we're solving" as handed to them through requirements. They're ready to implement. They want to get to the code.
I keep going back to the customer. Explain the problem you're trying to solve. Not the solution you want. Not the feature you think you need. The actual problem.
This isn't because CS-trained developers can't do this. They absolutely can. But it's not drilled into them the way it was drilled into me. In engineering school, we spent serious time learning to write problem statements. It sounds mundane until you realize how many projects fail because the team built the wrong thing correctly.
Classical computer science education often focuses on data structures, algorithms, and implementation details. Mechanical engineering taught me to think in systems. Inputs, outputs, feedback loops, constraints. Both are valuable. But when so much implementation detail is being automated, systems thinking becomes the differentiator.
The Doctor Metaphor
I use this analogy a lot: working with stakeholders is like going to the doctor. A patient comes in saying their stomach hurts, and they've done some research online. They think it's appendicitis. Here's the thing: a good doctor doesn't dismiss that research. But they also don't skip the examination and go straight to surgery.
They run tests. They ask questions. They use their experience to investigate the actual problem based on the symptoms.
A few months ago, I was working with a client in oil and gas. Smart technical people. They know their domain cold. And they came to me with a fully formed solution: here's how we think you should build this feature.
The instinct is to just build what they asked for. They're the experts in their domain, right? But when I started asking questions about the underlying problem, it became clear they hadn't fully diagnosed it themselves. They'd jumped straight from pain to treatment.
The fix wasn't what they originally proposed. It was simpler in some ways, more complex in others. But it actually solved the problem they were experiencing rather than the problem they thought they had.
That's what developers need to be doing now. And it's a skill that has nothing to do with whether you can implement a binary search from scratch.
The Real Shift Isn't About Syntax
The ability to write syntactically correct code from memory is becoming less valuable by the month. I'm not saying it's worthless. You still need to read and understand code. But memorizing method signatures and API patterns? That's increasingly automated.
What's not automated is the ability to look at a business problem and frame it correctly. To push back when a product owner hands you a solution disguised as a requirement. To recognize when the customer is describing a symptom rather than the actual disease.
A 200,000-token context window sounds huge until you actually work with a real application. When I started my personal website project, the model could hold the whole thing in context. Then I added Google OAuth integration. Amazon SES for email. A blog feature. Complex relational database stuff. User model changes for OAuth.
It didn't take long before the model couldn't see the whole application. That's typical. If your application is properly modularized and you're managing context well, 200K tokens is sufficient for what you need in any given session. But you have to curate that context carefully.
People with massive legacy monoliths struggle with this. Five million lines in one repo? The AI can't handle it. You have to build intermediate structures: indexes, abstractions, architectural documentation that the model can reference to understand the larger scope.
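To make that concrete, here's a minimal sketch of the kind of index I mean. It's not a real tool, just std-only Rust that walks a source tree and writes a one-line summary per file, so the model can read a short map instead of the whole repo. The paths and the output format are assumptions:

```rust
use std::fs;
use std::io;
use std::path::Path;

fn index_dir(dir: &Path, out: &mut Vec<String>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            index_dir(&path, out)?;
        } else if path.extension().is_some_and(|ext| ext == "rs") {
            // Use the file's first line (often a module doc comment) as its summary.
            let source = fs::read_to_string(&path)?;
            let first_line = source.lines().next().unwrap_or("").to_string();
            out.push(format!("{} :: {}", path.display(), first_line));
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut index = Vec::new();
    index_dir(Path::new("src"), &mut index)?;
    // A compact map the model can load instead of the whole codebase.
    fs::write("ARCHITECTURE_INDEX.md", index.join("\n"))
}
```

The script itself is throwaway. The point is that the model gets a few hundred lines of map instead of millions of lines of source, and you decide what goes in the map.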
This is systems thinking applied to AI tooling. Understanding constraints, working within them, creating structures that enable the work to happen.
The Communication Bottleneck
The JetBrains 2025 survey found that developers now say internal collaboration and communication matter as much to their performance as technical tools do. That matches what I've experienced.
I learned this lesson the hard way on a legacy rewrite project. The situation: old VB.NET application, junior developers on the team, and I was the architect. For various reasons, I didn't have the bandwidth to closely review everything the other developers were doing.
The project stalled. Architectural problems piled up. The juniors were stuck.
So I took the reins. In one week, I basically redid the application. Got all the features implemented. Did what probably would have taken two months five years ago.
I thought I saved the day.
It completely backfired. The blowback was immediate and intense. People were upset. Communication had broken down. I'd made decisions without bringing people along.
Here's the part that still frustrates me: we went from 60% finished to 99% finished in that week. And then it took another three weeks to get to 100% because of all the fallout from my lack of communication.
Three weeks. To close a 1% gap. Because I hadn't communicated.
That's when it clicked. The technical capability doesn't matter if the communication isn't there. I can leverage AI tools to do in a week what used to take months. But if I don't bring people along, that capability is almost meaningless.
That's where projects fail now. Not in the implementation. In the communication.
Reading Code vs. Writing Code
I still spend most of my day reading code. That hasn't changed. If anything, I'm reading more than ever.
What I'm reading has shifted. I'm not looking for syntax errors. The generative models can run build tools, linters, static analysis. Rust has Clippy. The agentic tooling catches that stuff automatically.
When I review AI-generated code, I'm asking different questions:
Can I understand this? If I can't understand the code, it needs to be rewritten. Period. I'm not comfortable operating at a level where I can't reason about what's happening. People say "well, you can't understand machine code either." Right. But machine code is deterministic. The same source compiles to the same binary every time. Generative models are not deterministic. Small changes in prompts produce wildly different outputs.
Are we mixing patterns? This is a constant battle. Across different context windows, different prompts push the model into different architectural patterns. I'll have a clean three-layer architecture, and the AI keeps wanting to drop into two layers, mixing business logic with data fetching. Or in Rust, it'll try to use pseudo-OOP constructor overloading instead of the idiomatic builder pattern.
If you let pattern mixing slide, you wake up one day with three or four competing paradigms fighting each other in your codebase. I learned this the hard way when I tried to change from a single user table to a multi-table entity. I had to update 50 test functions across multiple files. That's when I realized the inconsistent patterns had created massive coupling.
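To make the Rust fight concrete, here's the idiomatic builder pattern the model keeps drifting away from. HttpClient and its fields are hypothetical stand-ins; the point is one build path with fluent optional settings instead of a pile of new_with_* constructors:

```rust
#[derive(Debug)]
struct HttpClient {
    base_url: String,
    timeout_secs: u64,
    retries: u32,
}

#[derive(Default)]
struct HttpClientBuilder {
    base_url: Option<String>,
    timeout_secs: Option<u64>,
    retries: Option<u32>,
}

impl HttpClientBuilder {
    fn base_url(mut self, url: &str) -> Self {
        self.base_url = Some(url.to_string());
        self
    }

    fn timeout_secs(mut self, secs: u64) -> Self {
        self.timeout_secs = Some(secs);
        self
    }

    fn retries(mut self, n: u32) -> Self {
        self.retries = Some(n);
        self
    }

    fn build(self) -> HttpClient {
        // Unset options fall back to sensible defaults.
        HttpClient {
            base_url: self.base_url.unwrap_or_else(|| "http://localhost".into()),
            timeout_secs: self.timeout_secs.unwrap_or(30),
            retries: self.retries.unwrap_or(0),
        }
    }
}

fn main() {
    // One constructor path with fluent optional settings, instead of
    // new(), new_with_timeout(), new_with_timeout_and_retries(), ...
    let client = HttpClientBuilder::default()
        .base_url("https://example.com")
        .timeout_secs(10)
        .build();
    println!("{client:?}");
}
```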
Was there a meeting of the minds about requirements? Does the code actually do what it should do? The AI has no context about users, about the business, about how the application actually gets used. It can't know whether it understood my intent. That's my job.
Is this the right level of abstraction? Recently I was building a feature for users to upload images into sections corresponding to parts of a product. The AI's plan had the whole part as one big component. But each section needed to be its own component within the larger part component.
When I pointed this out, the model agreed immediately. "You know what, that is a good abstraction." It wasn't wrong about the syntax. It was wrong about the architecture. And recognizing that kind of pattern, seeing what should be generalized and what shouldn't, is something humans are still better at.
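For illustration, here's a stripped-down Rust sketch of that abstraction, with all the names invented: the part composes sections, and each section owns its own images and upload behavior, instead of the part holding one flat list.

```rust
struct Image {
    url: String,
}

struct Section {
    title: String,
    images: Vec<Image>,
}

impl Section {
    // Uploads land on a section, not on the part as a whole.
    fn upload(&mut self, image: Image) {
        self.images.push(image);
    }
}

struct ProductPart {
    name: String,
    sections: Vec<Section>,
}

fn main() {
    let mut part = ProductPart {
        name: "Rear assembly".into(),
        sections: vec![Section {
            title: "Close-up".into(),
            images: Vec::new(),
        }],
    };
    part.sections[0].upload(Image { url: "photo-01.png".into() });
    println!(
        "{}: {} image(s) in '{}'",
        part.name,
        part.sections[0].images.len(),
        part.sections[0].title
    );
}
```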
The Vibe Coding Problem
There's a lot of debate about "vibe coding": developers using AI to generate code they don't fully understand. I run into the problems it causes constantly.
The common issues: off-by-one errors, confusing logical branches, conditions that are over-filtered so no case ever reaches them. These are easy mistakes for generative models to make. They look right on the surface.
How do you catch them? Testing. There's no substitute.
One nice thing about AI is that you can build a relatively robust testing framework quickly. The models are pretty good at writing basic tests that give you peace of mind and a template to expand on. Run those tests. Run the linter. Run the static analysis.
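Here's a sketch of the kind of boundary test I mean. The discount rule is made up, but the shape is real: exercise the exact edges, because that's where the off-by-one and over-filtered-branch bugs hide.

```rust
// A discount rule with the kind of boundary a generated implementation
// is prone to getting wrong. The rule itself is hypothetical.
fn discount_percent(quantity: u32) -> u32 {
    match quantity {
        0..=9 => 0,
        10..=49 => 5, // an off-by-one here (11..=49) would silently drop quantity == 10
        _ => 10,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn boundaries_are_inclusive() {
        // Test the exact edges on both sides of each threshold.
        assert_eq!(discount_percent(9), 0);
        assert_eq!(discount_percent(10), 5);
        assert_eq!(discount_percent(49), 5);
        assert_eq!(discount_percent(50), 10);
    }
}
```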
And then manual testing. Ultimately, real people use these applications. Click through the workflow. Get feedback. Listen when someone says "this isn't right" or "this is close but not quite."
The non-determinism of these models still surprises me sometimes. I was generating portrait-style images from family photos. Same prompt, same input images. One generation gave me exactly what I wanted. The next generation gave me a picture of three random guys I've never seen before in my life.
Same prompt. Completely different output. That's what non-determinism means in practice.
The New Workflow
My day doesn't look like prompt-wait-review-prompt-wait-review on a single thread.
I typically have three or four different projects or features in flight. I write a prompt, let the agent work, and switch to the next project. Review what that agent produced. Set the next step. Switch again. Review. Prompt. Switch.
It's round-robin. I'm spending 70-90% of my time reviewing output, reviewing plans, reviewing documents. Maybe 10-30% on actual prompting and generation.
Some people use git worktree to do this on a single project. Check out four different branches, have four agents working on different features simultaneously, then merge it all together. I've done it with two branches at once. It was clean because the features weren't related. If they were interrelated, it could get messy. But no messier than four junior developers working on the same repo.
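If you want to try the worktree setup, the mechanics look roughly like this. The repo layout and branch names here are invented:

```
# One linked checkout per feature branch; run one agent per directory.
git worktree add ../myapp-oauth feature/oauth
git worktree add ../myapp-blog feature/blog

git worktree list    # shows the main checkout plus the linked worktrees
```

Each directory is a full working copy sharing the same repository, so the agents never step on each other's uncommitted changes.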
The context switching is hard. Before this role, I was focused on one project at a time. So much nuance was just in my working memory, readily available. That's not the case anymore.
My system for managing it: I'm not afraid to stop and investigate when something feels off. Even if I can't remember the specific detail, if something seems wrong, I sidetrack into it. Some would say that's not best practice because you're polluting the context window. But it's necessary for making sure I'm not missing things.
What Actually Matters Now
The skills that matter most in enterprise development are shifting:
Problem framing. Understanding what you're actually trying to solve before jumping to implementation. Pushing back on solutions disguised as requirements.
Communication. Bringing people along. Explaining technical decisions in terms stakeholders understand. Not being the hero who saves the day and creates three weeks of fallout.
Architecture and patterns. Recognizing the right level of abstraction. Keeping patterns consistent. Understanding systems at a level above individual functions and files.
Reading and reviewing code. Not for syntax. For clarity, consistency, and correctness at the architectural level.
Testing rigorously. Automated tests, manual testing, user feedback. The AI can't know if it understood your intent. Only testing reveals that.
None of this means traditional CS knowledge is useless. You still need to understand what's happening. You need to be able to read the code, reason about it, catch when the model goes sideways.
But the premium is shifting from "can you implement this algorithm from memory" to "can you define the right problem, design the right system, and communicate effectively while you build it."
That's the skill shift. And it's happening faster than most developers realize.