Prompt Engineering Is Just Communication (And That's the Point)

There's a debate happening right now about whether prompt engineering is dying or more important than ever. Every time I hear it, I get confused. Not because I don't understand both sides. Because I don't understand how anyone thinks the answer isn't obvious.
At its core, here's what happens when you talk to an LLM: your text gets translated into tokens. Those tokens get fed into the model. The model says "based on these tokens, what's the most likely next token?" and keeps doing that until some internal process decides it has enough. Where you start in that space matters. There's no way around it.
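Stripped down, that loop is small. Here's a toy sketch of it (the model interface is hypothetical and it only does greedy decoding; real systems sample, stream, and do far more), just to show that the prompt tokens are the starting position and everything else follows from them.

```typescript
// Toy sketch: the "model" is a function you pass in that scores every
// vocabulary entry given the tokens so far. Hypothetical interface,
// greedy decoding only.
type NextTokenScores = (tokens: number[]) => number[];

function generate(
  promptTokens: number[],     // where you start in the space
  scoreNext: NextTokenScores, // "given these tokens, what's most likely next?"
  stopToken: number,          // the internal "I have enough" signal
  maxNewTokens = 256,
): number[] {
  const tokens = [...promptTokens];
  for (let i = 0; i < maxNewTokens; i++) {
    const scores = scoreNext(tokens);
    const next = scores.indexOf(Math.max(...scores)); // most likely next token
    if (next === stopToken) break;
    tokens.push(next);
  }
  return tokens;
}
```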
It's just like talking to real people. The way you phrase things, your tone, the specificity of your language: all of it influences how the message gets received and processed. With LLMs we don't have body language or facial expressions. We just have text. So the text becomes everything.
Why would what you say to a pattern recognition machine stop mattering just because it's gotten really good at matching patterns? That feels backwards to me.
The Most Common Miscommunication
Here's what transfers directly from human communication: the most common miscommunication is the assumption that communication has happened.
You think you had a meeting of the minds. Then suddenly the LLM takes a left turn into the weeds and you're wondering what happened.
The fix is constant checking. Is the LLM understanding what I'm getting at? Am I understanding what it's understanding? It sounds circular, but it's real. You have to verify you're in the right pattern-matching space. That the right pathways are activated.
Our experience of this is a lot like understanding. A lot like a meeting of minds. Even though technically the LLM doesn't comprehend anything in a human sense. There's no aha moment, no light bulb. Just pattern activation.
But the experiential reality is what matters for how we work with these tools. And that experience is communication. It always has been.
The Pathways You Activate
I was talking with Claude recently about the difference between formal and colloquial language when prompting. What I learned surprised me.
The output difference might be subtle. But the language you use activates different pathways in the model. When it sees colloquial, casual language, it takes the information down pathways associated with informal communication. When it sees precise, technical language, it goes somewhere else entirely.
For me as a software engineer trying to get precise technical output, casual language is counterproductive. I'm asking for code, for architectural decisions, for specific technical solutions. The last thing I want is for the model to pattern-match into "casual conversation" mode.
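A contrived illustration of the difference (not from a real session; the task and wording are made up):

```typescript
// Same intent, different pattern-matching neighborhoods. Illustrative only.
const casualPrompt =
  "hey can you throw together something that gets rid of the dupes in this array";

const precisePrompt =
  "Write a TypeScript function dedupeBy<T, K>(items: T[], key: (item: T) => K): T[] " +
  "that removes duplicates by key, preserves the first occurrence, and runs in O(n). " +
  "No external dependencies. Keep the type signature exactly as specified.";
```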
And vice versa: if you want creative output, informal variation, something that feels human and loose, then matching that energy in your prompt makes sense.
This isn't magic. It's just how context shapes pattern matching. And that's exactly why prompt engineering isn't going anywhere.
Resolving Ambiguity Is the Whole Game
If I could tell someone just starting with AI tools one thing, it would be this: err on the side of over-communication.
You have to work pretty hard to give an LLM too much context when you're prompting it. Too little is easy. Ambiguity is everywhere. Every bit of ambiguity you can resolve improves your results.
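To make that concrete, here's an invented example (not a real ticket, and the file paths are made up). The first version is what a lot of prompts look like; the second resolves the ambiguity the first one leaves behind.

```typescript
// Invented example: under-specified vs. over-communicated.
const vague = "Fix the date formatting on the orders page.";

const specific = `
Fix the date formatting on the orders page (src/views/Orders.vue).
- Dates currently render as ISO strings ("2024-03-07T16:22:00Z").
- They should render in the user's locale, short date plus time ("3/7/24, 4:22 PM").
- Use the existing formatDate helper in src/utils/dates.ts; don't add a dependency.
- Times must respect the user's timezone from the session, not UTC.
- Don't touch the CSV export: that one intentionally stays ISO.
`;
```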
The problem is that everything is clear in your own mind. You know what you mean. It's hard in the moment to think about alternative interpretations. This is the same problem we have with human communication. You're locked into your own pattern of thinking. Seeing the other readings, the interpretations that cut across your own, takes deliberate effort.
For me, the words sit on top of the pattern I'm seeing in my head. The translation from thought to language introduces gaps. The only technique that works is attention. Stop. Read through what you're saying. Try to come at it from another angle. Just like you would if you were explaining something to another person and wanted to make sure they actually understood.
It's hard. Easier said than done. But it's the work that matters.
The Telephone Problem
I see this constantly with junior developers. They come to me with code problems. I look at their pull request, explain what's wrong, lay out the pattern I want them to follow. And then they just paste my feedback directly into an LLM and send whatever comes back straight back to me.
I don't need them to do that. If that's how we're going to work, I can talk to the LLM directly.
What I need is for them to take what I'm saying, research it, understand the why behind the feedback. Then I need them to think critically about how it applies. And finally, I need them to evaluate whether the LLM's output actually fits what we're trying to accomplish.
The disconnect is always the same: if you can't validate whether the output is good, you can't use the tool. You just become a relay station passing messages you don't understand.
When I catch this happening, I call it out directly. I try to be nice about it. "Hey, I'm looking at the new code you wrote in response to my feedback, and it really seems like you might be putting my feedback into an LLM and uncritically accepting the results."
Usually there's defensiveness first. "No, I'm not doing that."
"Okay. Can you explain this code to me and how it matches my feedback?"
And then it comes out. "Well, maybe I did go to the LLM and ask it to rewrite based on your feedback."
So I explain: if you want to use the LLM, great. It's a great tool. But try getting it to explain my feedback first. Do some research. Understand the why, not just the what. Then you can write it yourself or prompt the LLM intelligently. Either way, you're learning. You're growing. You're building the ability to validate output.
The alternative is scary. I read constantly about young developers and students losing the ability to think critically. Those of us who came up before real AI tooling had to develop these skills. The best we had was a language server suggesting symbol completions. That felt like cheating at the time compared to people who learned from books on the shelf.
Now LLMs write whole programs. The Lovables of the world generate entire apps. And if you never develop the ability to evaluate what comes out, you're not really using the tool. The tool is using you.
Validation Requires Expertise
Every single day I'm using LLMs and telling them no, that's not right. That problem hasn't been solved. Models have gotten better, but they still confidently recommend terrible approaches all the time.
I was doing discovery for a project recently. Building context for what would eventually become a SaaS product. The LLM kept pushing me toward a JavaScript/TypeScript stack using tooling that's pretty immature and completely unsuitable for enterprise software.
Here's the thing about SaaS: you're selling it to people. It's not an internal tool where nobody outside the building cares what's under the hood. Enterprise buyers ask what the technology is. If you tell them it's some trendy JS stack, there's a good chance they walk away because they know it won't scale.
But I knew that. So when the LLM gave me those recommendations, I pushed back. "Nope. Start again. That's not right."
And then the LLM did that thing they all do: "You know what? You're absolutely right. This is a terrible choice. I don't know why I recommended it."
Of course you don't know. You don't actually know anything. You're pattern matching based on what you've seen. That's why validation matters. That's why domain expertise isn't going away.
We went back and forth through several options. Each round got us closer to understanding the real requirements. What's the MVP state? How do we pick a tech stack now that won't create massive rework when we grow? The conversation refined the constraints until we landed somewhere I trusted.
The Two-Hour Double-Down
Let me tell you about Gemini.
I was in rescue mode on a project. Three months in, two weeks past deadline, the other developers dead in the water. I wasn't even supposed to be a primary developer on this one. I was brought in as an architect, but budget constraints meant I could only give light guidance. Now I was trying to save the whole thing.
The problem: a Vue app with a filter popup for table columns. When the table was short due to screen size, the popup got cut off. I needed to teleport it outside the table container and align it properly. This is exactly what Floating UI is designed for.
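For the record, the shape of the fix I had in mind is roughly this (a sketch, not the project's actual code; the element names are made up): move the popup out of the clipping container with Vue's Teleport, then let Floating UI anchor it to the column header that opened it.

```typescript
// Sketch only; element names are made up. In the component, the popup sits
// inside <Teleport to="body"> so the table's overflow rules can't clip it.
// Floating UI then positions it against the trigger element.
import { computePosition, offset, flip, shift } from '@floating-ui/dom';

async function positionFilterPopup(trigger: HTMLElement, popup: HTMLElement) {
  const { x, y } = await computePosition(trigger, popup, {
    placement: 'bottom-start',
    middleware: [offset(4), flip(), shift({ padding: 8 })],
  });
  Object.assign(popup.style, {
    position: 'absolute',
    left: `${x}px`,
    top: `${y}px`,
  });
}
```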
But Gemini was convinced Vue would handle it automatically.
I tried what it suggested. Didn't work. Gemini said there must be a CSS issue. We fiddled with CSS. Still broken. Gemini said there's some underlying cause in the DOM tree. We looked at the whole DOM. Gemini found some CSS rule "way up in the hierarchy" supposedly causing problems.
I told Gemini I thought it was hallucinating. That this doesn't work the way it thinks it works.
And Gemini doubled down. For two hours. Different approaches. New things to try. At one point it suggested I rewrite the whole application.
Finally I said: write me a standalone app that demonstrates just this one thing you think works. Nothing else.
Gemini built it. I ran it. The popup was obviously in the wrong location. Not even close to aligned. I took a screenshot and sent it back.
Finally: "Oh, I guess I was mistaken."
After two hours of absolute certainty. I joked that I was surprised it didn't tell me the screenshot was fake.
During those two hours, I genuinely thought I was going crazy. When Gemini said I'd have to rewrite the whole project, I died a little inside. This was a project already past deadline. I was in save-everything mode. The thought that I might have to rebuild from scratch was terrifying.
But I'm stubborn. I stuck with it. Had a suspicion around the third or fourth iteration that Gemini was wrong. Finally proved it.
The lesson isn't that LLMs are bad. The lesson is that you have to be able to push back. You have to know enough to recognize when the confident response is confidently wrong.
Building Context That Can't Be Found in Code
I've been building a tool that examines existing codebases and interviews users about what's happening. The purpose is building context for Claude Code. Context that can't be extracted from reading the source.
LLMs are really good at reading code. They understand syntax, patterns, structure. What they can't understand is the culture that produced the code. Why certain decisions were made. What constraints existed that aren't documented anywhere.
The problem with LLMs is never lack of information. They have more information embedded in them than we can imagine. Libraries upon libraries. More than the Library of Congress and every county library in the US and Europe mashed together. The problem is containment. Constraints. Knowing what to pay attention to.
That context is what turns good results into great results. It's what lets the model attend to the right details.
Getting the prompts right for this tool took real iteration. Early versions asked questions about things the LLM could discover by reading the code itself. Mechanically it worked, but conceptually it was pointed at the wrong things: it did what it was asked to do, it just wasn't looking at what it actually needed to see.
The trick was getting it to focus on where the code stops and culture starts. Developer preferences. Historical decisions. The intangible stuff. It took several tries to prompt it the right way: what do you need to know that you can't discern from the code itself?
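The instruction that finally worked was in roughly this spirit (paraphrased and heavily simplified here, not the tool's real prompt):

```typescript
// Paraphrased, simplified version of the idea; not the tool's actual prompt.
const interviewInstruction = `
You have already read the codebase. Do not ask about anything you can discern
from the code itself: structure, dependencies, patterns, naming.
Ask only about what the code cannot tell you:
- why key decisions were made, and what alternatives were rejected
- constraints that aren't documented (compliance, team skills, deadlines)
- team preferences and conventions enforced socially, not by tooling
- history: what was tried before, what failed, what is off-limits now
`;
```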
The Point
Prompt engineering is communication. It's always been communication.
The models will keep getting better. The abstractions will keep evolving. But you'll still be trying to get a pattern-matching system to attend to the right things. You'll still be resolving ambiguity, providing context, validating output.
If you can't explain what you want clearly enough for another intelligent entity to understand it, no amount of model improvement will save you. That was true when the entity was a human colleague. It's true now when it's an LLM.
The skills transfer. The work remains. And if you're not doing that work, you're not using these tools. You're just hoping they're doing the work for you.
They're not.