
Framing Freedom

[Image: a Japanese torii gate standing at the threshold of a chaotic storm]

Gartner predicts 40% of agentic AI projects will be canceled by 2027. Not because the technology failed. Because organizations deployed agents without orientation: no clear outcomes, no cost controls, no guardrails.

Meanwhile, OpenClaw showed us what "full autonomy" actually produces: security vulnerabilities, prompt injection attacks, and a mimetic frenzy where companies forced adoption before understanding what they were adopting.

These look like opposite problems. One is too much control, the other too little. But they share the same root failure: neither understands that constraint is what gives agency its shape.

The Four Quadrants of AI Agency

Every organization deploying AI lands somewhere on two axes: Constraint and Orientation.

Low Constraint, Low Orientation. This is the "let it rip" failure. Agents doing whatever they're prompted to do. No guardrails, no direction. The result: security vulnerabilities, wasted spend, the cancellation wave. OpenClaw lives here.

High Constraint, Low Orientation. This is the Inquisitor failure. Everything prohibited by default, approved tools only, shadow IT as heresy. The result: stagnation, people going underground, governance that drives innovation into the shadows. I wrote about this in "The Signal in Shadow IT."

Low Constraint, High Orientation. This is rare and unstable. Strong vision but no structure to execute it. Usually collapses into one of the other quadrants.

High Constraint, High Orientation. This is the goal. Formative freedom. Constraints that shape rather than restrict. Direction that gives autonomy its meaning. In this quadrant, constraints aren't "No," they are "How": How does this specific tool serve our specific business objective? How does this workflow change align with where we're going? This is where companies actually ship AI to production.

The quadrant you're in isn't about how many policies you have. It's about whether your constraints express something real or just manage liability.

Both Failure Modes at Once

My friend works for a company that's Copilot-only. Copilot is the tool that integrates with SharePoint, Teams, and the rest of the Microsoft ecosystem. But it's clunky. So he ends up talking to ChatGPT constantly, running things by it on the side.

This is a company doing both failure modes simultaneously. High constraint on one axis, completely ungoverned on another. Locked down and leaking at the same time.

This is a testament to how much trouble governance leadership is having wrapping their heads around this technology. It's not like picking an ERP or choosing between Microsoft Teams and Google Workspace. Those are basically new wrappers around old things. AI is that too, in a way, except the scale is entirely different. You have access to tooling that can generate any kind of image, video, code, or analysis you want, but only if you're using the right tooling, in the right way, with the right orientation.

And the shakeout is predictable: people who get results aren't getting questioned about their methods. People are too busy being impressed by the output to ask whether it was in line with governance policy. It's a crazy time.

Tech in Search of a Problem

There's a company we're working with that has an agentic AI project that's been in pilot for over a year. Actually longer, because they were talking about it before kickoff. Still in pilot. The problem isn't the technology. The problem is they don't even really understand what they're doing.

This happens constantly. Tech in search of a problem rather than a problem in search of technology. Completely backwards. Technology is not an end unto itself. Agentic AI is not an end unto itself.

You need orientation toward a realistic business goal, and that goal needs to align with what you as a company are actually trying to accomplish. Then you ask: what are we trying to accomplish with AI specifically? Those two answers should line up.

This is basic stuff. But we're losing our minds because there's this fear of missing out, this fear of getting behind and missing the wave. Everything is controlled by private equity and shareholders, so we don't really have long-term capital management strategies. We have short-term earnings-based strategies focused on the next quarter.

A year or two ago, everybody had to do something AI. It wasn't about understanding what we could do with AI that would be valuable. It was: we need AI because everyone else is using AI.

This is the Girardian move. The object of desire is less important than the mimetic rivalry at this point. Trying to actually increase business value is secondary to not being left out. That's the insight.

The Grand Inquisitor and the CIO

Dostoevsky's Grand Inquisitor argues that freedom is a burden people can't bear. The merciful thing, he claims, is to replace it with miracle, mystery, and authority. Take away the burden of choice. Manage people through awe and control.

The locked-down CIO is the Inquisitor. Everything prohibited by default, approved tools only, shadow IT as heresy. It's governance through restriction, and it drives people underground.

But the "let it rip" agentic crowd makes the opposite error. Full autonomy, no direction, agents doing whatever they're prompted to do. This produces the cancellation wave. Same root failure, opposite symptom.

Both fail because neither understands that constraint is formative, not restrictive.

The Natural Will as System Architecture

Maximus the Confessor provides the theological architecture here. His concept of the natural will is freedom perfected through orientation toward its proper end, not through the absence of limits.

Modernity has fetishized absolute freedom: complete freedom from constraint. But constraint is what shapes things. Solving a problem without constraints isn't really solving a problem. There's no frame. How do you even begin to look at something?

You learn this in engineering early. What's my budget? What's the timeline? What environmental, economic, political, geopolitical considerations do we have? All of those things inform the problem space. There are ethical constraints, practical constraints, all sorts of things that frame the problem and guide you to a solution with the right trade-offs.

There is no perfect solution. Such a thing does not exist. There are only trade-offs. Understanding which trade-offs to make requires the correct framing of the problem. That framing comes from constraints.

Here's what the Inquisitor misses, and what Maximus gets: freedom doesn't have to be constrained by a bunch of rules. It can be constrained by orientation. By direction. By a proper place in the ontological hierarchy.

In technical terms, the natural will is a reward function or system prompt perfectly aligned with a firm's core values. Orientation produces rules. A clearly defined business outcome dictates the necessary API permissions, the required human-in-the-loop checkpoints, the boundaries of agent autonomy. Rules can be an expression of orientation; we see it with Moses coming down the mountain with the Ten Commandments. But rules can't be an end unto themselves. Without the orientation informing them, they become completely arbitrary.
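To make that concrete, here's a minimal sketch of what "orientation produces rules" can look like in code. Everything in it is hypothetical: the business outcome, the tool names, and the policy structure are illustrative, not any particular framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Constraints derived from a stated business outcome, not from a blanket prohibition."""
    business_outcome: str                                      # the orientation this agent serves
    allowed_tools: set[str] = field(default_factory=set)       # API permissions implied by the outcome
    human_checkpoints: set[str] = field(default_factory=set)   # actions that always require sign-off
    max_spend_per_run_usd: float = 0.0                         # cost control as an expression of the outcome

# Hypothetical example: an agent whose only job is drafting invoice reminders.
invoice_agent = AgentPolicy(
    business_outcome="Reduce days-sales-outstanding by drafting reminders for overdue invoices",
    allowed_tools={"read_invoices", "draft_email"},            # no "send_email": sending is a human decision
    human_checkpoints={"send_email", "adjust_payment_terms"},
    max_spend_per_run_usd=2.00,
)

def is_permitted(policy: AgentPolicy, action: str) -> bool:
    """An action is allowed only if the orientation implies it and no checkpoint is bypassed."""
    return action in policy.allowed_tools and action not in policy.human_checkpoints
```

The rules fall out of the stated outcome; change the outcome and the permissions, checkpoints, and spend limits have to be re-derived, not copy-pasted.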

This is why policy documents don't work when leadership doesn't understand where the company is or where it's going. You can't orient toward north if you don't know where you're standing.

If the natural will is the internal engine of agency, then orientation is the compass that keeps that engine from driving off a cliff.

Orientation Is Not Abstract

Orientation feels abstract until you know where you are.

North is abstract. East, south, west. All abstract concepts until I'm standing somewhere. Right now, sitting in my house dictating these thoughts, I know exactly where north is. I can point to it. North is behind me. East to my left. West to my right. South straight ahead.

If I'm orienting south right now, that's concrete. If I'm saying I'm orienting south but don't know where I am, then yes, that's abstract and impossible.

Communicating orientation well requires communicating where you are. A lot of leaders struggle with this because they don't know where they are, where their company is. That's a sad state of affairs, but it's just the way the world works.

To communicate orientation well, you have to know where you're at. Then it's as simple as pointing.

I use a thought experiment when talking about things that are so simple they're hard. Try explaining the concept of "up" to someone without using relational language like "above" or "below." When you're sitting on planet Earth and you know what up is, it's readily apparent. You don't even have to think about it. But absent the context, it becomes meaningless. What is "up" in outer space?

That's what orientation means. That's what it means to point at something. And that's what Maximus was explaining: orienting ourselves toward the good starts with an understanding of embodied reality.

A Computer Must Never Make a Management Decision

There's a quote from a 1979 IBM training manual: "A computer can never be held accountable. Therefore a computer must never make a management decision."

If you buy into my argument, nothing has changed since 1979. A computer cannot be accountable, so it should not make a decision that a human should make. AI should help humans make decisions, not make decisions for them.

But here's what's changed: the causal chain has gotten longer and more obscure.

When an agent hallucinates a price and causes a contract breach, someone is liable. When an automated system flags an employee for termination based on endpoint monitoring data, someone bears the legal and moral weight of that decision. The AI doesn't. The AI can't. The human who oriented the agent, who clicked "run," who designed the workflow: that's where accountability lands.

This is why the Inquisitor metaphor cuts so deep. The Inquisitor wanted to relieve people of the burden of freedom. But the burden doesn't disappear. It just gets obscured. The person who deploys an autonomous agent hasn't escaped the burden of decision. They've just made it harder to see who's deciding.

And this creates a legal and ethical vacuum. If no human is reviewing the output, and the agent causes harm, the liability still exists. It just lands on whoever was supposed to be orienting the system. The "let it rip" crowd isn't achieving freedom. They're achieving plausible deniability, badly.

People outsourcing their decision-making to AIs should not be in management positions.

The Timekeeping Email

This isn't abstract. My boss was talking to me the other day about an AI tool he wanted to build. It would pull data from our timekeeping software to see who hasn't filled in their time and whose descriptions are poor. He wanted the AI to send out emails automatically.

No. Nothing will frustrate people more than getting an email from an AI telling them their time is wrong when their time is fine.

Here's what we can do: have it draft emails. Have someone check and make sure all the things the AI flagged are actually correct. Then send the emails. Then train the AI on which emails it generated were actually valid. It can get better. But at the end of the day, I would never accept an automated email like that.

He got it once I explained it. He's building tooling around the same idea for another company: monitoring sites with AI, surfacing irregularities, then having a person check it out. Maintenance by exception. The AI assists. The human decides.
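As a sketch of that shape, assuming hypothetical helper callables for whatever timekeeping and email tooling is actually in place, the loop looks something like this. The point is where the human sits, not the specifics.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    employee: str
    issue: str              # what the AI flagged, e.g. "missing time entry for Tuesday"
    body: str               # the drafted email text
    approved: bool = False  # set by a human reviewer, never by the system

def review_and_send(drafts: list[Draft], reviewer_approves, send_email, record_outcome) -> None:
    """Maintenance by exception: the AI drafts, a person decides, the verdict feeds back."""
    for draft in drafts:
        # Human checkpoint: nothing goes out unless a person confirms the flag is real.
        draft.approved = reviewer_approves(draft)
        if draft.approved:
            send_email(draft.employee, draft.body)
        # Either way, record the human verdict so future drafts can improve.
        record_outcome(draft)
```

The AI never touches the send button; it only gets better at drafting because a person keeps telling it which flags were real.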

People are even talking about layoffs by AI. Examining online habits through endpoint monitoring, feeding that into AI, having AI make decisions on who gets laid off. This is the most insane thing I've ever heard of. But I'm sure people are doing it.

The difference is understanding that people make decisions, not computers.

Human-in-the-Loop as Formative

Human-in-the-loop gets treated as a bottleneck by the "let it rip" crowd and as a security blanket by the lockdown crowd. It's neither. It's what makes agentic systems actually work.

Here's the reality: at the end of every causal chain, there's a human. There will never be, in the whole history of agentic AI and all things to come, a process that an agentic AI runs that didn't at some level, at some point, start with a human.

Unless the Lord himself is going to come down and bless us with the virgin AI, untouched by human hands, operating completely independently from human direction. Unless that happens, every single agentic AI is starting from a human doing something. Creating it. Creating guidelines. Maybe just clicking "run," and God help us if that's what it is.

Recognizing this reality is the first step to designing effective agentic systems. Somewhere there has to be human judgment.

My friend is building really sophisticated agents that take requirements and turn them into functioning apps. The human in the loop in that scenario is the person creating the requirements. The human shapes what the agent does. The agent executes within that shaped space.

I wrote about this in "Nothing New Under the Sun." His thought is that every time the agent solves a problem, he can make a skill so the agent can solve that problem again. But the reality is it's difficult to know from inside the system whether a particular skill is solving a particular problem correctly. Human judgment is irreplaceable for that meta-level assessment.
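Here's a rough sketch of what that meta-level checkpoint could look like: the agent can propose a skill after solving a problem, but only a human verdict promotes it into the library. The SkillRegistry and its methods are invented for illustration, not taken from his system.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    description: str
    code: str                 # whatever artifact the agent produced
    promoted: bool = False    # only a human verdict flips this

@dataclass
class SkillRegistry:
    proposed: list[Skill] = field(default_factory=list)
    approved: list[Skill] = field(default_factory=list)

    def propose(self, skill: Skill) -> None:
        """The agent can nominate a skill it believes solved the problem."""
        self.proposed.append(skill)

    def promote(self, skill: Skill, human_verdict: bool) -> None:
        """Whether the skill actually solves the problem is judged outside the system."""
        if human_verdict:
            skill.promoted = True
            self.approved.append(skill)
        self.proposed.remove(skill)
```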

Two Companies: A Tale of Orientation

Deloitte's data shows that enterprises where leadership actively shapes governance achieve significantly greater business value. But plenty of executives are "involved" in governance while still getting it wrong. Having your name on the policy document isn't the same thing as shaping governance.

Here are two companies. Same technology landscape. Radically different outcomes.

Company A: I consulted with them on a proof-of-concept chatbot. It wrapped their existing GraphQL interface to their data as a tool and let LLMs query it. The output was pretty good. Not great, but not bad.
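The shape of that proof of concept is simple enough to sketch. The endpoint and the tool-spec format below are assumptions, not their actual implementation; the idea is just that the LLM gets one narrow, well-defined tool instead of open access to the data.

```python
import json
import urllib.request

GRAPHQL_URL = "https://example.internal/graphql"  # hypothetical endpoint

def query_graphql(query: str, variables: dict | None = None) -> dict:
    """A single tool exposed to the LLM: run a read-only GraphQL query against company data."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode("utf-8")
    request = urllib.request.Request(
        GRAPHQL_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# The tool description handed to the model, in the generic JSON-schema style most
# tool-calling APIs accept. The model composes the query; the tool only executes it.
TOOL_SPEC = {
    "name": "query_graphql",
    "description": "Run a read-only GraphQL query against the company data API.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "A GraphQL query string."},
            "variables": {"type": "object", "description": "Optional query variables."},
        },
        "required": ["query"],
    },
}
```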

They took that and said: we're going to build on this. Now they have RAG plus multiple technologies wrapped together. Their governance structure is lean. Responsive. The lead developer and decision-maker work closely together, talking daily, constant push and pull between two really smart guys. Time to production: months. They're shipping. They're iterating. They're capturing value.

Company B: Over a year in pilot. Actually longer, because they were talking about it before kickoff. Still in pilot. We don't really know who's in charge of it. They don't really have a CTO. The people they do have with relevant expertise, they're not engaging with. They're going to do it all on their own. Then when that fails, they're going to bring in an outsider who doesn't know the business, the project, or the technology, who's going to "fix it."

The sunk cost is staggering. A year of developer time, consultant fees, infrastructure spend, opportunity cost. Every month Company A is in production capturing value, Company B is burning capital on a pilot that hasn't shipped.

This isn't a personality conflict between "smart guys" and "outsiders." This is what poor orientation does to capital. It destroys it. The technology was never the problem. The absence of orientation was the problem from day one.

God bless them. We're going to do the best we can. Our job is to support them so they can draw their own conclusions and not have an easy scapegoat. It would be a huge disservice if we dropped the ball and they were able to say "this is why it didn't work, they dropped the ball, it wasn't our flawed premise or poor execution."

For their own good, I don't want that to happen.

What Effective Leadership Looks Like

Effective leadership means understanding and engaging with the tools your people want to use. You can't just lay AI over a workflow. You have to understand the workflow and then understand where AI fits into it. Where can you design a workflow change that everyone doing the workflow can understand and get on board with? A change that isn't just adding a layer of AI, but actually hands AI a problem it will solve better than people do.

These problems exist. Ingesting large amounts of information and summarizing it quickly: AI can do that orders of magnitude better than people. But making a decision based on contextual information, or on incomplete information: humans still do that far better than AI.

And AI has no accountability.

What does leadership look like? Understanding AI: what it's good at, what it's not good at. Understanding your employees' workflows, their processes, their projects, their day-to-day. Being able to engage with them at the level they're at, and letting that inform your governance strategy.

That's the only way it works. Companies capable of doing this are in the top-right quadrant: high constraint, high orientation. Formative freedom. Everyone else is burning money in one of the other three.

The AGI Curtain

The enthusiasm for AGI, for superintelligence that's going to solve all our problems and recommend the right way to live: that's essentially enslaving yourself to other people through an intermediary.

Don't pay attention to the man behind the curtain.

There's always a human behind the AI. Always human values encoded in the training data, the fine-tuning, the system prompts, the deployment decisions. An AGI that tells you how to live is just other humans telling you how to live, with extra steps and less accountability.

To treat an AI as a peer or a superior isn't just a technical error. It's an abdication of the very thing that makes leadership valuable: judgment.

I wrote about this in "Agents and Swarms and Bots, Oh My: But Who's Behind the AGI Curtain?" We're so far from anything meaningful. I'm skeptical of the idea in general. I don't even understand what AGI might mean.

AI is below humans in the ontological hierarchy. This isn't a limitation to work around. It's the structure of reality. Recognizing it is what makes agentic systems work.

Freedom Needs a Frame

The Inquisitor was wrong about freedom being a burden people can't bear. The "let it rip" crowd is wrong about autonomy without direction being freedom at all.

Freedom perfected through orientation toward its proper end. Not through the absence of limits, but through the presence of direction.

The companies getting AI right understand this. Lean governance, responsive structures, leaders who understand both the tools and the workflows. Human judgment at the center, not as a bottleneck, but as what makes the whole thing cohere.

Plot yourself on the quadrant. High constraint without orientation puts you in Inquisitor territory: your people are going underground. Low constraint without orientation puts you in cancellation territory: you're burning capital on agents that produce nothing but liability. Low constraint with high orientation is unstable: you'll collapse into chaos or rigidity.

High constraint, high orientation. That's the target. Constraints that express your actual business objectives. Orientation that everyone from the C-suite to the developer can point to.

Constraint is formative. Orientation gives freedom its shape.

That's not a restriction. That's what freedom actually is.
