New Economics of the Agentic Firm
- olivermorris83
Inside the firm the supply of intelligence is changing. What used to be scarce is becoming abundant. What used to sit quietly in the background is becoming the bottleneck. And the management model built for human-paced knowledge work is starting to break.
That is the real significance of AI agents.
This is not just a better tool for producing files, reports, code, or drafts. It is a new unit of production. And when the unit of production changes, the economics of the company change with it.
The firm has had a supply shock
For most of modern business history, cognitive labour was scarce. Expert attention was limited. Analysis took time. Producing a first version of anything meaningful was expensive. So firms built processes to protect scarce human effort. Requirements came before execution. Review came before release. Managers allocated limited expert time carefully.
Agents change that equation. The marginal cost of trying an approach, branching into alternatives, or producing another version has collapsed.
This is a supply shock in intelligence. And whenever supply changes sharply, scarcity moves elsewhere.

Scarcity has moved
What remains scarce inside the agentic firm is not output. It is:
available context
judgment
review capacity
coordination
trust
decision rights
accountable ownership
That is the new economics. When intelligence gets cheap, context becomes capital.
By context, I do not just mean a longer prompt or better retrieval. I mean the operating knowledge that makes action useful inside a real business: priorities, constraints, exceptions, risk tolerances, commercial logic, regulatory nuance, internal standards, and the tacit understanding of how things are actually done. Some of that lives in documents. Much of it lives in people, habits, scars, and unspoken norms.
Agents can only act as well as the context they are given or can reliably retrieve. That is why context engineering matters. Not as a prompt-writing trick, but as the discipline of building the informational infrastructure that lets synthetic intelligence act coherently inside the firm. More on this later.
The frontier is uneven
This does not mean the models are uniformly reliable. They are not.
In coding, maths, and structured analytical tasks with fast feedback, they can be extraordinary. In more ambiguous domains, where truth is harder to verify and persuasion can masquerade as judgment, the weaknesses become harder to ignore: overconfidence, weak grounding, and answers that sound more polished than they are dependable.
So the frontier is uneven. Strong on tasks. Weaker on direction. And that matters because real organisations do not live entirely in one regime or the other. Work moves constantly between the testable and the interpretive.
That is why many enterprise failures will not look like dramatic breakdowns. They will look like plausible work moving in the wrong direction. Local brilliance can hide strategic drift.
Cheap cognition creates expensive management
I have felt this directly in my two years working with AI agents. The productivity gain is real. So is the management burden.
Agents never sleep. They never get tired. They never decide the team has already seen enough for one day. They generate branches, alternatives, fixes, experiments, and candidate actions continuously. That sounds like pure upside until you meet the human bottleneck on the other side.
Burnout is a real risk.
Not because the AI is struggling, but because the humans are drowning in output. Cheap cognition creates expensive management. As generation becomes abundant, review becomes the constraint. The old assumption was that production was slower than oversight. That is no longer safe.
Review does not scale with generation.
If agents can produce far more code changes, drafts, analyses, or candidate actions than humans can inspect one by one, then “review harder” is not an operating model. The bottleneck moves upstream.
From craft supervision to statistical management
There is a limited analogy here with AI itself. In a world of scarce compute, AI systems had to be handcrafted. Engineers stayed close to the rules, the architecture, the moving parts. But the bitter lesson was that once enough compute arrived, grown systems beat handcrafted ones. Scale outperformed local legibility.
Something similar may now be happening in knowledge work.
When output was scarce, managers could stay close to every line. They could read every document, inspect every change, shape every draft. Supervision was artisanal.
As agentic output becomes abundant, that breaks. We move from craft supervision to statistical management.
That means less confidence that any human will understand every local act of production. More emphasis on whether the system performs reliably enough overall. Less line-by-line inspection. More thresholds, monitoring, sampling, and exceptions. We understand less of each individual act. We govern more of the system.
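The shift from line-by-line inspection to sampling and thresholds can be sketched in a few lines. This is an illustrative toy, not a prescribed implementation; the function name, parameters, and the idea of a single `check` predicate are all assumptions for the sketch:

```python
import random

def statistical_review(outputs, check, sample_rate=0.1,
                       failure_threshold=0.05, rng=random):
    """Review a random sample of agent outputs instead of every one.

    Escalates to broader human review when the failure rate observed
    in the sample exceeds the acceptable threshold.
    """
    sample = [o for o in outputs if rng.random() < sample_rate]
    if not sample:
        return {"sampled": 0, "failure_rate": 0.0, "escalate": False}
    failures = sum(1 for o in sample if not check(o))
    rate = failures / len(sample)
    return {"sampled": len(sample), "failure_rate": rate,
            "escalate": rate > failure_threshold}
```

The design choice is the point: the manager no longer asserts that every output was seen, only that the sampled failure rate stayed under an agreed threshold.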
In that sense, we are all a little more C-suite now.
The work shifts upward. Fewer people are directly crafting every output. More people are defining thresholds, deciding what counts as acceptable performance, and carrying responsibility for systems they cannot inspect exhaustively.
This is where output evaluations (aka evals) come in. Context tells the agent what world it is operating in. Evals tell us whether it is operating well enough. Both matter, and both are a lot of work.
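At its simplest, an eval is a set of graded cases and an acceptance threshold. A minimal sketch, assuming a hypothetical `agent_fn` and per-case `grade` predicates (none of which come from any specific framework):

```python
def run_evals(agent_fn, cases, pass_threshold=0.9):
    """Score an agent against graded eval cases and decide whether
    its overall performance clears the acceptance threshold."""
    passed = sum(1 for case in cases if case["grade"](agent_fn(case["input"])))
    score = passed / len(cases)
    return {"score": score, "acceptable": score >= pass_threshold}

# Toy example: a drafting agent graded on a length constraint.
cases = [
    {"input": "summarise the Q3 report", "grade": lambda out: len(out) <= 280},
    {"input": "summarise the risk register", "grade": lambda out: len(out) <= 280},
]
```

The hard work is not the loop; it is designing cases and graders that actually capture "good enough" for the business, and keeping them current as the work changes.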
The unit of work is changing
In the previous piece, we argued that the unit of work is becoming the agent, not the file. That has deeper consequences than it first appears.
When each capable worker is effectively already paired with AI, the structure of teamwork starts to change. In software, I notice less pair programming between humans, because each coder is already paired with an AI. Work splits into more modular chunks. Projects start to look more like prefab assembly: separately generated components integrated through interfaces, tests, and constraints.
The same pattern is likely to spread elsewhere. Research, drafting, reporting, analysis, internal operations, and customer workflows can all become more parallel, more modular, and more assembly-oriented.
That does not eliminate coordination costs. It moves them.
A useful analogy here is containerisation in shipping. Containers did not simply make ports a bit faster. They introduced a new unit of production and forced the surrounding system to change: ports, cranes, warehousing, scheduling, inland transport, and capital investment. The winners were not simply the firms that bought containers. They were the ones that redesigned operations around them.
Agents may do something similar in knowledge work. They are not just faster assistants. They are a new unit of production. And once that unit arrives, the bottlenecks move: away from drafting and toward coordination, observability, evaluation, and context supply.
That is where many firms are still underestimating the challenge.
Context becomes infrastructure
If agents are the new unit of production, then context is the infrastructure that allows them to move productively through the firm.
This is why context engineering matters.
Not as a prompt-writing trick, but as an operating discipline. Priorities, constraints, exceptions, risk tolerances, commercial logic, regulatory nuance, internal standards, tool access, memory, and state all need to be available in the right form, at the right time, under the right constraints.
Eric Broda's Agentic Mesh proposes that firms need the equivalent of ports, cranes, and logistics for context: systems to extract it from operational environments, keep it fresh, compile the relevant subset for the next step, and deliver what he calls a minimum viable context. In other words, context cannot remain an artisanal activity. It has to become part of the firm’s operating infrastructure.
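One way to picture "compile the relevant subset for the next step" is a simple budgeted selection over a context store. This is my own illustrative sketch, not Broda's design; the `ContextEntry` fields and the relevance scores are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class ContextEntry:
    topic: str
    text: str
    tokens: int
    relevance: float  # 0..1, estimated relevance to the task at hand

def minimum_viable_context(entries, task_topics, budget_tokens=2000):
    """Compile the smallest useful context for the next step:
    filter to the task's topics, take the most relevant entries
    first, and stop at the token budget."""
    candidates = sorted(
        (e for e in entries if e.topic in task_topics),
        key=lambda e: e.relevance,
        reverse=True,
    )
    selected, used = [], 0
    for entry in candidates:
        if used + entry.tokens <= budget_tokens:
            selected.append(entry)
            used += entry.tokens
    return selected
```

The infrastructure problem is everything around this function: extracting the entries from operational systems, keeping their relevance and freshness honest, and doing it continuously rather than by hand.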
Prompt engineering was the prototype phase. Context engineering is the operating model.
Rights can move. Accountability does not.
This brings us to the most important point for leadership.
As agents become more capable, firms will increasingly delegate operational rights: the right to retrieve information, draft messages, run workflows, write code, make recommendations, or trigger actions under defined conditions.
Responsibilities will also spread. Teams will need to maintain context, thresholds, evals, escalation rules, and monitoring systems around those agents.
But accountability does not disappear. This is the trifecta that matters:
* Agents may hold more operational rights
* Teams may share more execution responsibilities
* But accountability remains with named humans
That is not a philosophical footnote. It is the core governance fact of the agentic firm. The strategic challenge is not simply adoption. It is deciding, with precision, where rights may be delegated, what responsibilities must be built around those rights, and who remains accountable when the system fails.

The new economics of the firm
So the internal economics of the company are shifting.
What gets cheaper:
generation
exploration
iteration
branching
first drafts
routine synthesis
execution of many cognitive sub-tasks
What remains scarce:
available context
judgment
review capacity
eval design and maintenance
trusted escalation
coordination across modules
context infrastructure
named accountability
That is the real shape of the change. The firms that thrive in this environment will not simply be those with the most agent output. They will be the ones that redesign management around abundant intelligence, limited context, and statistical control.
Because that is the new bottleneck. Not whether the machine can produce, but whether the organisation can remain coherent while it does.