The AI Productivity Paradox?
Most people believe AI is already transforming workplace productivity. Tools like ChatGPT are everywhere. Workers are writing faster, coding more efficiently, and summarising dense documents in seconds. But a less popular, and increasingly important, idea is this:
The productivity gains from today’s AI tools largely accrue to the employee—not the firm.
This isn’t to say the gains aren’t real. They are. But they’re localised, private, and unstructured. An employee may finish a task in half the time, but that doesn’t mean the firm gets twice the output. Often, the surplus is absorbed by inefficiencies elsewhere: meetings, distractions, idle time. The structure of production hasn’t changed. Coordination costs remain high. Measurement is difficult. And critically, the firm struggles to capture, monitor, or even notice the marginal value being created.
This is not a new phenomenon. In the 1980s, Robert Solow famously observed: “You can see the computer age everywhere but in the productivity statistics.” The same paradox may be playing out again with generative AI—ubiquitous adoption, but elusive aggregate gains.
To understand why, it helps to revisit the economics of the firm.
In The Nature of the Firm, Ronald Coase asked: why do firms exist at all? His answer: to minimise transaction costs. When the cost of using the market—finding prices, negotiating contracts, enforcing terms—is too high, work is brought inside the firm. But AI tools like ChatGPT don’t reduce the transaction costs that underpin the firm. They reduce task-level friction, not the broader cost of coordination, delegation, or integration. The firm’s core logic remains intact.
Meanwhile, Jensen and Meckling’s theory of the firm framed it as a nexus of contracts—a structure riddled with principal-agent problems. Employees (agents) don’t always act in the interests of the employer (principal), especially when incentives diverge or outputs are hard to monitor. When employees use AI tools, they decide how, when, and to what extent to apply them. Time saved isn’t necessarily reinvested in higher output. It’s often invisible to the firm. As a result, the productivity surplus is privatised.
From a growth theory lens, the current wave of AI tools behaves like labour-augmenting technology. They make individuals more efficient, but unless firms restructure workflows or alter their production function, the returns are captured by labour, not capital. This echoes the concerns of Acemoglu and Restrepo, who have argued that automation technologies which only augment labour often yield uneven growth and exacerbate inequality, without translating into broad-based productivity gains.
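To make the labour-augmenting framing concrete, here is the textbook way such a technology enters a production function—a standard Cobb–Douglas sketch for illustration, not a model proposed in this piece:

```latex
% Labour-augmenting (Harrod-neutral) technology:
% A multiplies the effective labour input L, not capital K.
Y = F(K, A \cdot L) = K^{\alpha} (A L)^{1-\alpha}, \qquad 0 < \alpha < 1
```

The point of the argument above is that a rise in A only shows up in measured output Y if the hours it frees are actually redeployed; if workers privately absorb the saved time, A grows while Y—and the firm’s share of the surplus—does not.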
But this is where the story turns—and where the next phase of AI evolution begins.
The rise of autonomous AI agents—task-completing, workflow-integrated systems—represents a fundamentally different economic model. Agents don’t assist workers; they replace discrete units of labour, executing end-to-end tasks without constant oversight. They’re programmable, consistent, and auditable. More importantly, they’re owned and controlled by the firm, not the employee.
These agents begin to behave like capital—not merely enhancing labour but substituting for it. In doing so, they invert the current distribution of productivity gains. The surplus now accrues to the owner of the agent—the firm—not the individual worker. In Coasian terms, they reduce internal coordination costs. In Williamson’s logic, they lower the cost of managing bounded rationality and opportunism. And in Jensen and Meckling’s framework, they all but eliminate agency risk.
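The substitution story can be sketched in the same notation. One hedged way to write it—purely illustrative, with an agent stock G I am introducing for the example—is a CES aggregate in which agents and workers are imperfect substitutes:

```latex
% Illustrative only: G denotes a firm-owned stock of AI agents.
% Agents substitute for labour inside a CES aggregate; because the
% firm owns G, the returns to G accrue to capital, not to workers.
Y = K^{\alpha} \left( L^{\rho} + G^{\rho} \right)^{\frac{1-\alpha}{\rho}},
\qquad \rho \le 1
```

Under this sketch, growth in G raises output even if L is idle or shrinking—which is exactly the inversion described above: the surplus attaches to the owner of the agent rather than to the worker using a tool.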
Here’s where Hayek enters the picture.
In The Use of Knowledge in Society, Hayek argued that the central planner cannot possess all the distributed knowledge needed to make efficient decisions. Markets, through price signals, aggregate and coordinate this dispersed information. But within firms—where hierarchical planning replaces market signals—information bottlenecks remain a problem. AI agents, however, begin to solve this. Properly integrated, they process local information automatically, act on it autonomously, and feed results back into a system where no human intervention is needed. They operate like embedded market participants within the firm’s structure, effectively reducing Hayekian knowledge frictions internally.
This is why agents represent more than just another tool. They are a new kind of firm-native intelligence—one that internalises knowledge, performs work, and closes the feedback loop in a way that’s both scalable and measurable. They allow the firm to finally reconfigure its own production function, not just augment individual contributors. The result: not just faster work, but a different kind of firm.
So the emerging divide in AI is not between firms that adopt it and firms that don’t—it’s between firms that use AI as a set of tools for workers, and firms that integrate AI as systems of autonomous agents. The former democratises productivity. The latter captures it.
If the first wave of AI empowered individuals, the second will empower organisations. That’s when the productivity boom will finally show up in the data.
Anthony Butler Newsletter