When output gets cheaper, value has to be redefined

AI boosts output but shifts cognitive burden to reviewers, showing why organisations must redefine value around judgment, not speed

Ling

March 30, 2026

Originally posted at e27

AI has made it easier than ever to produce work that looks finished. What it has not made easier is knowing what is actually correct, useful, or worth acting on. For many knowledge workers, that is where the real fatigue begins.

That is why the recent Harvard Business Review piece on AI-driven “brain fry” struck such a nerve. It gave language to something many teams are already feeling. AI can increase throughput while also overloading attention, judgment, and decision-making. It does not simply remove work. It changes the shape of work, and often adds new cognitive overhead in the process.

The illusion of time saved

The dominant story around AI is still too shallow. We talk about it as a time-saving device, as if its main benefit were simply getting to the first draft faster.

But in many forms of knowledge work, AI does not eliminate effort. It redistributes it.

It handles part of the drafting, summarising, and generating. In exchange, it creates new layers of evaluation, synthesis, correction, comparison, and oversight. The work becomes less linear and often more mentally taxing. Writing from scratch is demanding, but at least it is coherent. Reviewing a polished but slightly wrong AI draft is different. It requires constant vigilance. You have to verify facts, test assumptions, check tone, and make sure the strategic logic actually holds.

What AI removes in drafting time, it often adds back in review, verification, and mental strain.

Who carries the cognitive load?

The problem is that these new forms of work are not distributed equally.

In most teams, the heaviest AI burden falls on the people with the most judgment and accountability. Senior contributors, managers, and context-rich operators are the ones expected to notice when an answer is plausible but wrong, when a project plan rests on shaky assumptions, or when a polished presentation has no strategic backbone. They become the final filter before a hallucination turns into a real mistake.

So while AI may help more people produce more work, responsibility still accumulates around the same high-value individuals who cannot afford to let standards slip. A junior employee may generate a sophisticated proposal in ten seconds. A senior manager may then spend thirty minutes untangling its hidden flaws.

That redistribution of responsibility is the real story behind many claims of AI productivity.

That is also why the conversation about AI and equity cannot stop at access. Giving everyone the same tool does not automatically create a more level playing field. When the cognitive costs of using AI are distributed unevenly, democratised output can still coexist with concentrated responsibility. The visible gains may look broadly shared, while the invisible burden lands on the same few people repeatedly.

When output stops being the best measure of value

This creates a management problem much deeper than burnout.

Many organisations still define value through visible output: how many drafts were produced, how quickly work was delivered, how much material was generated. That logic made more sense when producing the artefact was the hard part. It makes less sense now.

When AI can generate slides, memos, plans, and code in seconds, output is no longer the best proxy for value. Judgment is. So is the ability to design sound workflows, create shared context, and align a team around what actually matters. These activities were always important. Now they are becoming the real bottleneck.

In the AI era, the most valuable people are not necessarily the ones producing the most artefacts. They are the ones making sure the work is worth doing, the reasoning is sound, and the team is not moving quickly in the wrong direction. They are the ones who build processes others can trust, connect fragmented efforts into shared understanding, and reduce the need for constant cleanup downstream.

If AI changes how work gets done, it also has to change what organisations reward.

The danger of performance theatre

If companies continue to reward only speed and visible output, they will reinforce an old pattern in new packaging. The strongest people will carry more invisible work. The conscientious people will catch more mistakes. The most context-rich people will spend more time aligning everyone else. Meanwhile, others may gain speed without gaining depth.

That is where AI productivity can quietly become performance theatre.

There is a familiar version of this dynamic on social media. The most visible content is not always the most thoughtful or valuable. It is often the most optimised for speed, attention, and immediate reaction. AI can create a similar distortion inside organisations. The people producing the most polished work may not be the ones contributing the most judgment. And the people doing the most valuable thinking may become less visible precisely because their work happens upstream, in review, refinement, and decision-making rather than in easily countable outputs.

Less experienced employees may appear more productive because they can generate polished outputs quickly. But polished output is not the same as developed judgment. Much of real learning comes from wrestling with ambiguity, structuring an argument, and understanding why one path works while another fails. When AI bypasses too much of that process, it can also bypass the learning.

The risk grows when AI practices remain private and inconsistent. Teams cannot see which prompts led to useful reasoning, which workflows actually reduced friction, or where human judgment was applied. Strong practices stay trapped in individuals. Weak practices spread through imitation. Junior employees learn how to generate work that looks finished without learning how to think it through.

The result can be more polished output without deeper capability behind it.

Designing for a more equitable AI workplace

The real difficulty lies less in deploying AI than in redesigning the norms and incentives around its use.

If companies want AI adoption to be sustainable and equitable, they need to recognise the invisible work that makes responsible AI use possible in the first place. That means rewarding judgment, process design, context-building, careful review, and team alignment, not just the visible outputs at the end. It also means making AI practices more transparent, so learning does not stay locked inside individual habits and hidden workflows.

A more equitable digital economy will not come from giving everyone one more AI tool. It will come from designing incentives, workflows, and team norms that do not quietly offload the cognitive bill onto a few overloaded reviewers.

What AI really changes is not only the pace of work, but the distribution of responsibility and the visibility of contribution.

If old definitions of value stay in place, AI may end up hardening the very inequalities it was supposed to help reduce.
