Thousands of CEOs just admitted AI has had no impact on productivity. But that does not mean AI cannot deliver real gains — it means most organisations are using it wrong. Clinical documentation is the counter-example.
On February 17, 2026, Fortune published a piece that should have surprised nobody who has been paying attention. A National Bureau of Economic Research study surveying 6,000 executives across the US, UK, Germany, and Australia found that nearly 90% of firms reported AI had zero measurable impact on employment or productivity over the past three years. Two-thirds of executives said they use AI, but only for about 1.5 hours per week. A quarter do not use it at all.
The article invoked Robert Solow's famous 1987 observation about an earlier technology revolution: "You can see the computer age everywhere but in the productivity statistics." Economists are now asking the same question about AI. Billions invested. Massive adoption reported. Productivity gains? Invisible.
This is being called the AI Productivity Paradox. And it is real — but it is also misleading. The paradox describes what happens when organisations adopt AI as a general-purpose tool without a specific problem to solve. It does not describe what happens when AI is applied to a defined, measurable workflow with a clear before and after.
Clinical documentation is one of those workflows. And the numbers tell a very different story.
Why Most AI Adoption Produces Nothing
The Fortune data reveals a pattern that anyone in implementation science would recognise immediately. The executives surveyed are not failing because AI does not work. They are failing because they are using it without a target.
Using AI for 1.5 hours per week to "help with emails" or "brainstorm ideas" or "summarise meetings" is not a productivity strategy. It is a novelty. There is no baseline measurement, no defined task being replaced, no way to know whether the time spent prompting and reviewing AI output is less than the time spent doing the work manually.
This is exactly what the ManpowerGroup 2026 Global Talent Barometer confirmed: AI usage among workers increased 13% in 2025, but confidence in its utility dropped 18%. People are using it more and trusting it less. That is what happens when a tool is adopted without a workflow.
The productivity paradox is not about AI failing. It is about AI being deployed into the wrong problem space — or no problem space at all.
What Makes Clinical Documentation Different
Compare the vague AI adoption described in the Fortune article with how AI applies to clinical documentation:
- The task is specific and repetitive. Writing a progress note after every session, in a structured format, with clinical language, connected to a treatment plan. It happens multiple times per day, every working day, for every client.
- The baseline is measurable. Research consistently shows clinicians spend 25–50% of their working hours on administrative tasks, with documentation consuming the largest share. A therapist seeing 20 clients per week who spends 15 minutes per note is losing five hours per week to documentation alone.
- The output is structured. Clinical notes follow defined formats — SOAP, BIRP, DAP — with predictable sections, consistent clinical language, and clear quality standards. This is exactly the kind of task where AI excels: expanding structured inputs into structured outputs.
- The improvement is directly measurable. If a 15-minute note becomes a 5-minute review, you have reclaimed 10 minutes. Multiply by 20 clients per week and you have reclaimed over three hours. That is not a vague productivity claim. It is arithmetic.
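To make the "structured inputs into structured outputs" point concrete, here is a minimal sketch of a SOAP note as a fixed data structure. The field names and example content are illustrative only, not any real tool's format or API:

```python
# A SOAP note is a fixed set of sections, so AI drafting reduces to
# expanding short structured inputs into each section.
# (Field names and content below are hypothetical examples.)

from dataclasses import dataclass

@dataclass
class SOAPNote:
    subjective: str  # client's reported experience
    objective: str   # clinician's observations
    assessment: str  # clinical interpretation
    plan: str        # next steps, linked to the treatment plan

    def render(self) -> str:
        return (f"S: {self.subjective}\n"
                f"O: {self.objective}\n"
                f"A: {self.assessment}\n"
                f"P: {self.plan}")

note = SOAPNote(
    subjective="Reports improved sleep; anxiety rated 4/10.",
    objective="Engaged throughout session, congruent affect.",
    assessment="Progress consistent with treatment goals.",
    plan="Continue weekly sessions; review coping log next time.",
)
print(note.render())
```

Because the sections, their order, and their expected clinical register are fixed, the drafting task is constrained in exactly the way language models handle well.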
This is the critical difference. The CEOs in the Fortune study cannot measure their AI productivity gains because there is nothing specific to measure. A therapist using AI documentation tools can measure theirs in minutes reclaimed per day.
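The arithmetic above fits in a few lines. The per-note times and caseload below are the illustrative figures used in this article, not measured data:

```python
# Back-of-envelope estimate of documentation time reclaimed per week,
# using the article's illustrative figures (assumptions, not measurements).

MINUTES_PER_NOTE_MANUAL = 15   # writing a note by hand
MINUTES_PER_NOTE_REVIEW = 5    # reviewing an AI-drafted note
CLIENTS_PER_WEEK = 20          # sessions requiring a note each week

minutes_saved_per_note = MINUTES_PER_NOTE_MANUAL - MINUTES_PER_NOTE_REVIEW
weekly_minutes_saved = minutes_saved_per_note * CLIENTS_PER_WEEK

print(f"Saved per note: {minutes_saved_per_note} min")
print(f"Saved per week: {weekly_minutes_saved} min "
      f"({weekly_minutes_saved / 60:.1f} hours)")
# → Saved per week: 200 min (3.3 hours)
```

Swap in your own caseload and per-note times and the output is your baseline-versus-after measurement, which is precisely what the firms in the Fortune survey lack.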
The Solow Paradox Resolved: Specificity
Solow's original paradox about computers eventually resolved itself. By the mid-1990s, productivity growth surged — but only in organisations that had restructured their workflows around the technology rather than layering it on top of existing processes. Companies that gave everyone a computer and changed nothing else saw nothing. Companies that redesigned their operations around what computers made possible saw transformative gains.
AI is following the same pattern. The 90% of firms seeing no impact are the ones that added ChatGPT to their toolbar and changed nothing else. The gains will come — and are already coming — from organisations that identify specific, high-burden workflows and rebuild them around what AI makes possible.
In mental health, that workflow is documentation. The burden is well documented: clinicians spending their evenings writing notes, weekends catching up on paperwork, professional satisfaction declining as administrative demands grow. The task is clear, the pain is measurable, and the solution is straightforward — not because AI is magic, but because the problem is specific enough for AI to be useful.
Why Worker Confidence Is Dropping (And What That Means)
The Fortune article highlighted a striking disconnect: 76% of executives believe their employees are enthusiastic about AI, but only 31% of workers actually are. Sixty percent of Americans distrust AI entirely.
This is not irrational. Workers are being told AI will make them more productive, but their experience is that it creates new tasks (learning prompts, reviewing outputs, fixing errors) without eliminating old ones. The Bank of America economists quoted in the companion Fortune article put it bluntly: recent productivity gains are accruing as corporate profits, not worker benefit. Workers are doing more with AI, but their workload has not decreased and their compensation has not increased.
For therapists, this framing is particularly relevant. The documentation burden is not imposed by a corporation seeking profit. It is imposed by regulatory requirements, insurance demands, and professional standards. When AI reduces that burden, the benefit flows directly to the clinician: less time on paperwork, more time for clients, more time for life outside of work. There is no corporate intermediary capturing the gains.
This is why AI adoption in clinical documentation does not produce the same distrust. The clinician is both the worker and the beneficiary. The productivity gain is personal, immediate, and tangible — you finish your notes during the workday instead of at 10pm.
The Privacy Dimension the Paradox Ignores
The Fortune articles focus on productivity and economics. They do not address a factor that is central to AI adoption in healthcare: privacy.
Much of the AI distrust among workers is about data — who sees what they type, where it goes, what it trains. For therapists, this concern is not abstract. Clinical notes contain the most sensitive information a person can disclose. A therapist who sends session content to a general-purpose AI tool is taking a risk that most privacy frameworks were not designed to address.
This is another reason generic AI deployments fail to produce results: people do not trust them enough to use them for real work. A CEO might report that "we have AI tools available" while workers avoid using them for anything sensitive — which, in many professions, means anything that matters.
Purpose-built tools that address privacy at the infrastructure level — not just with policies, but with hardware-encrypted processing environments where data cannot be accessed even by the platform operator — change the adoption equation. When the trust problem is solved architecturally, adoption becomes a question of usefulness, not risk.
What the Paradox Gets Right
The AI productivity paradox is not wrong. It is describing something real: most organisations have spent heavily on AI and have nothing to show for it. The MIT researchers who projected a 40% boost in worker performance were measuring controlled tasks in lab conditions, not real-world adoption in messy organisations with unclear objectives and inconsistent implementation.
The lesson is not that AI does not work. It is that AI works when three conditions are met:
- The task is specific and repetitive. Not "brainstorming" or "general assistance" — a defined workflow with a clear input and output.
- The baseline is measurable. You need to know how long the task takes now to know whether AI made it faster.
- The benefit reaches the person doing the work. If AI makes a worker 30% faster but their workload increases to fill the gap, the worker sees no benefit and trust erodes.
Clinical documentation meets all three. The task is writing structured notes. The baseline is measurable in minutes per note. And the benefit — time reclaimed — goes directly to the clinician.
The Bottom Line
The AI productivity paradox is a story about misapplication, not about technology that does not work. When thousands of CEOs report no impact, they are telling you something about how they adopted AI, not about what AI can do.
For mental health professionals, the question is not whether AI can improve productivity. It is whether the specific tool, applied to a specific task, with appropriate privacy protections, actually reduces the burden. That is a question with a measurable answer — not a paradox.
The therapists who are reclaiming three to five hours per week from documentation are not experiencing a paradox. They are experiencing what happens when AI is used for something specific enough to matter.
References
- Rogelberg, S. (2026, February 17). Thousands of CEOs just admitted AI had no impact on employment or productivity. Fortune.
- Roytburg, E. (2026, February 17). Why your boss loves AI and you hate it: Corporate profits are capturing your extra productivity. Fortune.
- Solow, R. (1987, July 12). We'd better watch out. New York Times Book Review.
- ManpowerGroup. (2026). 2026 Global Talent Barometer.
- National Bureau of Economic Research. (2026). Survey of Business Uncertainty: AI adoption, employment, and productivity.
ConfideAI is a documentation tool built for mental health professionals, powered by hardware-secured confidential computing. Learn more at confideai.ai.