You're not ready for what's coming
Intelligence is trending toward free. Your capacity to use it isn't.
In 1970, Alvin Toffler published a book called Future Shock. His thesis was simple and devastating: the acceleration of change would eventually outpace the human capacity to adapt. Not because the changes themselves were harmful, but because the rate of change would become the problem. Technology would arrive faster than we could absorb it. Institutions would shift faster than our ability to grieve the old ones. And the people caught in the middle, the capable, competent professionals who’d built entire careers on being the smartest person in the room, would suffer most. Not because they couldn’t learn. Because they were already running at capacity.
We’ve navigated transformation before. In 1900, roughly 40% of the American workforce farmed for a living. Today, that number is under 2%. The jobs that replaced farming (software engineer, UX designer, content strategist) were unimaginable to the people holding ploughs. Entire categories of work vanished. Entirely new ones emerged. Humanity adapted. But the Industrial Revolution unfolded over roughly 80 years. Families had generations to shift. Children grew up learning skills their parents never needed, and the transition still caused enormous upheaval.
Demis Hassabis, CEO of Google DeepMind and a Nobel laureate, says what’s coming will be “10 times bigger than the Industrial Revolution, and maybe 10 times faster.” Do the maths. That’s not 80 years of adjustment compressed into a generation. It’s compressed into a decade.
But speed isn’t the only thing that’s different. The economics have shifted underneath.
We used to pay lawyers £500 an hour because legal reasoning was scarce. We used to pay accountants, consultants, and analysts handsomely because the ability to synthesise complex information, spot patterns, and make sound judgments was rare enough to command a premium. Scarcity justified the price. And for decades, that scarcity protected entire professional classes; if you could think clearly about hard problems, the market rewarded you for it.
That scarcity is evaporating. The cost of intelligence is trending toward zero. A reasoning engine that passes the bar exam, writes competent code, and synthesises research across disciplines now lives in your pocket. Not in a decade. Now. The same capability that once required a team of specialists and a six-figure budget is available to anyone with a browser and a question.
This changes everything. And not in the way most people fear. When intelligence becomes abundant, the world doesn’t collapse. It opens. The person with a problem and an idea can now build solutions that used to require a department. The professional who learns to work with these tools doesn’t lose leverage; they gain a kind of creative power that has never existed before in human history. You can construct almost any configuration of reality you want. But you have to start learning today.
A senior director I know was pulled into a meeting last month. The agenda: integrate AI into your team’s workflow by the end of the quarter. No additional headcount. No reduced deliverables. No training budget. Just a mandate and a deadline, layered on top of the 47 hours a week she was already working. She nodded, took notes, walked back to her desk, and opened a browser tab she wouldn’t get to for 3 days.
She’s not behind. She’s overloaded. And the overload is the point no one’s talking about.
Every conversation about AI right now is about what happens to the work. Which tasks get automated? Which roles get eliminated? Which industries fall first? The analysts are modelling displacement percentages. The thought leaders are posting frameworks for “AI-proofing your career.” Jensen Huang, CEO of NVIDIA, crystallised the anxiety into a single line:
“You’re not going to lose your job to an AI, but you’re going to lose your job to someone who uses AI.”
He’s probably right. But he’s answering the wrong question.
The question nobody’s asking is this: what happens to a human being who was already running a body budget deficit, already making 300+ decisions a day, already sleeping 6 hours instead of 8, when you add “learn an entirely new category of tools and rethink your entire workflow” to the pile?
The neuroscience is unambiguous. Lisa Feldman Barrett’s research on the body budget shows that every cognitive demand is a metabolic withdrawal. Every new tool to learn, every process to rethink, every meeting about “AI strategy” is a debit against a balance that was already overdrawn. Robert Sapolsky’s work on chronic stress demonstrates that sustained cognitive overload physically remodels the brain, shrinking the prefrontal cortex where strategic thinking lives, and enlarging the amygdala where reactivity lives. Matthew Walker’s sleep research shows that the prefrontal cortex is the first casualty of sleep deprivation. The person making strategic decisions about AI adoption on 6 hours of sleep is, measurably, a different decision-maker than the person making those same decisions on 8.
A recent study by METR put this in sharp relief. In a randomised trial, experienced developers using AI tools were 19% slower than those working without them. But they predicted AI would speed them up by 24% and, even after the study, still believed it had. When you’re running at capacity, you can’t even accurately assess what’s helping you.
The disruption isn’t coming for your job first. It’s coming for your capacity to respond to the disruption.
The bottleneck is never information. It’s never access to tools. It’s never intelligence. In every period of profound change, the bottleneck is the same: the biological and environmental infrastructure underneath.
Microsoft’s 2025 Work Trend Index found that 80% of the global workforce says they lack the time or energy to do their work. Employees are interrupted every 2 minutes on average. Gallup’s latest data shows global employee engagement has fallen to 21%, the first decline since the pandemic. Manager engagement dropped even further. And this is the baseline before you layer AI adoption on top.
The struggle has nothing to do with falling behind in the technology. Your body budget was overdrawn before the mandate arrived. The 3 pm fog isn’t laziness. The Sunday anxiety isn’t weakness. The growing sense that you can’t absorb one more thing isn’t a character flaw. It’s a system running on fumes being asked to sprint.
And the mandate itself is hollow. Harmonic Security’s analysis of 22 million enterprise AI prompts found that only 40% of companies have purchased official AI subscriptions, while employees at over 90% of organisations are already using AI tools through personal accounts that IT never approved. The directive is “use AI.” The infrastructure to support it doesn’t exist. McKinsey confirms the gap: 78% of organisations use AI in at least one function, but only 6% have achieved business transformation. You’re being told to run a race while the organisation is still lacing its shoes.
Even where tools exist, the training doesn’t. EY’s 2025 survey of 15,000 employees found that 88% use AI at work, but most never get beyond search and basic summarisation. Only 5% use it in ways that transform their work. Only 12% say they’ve received sufficient training. People are using the most powerful reasoning tools ever built the same way they used Google in 2004: typing a question and hoping for a good answer. The gap between what these tools can do and what people have been taught to do with them is where 40% of the potential value disappears.
Dario Amodei, CEO of Anthropic, put the timeline plainly:
“The pace of progress in AI is much faster than for previous technological revolutions. It is hard for people to adapt to this pace of change.”
He warned that AI could displace 50% of entry-level white-collar jobs and spike unemployment to 10-20% within 1 to 5 years. The IMF estimates that 60% of jobs in advanced economies will be affected. Workers with bachelor’s degrees are more than 5 times as exposed as those with only a high school education. For the first time in the history of automation, it’s the educated, experienced, well-compensated professionals who face the greatest disruption.
And those professionals are the ones already running at capacity.
You don’t have a skills gap. You have a capacity crisis.
Epictetus drew a line that has held for 2,000 years:
“Make the best use of what is in your power, and take the rest as it happens.”
The macro disruption, Amodei’s timeline, the IMF projections, the reverse skill bias: none of that is in your power. What is in your power is the infrastructure underneath your response.
This is the part where most advice fails. “Upskill.” “Take a course.” “Learn to prompt.” As if the problem is a knowledge deficit and the solution is more information poured into a system already drowning in it. You don’t need more input. You need the capacity to process what’s already arriving.
The prescription is environmental, not educational.
It starts with the same principle this newsletter has returned to again and again: protect the window. Not for email, not for admin, not for someone else’s emergency. For building. An hour a day: not consuming content about AI, but working with it. Giving it a real problem from your actual work and learning what it can do with your context, your judgment, and your domain expertise applied as input. The 5% of employees who use AI in ways that transform their work didn’t get there by reading articles about AI. They got there by doing the slow, unglamorous work of learning how to think alongside a new kind of tool.
Capacity architecture. Not another course on your to-do list, but a deliberate redesign of how you spend the hours you already have. The same way you’d protect deep work from Slack notifications, you protect learning from the noise of adoption theatre: the town halls, the strategy decks, the mandatory webinars that teach you what a large language model is but never how to use one for the work sitting on your desk right now.
Epictetus put it simply:
“If you wish to be a writer, write. If you wish to be a reader, read. But if you spend your time reading when you claim you wish to write, you are deceiving yourself.”
The hours lost to shallow adoption (search-bar prompting, copy-paste summaries, tools running at 10% of their capacity) don’t come back.
The transformation is from consumer to architect. From someone who uses AI the way they used Google to someone who builds with it. From someone the disruption is happening to, to someone building the infrastructure to navigate it.
Toffler was writing in 1970. He’d never seen a large language model, never watched a machine pass a bar exam, never imagined a world where the most powerful reasoning tools in human history would be free to anyone with a phone. But he saw the shape of the problem before any of us lived it: the acceleration outpaces the infrastructure. Not the technology. The human underneath.
The senior director in the meeting hasn’t opened that browser tab yet. She will. Not because she found more hours, but because she stopped waiting for conditions to improve and started building.
Seneca, writing from exile with nothing but his mind and his will, left us a line worth carrying:
“Begin at once to live, and count each separate day as a separate life.”
Not next quarter. Not when the mandate comes with a training budget. Not when things calm down.
The tools are waiting. The window is yours.
Begin.
Start here. The largest AI companies are giving away the foundation for free: Anthropic AI Fluency Framework | OpenAI ChatGPT Basics



“The tools are waiting, the window is yours”
Nailed it Dihan. Like learning to drive, you don’t need to build the engine; you just need to choose the right car and handle it. Likewise with AI, you have to master the tools that serve the target rather than getting stuck 'under the hood' of the math. Great perspective.
This resonates... not because the future is “shocking,” but because it exposes how unprepared most of our structures are for distributed intelligence.
What strikes me isn’t the speed of model improvement. It’s the widening gap between capability and coordination.
We’ve spent decades optimizing for centralized oversight, static workflows, and human-in-the-loop checkpoints. That works when systems are tools. It breaks when systems begin to reason, decompose tasks, and collaborate across boundaries.
The real question isn’t whether acceleration is coming. It’s whether we’re designing architectures that can remain legible under that acceleration.
Autonomy without boundaries becomes instability.
Control without adaptability becomes irrelevance.
If intelligence is becoming distributed, then governance must become structural: embedded in routing layers, capability discovery, feedback loops, and authority models. Not bolted on afterward.
The future won’t be decided by who builds the most powerful model.
It will be shaped by who architects coordination systems that can evolve without collapsing into noise.
That feels like the work in front of us.
Let's build stronger, let's build together.