GOD-TIER AI? Why there’s no easy exit from the human condition

Many working in technology are entranced by a story of a god-tier shift that is soon to come. The story is the “fast takeoff” for AI, often involving an “intelligence explosion.” There will be a singular moment, a cliff-edge, when a machine mind, having achieved critical capacities for technical design, begins to implement an improved version of itself. In a short time, perhaps mere hours, it will soar past human control, becoming a nearly omnipotent force, a deus ex machina for which we are, at best, irrelevant scenery.

This is a clean narrative. It is dramatic. It has the terrifying, satisfying shape of an apocalypse.

It is also a pseudo-messianic myth resting on a mistaken understanding of what intelligence is, what technology is, and what the world is.

The fantasy of a runaway supermind achieving escape velocity collides with the stubborn, physical, and institutional realities of our lives. This narrative mistakes a capacity for a scalar, ignoring the fact that intelligence is not a context-free number but a situated process, deeply entangled with physical constraints.

The fixation on an instantaneous leap reveals a particular historical amnesia. We are told this new tool will arrive as a singular event. The historical record suggests otherwise.

Major innovations, the ones that truly resculpted civilization, were never events. They were slow, messy, multi-decade diffusions. The printing press did not transform the propagation of knowledge overnight; its revolutionary power lay in gradually enabling the reliable reproduction and spread of information, which in turn allowed knowledge to compound. The steam engine's impact likewise unfolded over generations, trailing its invention by decades.

With each novel technology, we have seen a similar cycle of panic: a flare of moral alarm, a set of dire predictions, and then, inevitably, the slow, grinding work of normalization. The world adapts. The apocalypse is deferred. The technology is integrated. There is little reason to believe this time is different, however much the myth insists upon it.

The fantasy of a fast takeoff is conspicuously neat. It is a narrative free of friction, of thermodynamics, of the intractable mess of material existence. Reality, in contrast, has all of these things. A disembodied mind cannot simply will its own improved implementation into being.

Any improvement, recursive or otherwise, encounters physical limits. Computation is bounded by the speed of light. The energy required to train and run these systems is already staggering. Further gains will require hardware that depends on factories, rare minerals, and global supply chains, none of which can be summoned by code alone. Even if an AI can design a better chip, that design still has to be fabricated. The feedback loop between software insight and physical hardware is constrained by the banal, time-consuming realities of engineering, manufacturing, and logistics.

The intellectual constraints are just as rigid. The notion of an “intelligence explosion” assumes that all problems yield to better reasoning. This is an error. Many hard problems are computationally intractable, and provably so: finding the shortest route through seventy cities by brute force means checking more possibilities than there are atoms in the observable universe. Such problems cannot be solved by superior reasoning; they can only be approximated, within the limits of energy and time.

Ironically, we already have a system of recursive self-improvement. It is called civilization, and it runs on the cooperative intelligence of humans. Its gains over the centuries have been steady and strikingly gradual, not explosive. Each new advance requires more effort, not less; once the low-hanging fruit is harvested, diminishing returns set in. There is no evidence that AI, however capable, is exempt from this constraint.

Central to the concept of a fast takeoff is the erroneous belief that intelligence is a singular, unified thing. Recent AI progress provides contrary evidence. We have not built a singular intelligence; we have built specific, potent tools. AlphaGo achieved superhuman performance at Go, a spectacular leap within its domain, yet that mastery did not generalize to medical research. Large language models display great linguistic ability, but they also “hallucinate,” and pushing from one generation to the next requires not a sudden spark of insight but an enormous effort of data and training.

The likely future is not a monolithic supermind but an ecosystem of AI services: a network of specialized systems for language, vision, physics, and design. AI will remain a set of tools, managed and combined by human operators.

To frame AI development as a catastrophe that arrives all at once swaps a complex, multi-decade social challenge for a simple, cinematic horror story. It allows us to indulge in the fantasy of an impending technological judgment rather than engage with the difficult path of development. The real work will be gradual: adapting institutions, shifting economies, and managing tools. The god-machine is not coming. The world will remain, as ever, a complex, physical, and stubbornly human affair.
