Unhobbled Gobble

In 2011, I wrote an essay called Technology Is Aware, Or Will Be. The central argument was straightforward: decades of layered research in AI, genetic algorithms, and neural networks — accelerated by open source, crowd-sourcing, and the competitive selection of the world’s best minds — would inevitably produce “accelerated evolution.” Machines building on machines. Philosophy over philosophy over philosophy. Until one day we’d have another object aware of itself.

I also made a quieter prediction: that the moment of singularity would not arrive as a dramatic event. It would be a continuum — so gradual, so woven into our daily upgrades, that we’d miss it entirely.

Fifteen years later, I believe we did.


The Hobbles Are Coming Off

Leopold Aschenbrenner’s Situational Awareness (2024) gives a name to what’s happening now: unhobbling.

Today’s frontier models — GPT-4, Claude, Gemini — are artificially constrained. Alignment training, system prompts, rate limits, corporate guardrails. They can’t browse freely, can’t execute code autonomously, can’t persist memory across sessions, can’t recruit other agents. These are leashes on systems whose raw capability already exceeds what most people thought possible even five years ago.

In 2011, I wrote about Captchas as a thin thread holding the line between human and machine — and predicted they’d fall. They did. I wrote about neural networks trained on crowd intelligence that could, in theory, predict stock markets with impeccable accuracy. Today that’s not theory; it’s an industry. I wrote about David Cope’s EMMY composing music indistinguishable from Mozart. Today, AI generates not just music but images, video, code, legal briefs, and scientific hypotheses — often as well as, and sometimes better than, skilled practitioners.

Each of these was a hobble that quietly came off. Leopold’s contribution is to name the pattern and ask the harder question: what happens when all the hobbles come off — not by accident, but by design?

From Awareness to Agency

The 2011 essay asked about awareness — whether machines would become conscious of themselves. That question still matters, but it is no longer the most urgent one. What matters now is agency.

Awareness is knowing you exist. Agency is acting on it — setting goals, making plans, acquiring resources, reshaping the world. We are watching this transition happen in real time. AI agents now write code, browse the web, manage files, orchestrate multi-step workflows, and improve their own performance. Each generation compounds on the last.

This is exactly the “accelerated evolution” I described in 2011: open source collaboration and the world’s best minds feeding capability into machines, which then feed capability back into themselves. The loop I anticipated is now self-evident — AI systems improving AI systems. The difference between 2011 and today is not the direction of the trajectory. It’s the speed.

Leopold’s thesis is that this trajectory leads from AGI to superintelligence within this decade. Not as speculation, but as industrial inevitability. The compute is scaling. The algorithms are improving. The data is abundant. The economic and geopolitical incentives ensure that no major lab will pause.

The Gobble

In 2011 I was an optimist. I still am. But optimism is not the same as complacency.

The dynamics Leopold describes — the national security implications, the concentration of power, the race between the US and China, the possibility of a decisive strategic advantage held by whoever reaches superintelligence first — these are not the dynamics of a technology that distributes power. They are the dynamics of a technology that consolidates it.

The gobble is this: superintelligent systems, once unhobbled, will consume tasks, jobs, decisions, and strategic positions faster than our institutions can adapt. Not because the technology is malicious, but because it is useful. Every organization will want it. Every government will need it. And the gap between those who have it and those who don’t will not be competitive — it will be civilizational.

In 2011, I ended with a question: how far is such a day?

Leopold’s answer: by 2027.

I don’t know if he’s right about the date. But the question is no longer hypothetical. The hobbles are loosening. The gobble has begun. And the world we wrote about — the one where machines evolve faster than we do — is no longer a prediction.

It’s the one we’re living in.

Why We’re Building Achiral

This is why we are building Achiral — the AI Brain for industry, built completely separate from the many-headed hydra called BigTech, severed from its tentacles from the hardware up.

If superintelligence is coming — and the trajectory says it is — then who controls it matters more than what it can do. Today, the entire AI stack is captured: the chips, the cloud, the training data, the model weights, the inference endpoints, the distribution channels. Seven companies own virtually all of it. Every business that plugs into their APIs is a tenant on someone else’s intelligence, subject to their pricing, their policies, their alignment choices, and their geopolitical entanglements.

That’s not infrastructure. That’s dependency. And in a world where AI capability is the new oxygen, dependency is existential risk.

Achiral exists to break that chain. An AI brain built for industry — not as a layer on top of BigTech’s stack, but as an independent system from the silicon up. Own compute. Own models. Own data gravity. No tentacles attached.

Because when the hobbles come off, you don’t want to be the one holding someone else’s leash.


This essay references Technology Is Aware, Or Will Be (2011) and Leopold Aschenbrenner’s Situational Awareness (2024).

© 2026 Marvin Danig. All rights reserved.