If Anyone Builds It, Everyone Dies: Maths, Gut Feel, and Why We’re All a Bit F***ed
Ambivalence is not an option
I’ve just been reading this Guardian review of If Anyone Builds It, Everyone Dies: How AI Could Kill Us All. The title isn’t exactly subtle and the book takes the strongest possible stance: if we keep pushing ahead with superintelligent AI, humanity is stuffed.
But how does that argument hold up? I asked ChatGPT to help me give it a poke:
The External Links: LessWrong and CAIS
Two sites loom large in this world:
- LessWrong: a rationalist forum where Eliezer Yudkowsky and others dissected AI alignment and existential risk long before ChatGPT was a twinkle in Sam Altman’s eye. It’s a place of deep thinking and sometimes apocalyptic vibes. Brilliant, but echo-chamberish.
- CAIS / AIStatement.com: the “Statement on AI Risk” signed by hundreds of big-name scientists warning that AI extinction risk should be treated like pandemics or nuclear war. Not fringe, but not a scientific paper either; more a rallying cry than a data-driven proof.
Both lend weight to the book’s stance: LessWrong provides the intellectual architecture; CAIS shows it’s not just a lone crank yelling into the void.
Doing the Numbers
Here’s the simple maths* (bear with me):
*I don't do simple or hard maths these days, so these were Chat's numbers...
- Suppose the cost of slowing down AI development is huge: say $100 trillion in lost innovation and economic growth.
- Suppose the loss from extinction is $80 quadrillion (using a rough $10 million per person for 8 billion people).
...yes, you read that right: $10 million per human life. A neat round number policymakers use when running cost-benefit analyses. Which to my mind is even odder when you look at how lives are actually valued in practice: women denied autonomy over their own bodies, girls barred from education, whole populations treated as collateral damage in wars, supply chains, or climate collapse. It turns out the price of a life only stretches to eight digits when it’s an abstract figure on a government spreadsheet, and those abstractions are always imagined in that government’s own image...
Back to the 'abstract' maths. The “break-even” extinction probability - the point above which slowing down is worth it - is:
$100T / $80Q = 0.125% (about 1 in 800)
That’s tiny - or so ChatGPT informs me. Even if you think the chance that AI wipes us out this century is only one in 500, the case for 'strong brakes' is already there.
Surveys of AI researchers report median extinction risk estimates of around 5%. On this arithmetic alone, the book’s “slam the brakes” position is hard to dismiss.
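For anyone who wants to poke the sums themselves, here's a minimal Python sketch of that break-even calculation. The dollar figures are the illustrative guesses from above (not real data), and the 5% is the rough survey median just mentioned:

```python
# Break-even extinction probability: the point where the expected loss
# from extinction outweighs the cost of slowing AI development.
# All figures are the illustrative guesses from the post, not real data.

COST_OF_SLOWING = 100e12           # $100 trillion in lost innovation and growth
VALUE_PER_LIFE = 10e6              # $10 million per person (the spreadsheet number)
POPULATION = 8e9                   # 8 billion people
EXTINCTION_LOSS = VALUE_PER_LIFE * POPULATION  # $80 quadrillion

# Slowing down is "worth it" once p * EXTINCTION_LOSS > COST_OF_SLOWING
break_even_p = COST_OF_SLOWING / EXTINCTION_LOSS

print(f"Break-even probability: {break_even_p:.4%}")     # 0.1250%
print(f"Roughly 1 in {round(1 / break_even_p)}")          # 1 in 800

# Compare with the ~5% median from researcher surveys:
survey_p = 0.05
print(f"Survey median is {survey_p / break_even_p:.0f}x the break-even")  # 40x
```

On those numbers, the surveyed researchers' median sits about forty times above the threshold where braking becomes the rational choice.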
Where the Argument Wobbles
- Probability creep. The authors lean toward inevitability: if superintelligence arrives, doom follows. But inevitability is a leap. Other researchers think risks are serious but manageable with governance and safety research.
- “Alienness” of AI. The book treats quirks of model behaviour (like odd punctuation) as proof of incomprehensibility. That may be over-reading: sometimes the strangeness is more like a language quirk than alien cognition.
- Policy binaries. The book’s framing is “halt or die.” Reality is messier than that. Stronger governance as a start - compute controls, liability laws, capability evaluations, international monitoring and the like - can cut extinction probabilities drastically without a full stop.
My Alternative Equation
If all the above felt too abstract, here’s the gut-maths version:
Speed of AI progress in the last 5 years
× Right-wingness + 'arseholeness' of tech bros + narcissistic world leaders
× Dominance of wealth as the ultimate life goal
= We’re all a bit f****d.
Let's actually put numbers on that equation: say 8/10 for speed, 7/10 for arseholeness, 9/10 for money-lust, and we end up at an extinction risk of around 5% this century. Which, once more, is way above the 0.125% tipping point where “slow down” becomes the rational move.
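And because everything deserves a sketch, here's the gut-maths in Python. Fair warning: the divide-by-ten at the end is a fudge factor I've bolted on purely to make a product of vibes land near 5%. Nothing rigorous about it, which is rather the point:

```python
# The gut-maths equation, put into numbers. The inputs are vibes scored
# out of 10, and the final /10 is an entirely made-up scaling fudge to
# turn a product of vibes into a percentage. In keeping with the spirit.

speed = 8 / 10          # pace of AI progress over the last 5 years
arseholeness = 7 / 10   # right-wingness + tech-bro arseholeness + narcissist leaders
money_lust = 9 / 10     # dominance of wealth as the ultimate life goal

gut_risk = (speed * arseholeness * money_lust) / 10  # ~0.0504

print(f"Gut-feel extinction risk this century: {gut_risk:.1%}")  # ~5.0%

BREAK_EVEN = 0.00125  # the 1-in-800 tipping point from earlier
print(f"That's {gut_risk / BREAK_EVEN:.0f}x the 'slow down' threshold")  # ~40x
```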
A Feminist Futures Interruption
- Who sets the probabilities? Risk estimates come mostly from elite researchers in the Global North. They’re asked “what’s the chance of extinction?” while communities already living under AI’s harms - workers pushed into precarity, students subjected to biased surveillance, women silenced by online abuse, to name a few - aren’t asked at all. For them, AI risk isn’t hypothetical or future-tense, it’s daily.
- Whose futures are treated as expendable? When “economic cost of slowing down” is weighed against “loss from extinction,” the calculation assumes that GDP growth matters equally to everyone. But whose GDP? Which futures? GDP is still wheeled out as the justification metric, despite being the most useless indicator of human flourishing. It measures output, not justice; pollution, not care; extraction, not joy. A brake on AI arms races might look like “lost trillions” to Silicon Valley, but to communities at the sharp end of AI-powered policing or authoritarian monitoring, it could look like breathing space, even survival.
- Who benefits from urgency? The rhetoric of “if we don’t build it, someone else will” mirrors colonial logics: a scramble for resources, justified by fear of the other. Feminist futures work asks us to interrupt that scramble and imagine governance not as a race to own, but as a responsibility to share, slow, and sustain.
In short: yes, extinction risk matters. But so do the inequities and everyday harms that get brushed aside when “everyone dies” becomes the headline.
The Real Problem Isn’t AI
AI probably isn’t the problem anyway. The problem is who controls it, who profits from it, and the world we’ve already built around it. With this tech in the hands of extractive billionaires, authoritarian leaders, and GDP-worshipping policymakers, of course it’s going to trend towards dystopia - that is the inevitability.
That’s why people with social justice in their bones - the anti-fascists, the anti-authoritarians, the anti-billionaire-capitalists, the feminists, the just-not-an-arseholeists - need to be in the room, in the code, in the critique. We need to shape AI, challenge AI, and use AI as a crowbar to prise open something better.
Not because machines will inevitably kill us, but because humans already are - through the systems they defend and the futures they make expendable. The task isn’t to bow before AI or to fear it as fate. The task is to kick the world-as-it-is up its backside and demand change.
