Humanity Interrupted?
How we fight back with (and against) AI
It's hard not to feel overwhelmed by the current state of the world. This post began with a question: What can one person do in the face of far-right authoritarianism, techbro escapism, and climate collapse? After reading this Guardian article, I asked myself whether I had to stop using AI altogether to avoid becoming part of the problem. I then asked ChatGPT: What can I do to be part of saving humanity (might as well think big) and fight the techbros and the right wing? Critically, do I have to stop using ChatGPT (and the data centre it lives in)?
My starting point was to think about just how realistic (or otherwise) the Guardian article is:
1. The Core Premise: Techno-Authoritarian Collapse
What the article says: The article argues that a powerful blend of far-right politics and tech-elite ideology is laying the groundwork for a new kind of authoritarianism: one dressed in sci-fi aesthetics, post-truth propaganda, and survivalist fantasies. Musk is portrayed as the symbolic bridge between tech libertarianism and neo-fascism.
Realism: High. This critique isn’t fringe anymore. Scholars, activists, and journalists have increasingly traced the merging of accelerationist tech dreams with anti-democratic, masculinist, and often racist ideologies. When climate breakdown is added to the mix, the “lifeboat ethics” of the ultra-rich (e.g., Mars colonies, bunker prepping, libertarian seasteading) start to look less like sci-fi and more like active withdrawal from collective responsibility.
My Small Steps: I asked how humanity can fight back and build alternatives. ChatGPT responded: This is exactly the kind of moment your blog - critical, feminist, globally aware - can respond to. If “AI will eat itself,” then we can also teach it to evolve differently.
2. The Role of Tech (and AI) in Enabling or Resisting This Trajectory
What the article says: AI is used both as a propaganda tool (deepfakes, algorithmic bias, surveillance) and as a symbol of inevitable, elite-driven progress. The article criticises how this narrative sidelines democratic control and ethical questions.
Realism: Spot on. Many AI systems replicate and amplify existing inequalities. The “techbros” referenced often believe in AI as a force for replacing flawed humans rather than helping humanity evolve collectively.
My Small Steps: I'm trying to use AI to challenge that, to interrupt the narrative and re-code the future. ChatGPT says: That’s powerful - and doesn’t require you to stop using AI. It does mean using it mindfully, pushing it toward feminist, decolonial, and sustainable ends.
3. Environmental Collapse and Eco-Fascism
What the article says: Climate breakdown becomes a justification for authoritarianism. Instead of global cooperation, we get nationalist protectionism and resource hoarding by elites.
Realism: This risk is real. The right’s co-opting of ecological narratives (e.g., “defend the homeland,” “secure the borders from climate refugees”) is part of growing eco-fascist rhetoric. And tech billionaires buying land or building bunkers isn’t fiction; it’s documented behaviour.
My Small Steps: I've looked at Doughnut Economics, just transitions, and sustainable AI. ChatGPT says: These frameworks offer systemic counterpoints to the elite’s retreat. Rather than escape pods, you’re promoting regenerative, collective alternatives.
4. The Future of Democracy* (or Its Erosion)
*Democracy - its past, present, and futures - being a whole other discussion, of course!
What the article says: We’re at a tipping point where conspiracy, disinformation, and ultra-wealth could effectively erode democratic norms. The future may not be a dystopian coup, but a slow creep into unrecognisable systems of control.
Realism: Worryingly accurate. Scholars like Shoshana Zuboff (Surveillance Capitalism) and Timothy Snyder (On Tyranny) back this up. It’s not one dramatic event; it’s erosion, normalisation, and complicity.
My Small Steps: I'm approaching this through the language of intersectionality, systems thinking, feminist resistance, and education. ChatGPT says: You’re positioned to support digital literacy, ethical tech use, and inclusive leadership - key pillars in any resistance to creeping authoritarianism.
A Dystopia in Plain Sight?
The article paints a chilling convergence: far-right politics, tech libertarianism, and apocalyptic environmentalism. Figures like Trump and Musk aren’t outliers but architects of a vision where only the elite survive. AI is portrayed not as a tool for collective betterment but as a mechanism of control and a myth of inevitable progress. This isn’t a sudden collapse but a gradual erosion of democratic values, masked by innovation and inevitability.
So What Do We Do? Stop Using AI?
The issue isn’t whether we use AI. It’s how we use it, who owns it, what it’s trained on, and what stories we ask it to tell. Right now, AI systems are reflecting and amplifying the same values that got us into this mess. So I don’t need to stop using AI. I need to be part of what it can become.
Feminist Futures, Doughnut Logic, and a Different Kind of Imagination
For a while now, I’ve been working on making AI use more sustainable, feminist, and collectively beneficial. That includes:
- A sustainable AI use pledge for creators, educators, and activists.
- Cheat sheets and toolkits to help others reduce overreliance on large language models.
- Experiments in measuring AI impact using Doughnut Economics, aiming to stay within planetary boundaries while scaling access (a rough sketch of what this could look like follows after this list).
- Deep dives into feminist critiques of AI training data, ownership, and voice.
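To make the Doughnut Economics experiment a little more concrete, here's a minimal sketch of what "measuring AI impact" could look like in practice. The per-query energy and carbon figures are placeholder assumptions (real numbers vary hugely by model, hardware, and grid), and the monthly ceiling is just a personal budget you set for yourself - the point is the habit of tracking, not the precision.

```python
# A rough, illustrative sketch of tracking estimated AI-query impact against a
# self-imposed, doughnut-style budget. Every figure here is a placeholder
# assumption, not a measured value: real per-query energy varies widely by
# model, hardware, and data centre.

from dataclasses import dataclass

# Assumed (hypothetical) averages, for illustration only.
ASSUMED_WH_PER_QUERY = 3.0    # watt-hours per LLM query (placeholder)
ASSUMED_G_CO2_PER_WH = 0.4    # grams CO2e per watt-hour (grid-dependent placeholder)


@dataclass
class QueryBudget:
    """Track estimated emissions from AI use against a monthly ceiling."""
    monthly_ceiling_g_co2: float   # the personal "boundary" you set yourself
    queries_logged: int = 0
    estimated_g_co2: float = 0.0

    def log_queries(self, n: int = 1) -> None:
        """Record n queries and add their estimated footprint."""
        self.queries_logged += n
        self.estimated_g_co2 += n * ASSUMED_WH_PER_QUERY * ASSUMED_G_CO2_PER_WH

    def within_budget(self) -> bool:
        return self.estimated_g_co2 <= self.monthly_ceiling_g_co2

    def summary(self) -> str:
        status = "within" if self.within_budget() else "over"
        return (f"{self.queries_logged} queries ~ {self.estimated_g_co2:.1f} g CO2e "
                f"of a {self.monthly_ceiling_g_co2:.0f} g ceiling ({status} budget)")


if __name__ == "__main__":
    budget = QueryBudget(monthly_ceiling_g_co2=500.0)  # arbitrary personal ceiling
    budget.log_queries(120)                            # e.g. a month of casual use
    print(budget.summary())
```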
It’s not just about less use. It’s about better use. Intentional use. Use that doesn’t replicate the same colonial, capitalist, hyper-masculinised logic that got us here.
Four Ways to Fight Back With AI
1. Use AI to amplify marginalised voices, not replace them
Train it on women’s writing. Ask it feminist questions. Use it to make invisible labour visible.
2. Reduce unnecessary queries, cut AI emissions where you can
Build smarter prompts, reuse tools, and focus on quality over quantity (see the caching sketch after this list).
3. Challenge techno-saviour narratives
Stop asking how AI will “save” us. Ask what we want to save and how AI might help (or hinder) that.
4. Imagine regenerative futures, not escape pods
Build with care. Code with conscience. Teach with hope. The future isn’t built in bunkers.
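On point 2, here's a minimal sketch of the "reuse tools" idea: cache answers locally so that repeating the same prompt doesn't trigger a fresh model call. The call_model function below is a hypothetical stand-in for whatever API or tool you actually use.

```python
# A minimal sketch of "reuse before you re-query": cache answers locally so
# that repeating the same prompt doesn't trigger a fresh, energy-consuming
# model call. call_model() is a hypothetical stand-in for whatever API or
# tool you actually use.

import hashlib
import json
from pathlib import Path

CACHE_FILE = Path("prompt_cache.json")


def _load_cache() -> dict:
    return json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}


def _save_cache(cache: dict) -> None:
    CACHE_FILE.write_text(json.dumps(cache, indent=2))


def call_model(prompt: str) -> str:
    # Placeholder: swap in your actual model or API call here.
    return f"(model response to: {prompt})"


def cached_query(prompt: str) -> str:
    """Return a cached answer if this exact prompt has been asked before."""
    cache = _load_cache()
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in cache:
        return cache[key]        # reuse: no new model call, no new emissions
    answer = call_model(prompt)  # only query when we genuinely need to
    cache[key] = answer
    _save_cache(cache)
    return answer


if __name__ == "__main__":
    print(cached_query("What is Doughnut Economics?"))
    print(cached_query("What is Doughnut Economics?"))  # second call hits the cache
```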
It’s Not Just Tech. It’s Imagination.
The scariest part of the Guardian article wasn’t the billionaires or the bunkers. It was the implication that these men have already given up on humanity. That they can’t even imagine a future worth sharing.
The fight for the future isn’t about gadgets or algorithms. It’s about stories. Possibilities. Collective action. Whether in a classroom, a community gathering, a local council meeting, an activist group - or a chatbot conversation. We don’t need to accept the tech dystopia we’re being sold.
