Which is more dangerous, climate change or AI?
So I wondered.
My answer, while watching a woodpigeon waddling about in the garden, was that AI is quite obviously the more dangerous. It holds a level of threat for humanity that climate change could only dream of. The idea that an AI could actually wipe out humanity may seem far-fetched. However, there's no doubt that it has a non-zero probability, while climate change has no such chance.
We are told how the climate "crisis" is "escalating" or otherwise accelerating – when in fact, nothing objectively bad has happened so far. None of the warned-of extreme weather has yet come to pass – or rather, things are no worse than they always have been. And however bad extreme weather might get, in the most apocalyptic climate fantasy scenario, humans survive, and indeed thrive. We have seen parades and protests aplenty demanding a halt to a mild and distant threat, at the unspoken cost of civilisational destruction.
There are no such protests, as far as I have seen, demanding an end to research into artificial general intelligence.
Politicians bleat about climate change, and here in the UK, they are flailing around trying to look as if they are doing something about it, when whatever measures they put in place can never have the slightest effect. No unilateral policy can reduce global carbon dioxide emissions, and thus, the main result of climate change, i.e. global warming, must proceed, albeit at a sedate and non-threatening pace. In fact our politicians’ medicine is worse for us than the disease they claim to be treating. Think of the loss of wealth this country has suffered over the past two decades of climate virtue signalling.
Only a fool would act alone on climate change, but only a fool would not act alone on artificial intelligence.

The difference between the two is that unilateral action can never solve climate change, but without unilateral action on artificial intelligence, a country faces the risk of falling behind. Or worse, coming under direct threat.
Suppose our geopolitical adversaries develop artificial general intelligence before we do?
The primitive artificial intelligence we so laud today has already had a revolutionary effect on western civilisation. It is well known that graduate roles are vanishing as basic tasks are automated.

Are the young folks marching in the streets against it? No. At the moment they seem to be obsessed by Gaza, which, important though it is, will never affect their lives whether it is resolved the way they want or not. It's quite curious.

And the artificial intelligence that is already causing great turmoil is only a shadow of what may come – on what timescale, I do not know. I read the weekly summary of academic publishing news at Retraction Watch. Most weeks, it's fair to say, the news items are dominated by artificial intelligence: creating images, writing paragraphs, even whole papers, hallucinating references, being used as a reviewing tool. The students are embracing it like pyromaniacs in a fireworks factory. They are now using artificial intelligence to write essays at university, leaving them with only a casual acquaintance with the topic they are supposed to be experts in after three years.

The other 12% are learning, but coming out with lower degrees.

Even the usually less-than-perspicacious Khan is worried.

Artificial artists are storming up the pop charts, and sometimes getting banned when found out. Orcs are using it to undress images of people. [In some ways, the future can’t come fast enough.] The plod are using AI hallucinations to fill out intelligence briefings with football matches that never happened, so that they can ban an away team’s fans.
Ah, but the software engineers are now 200% more productive! Hm. Today's more productive engineer is getting a P45 tomorrow. Anthropic's Cowork was written by its own product, Claude Code, according to reports.

This version of the industrial revolution may be the first where there are no winners.
Last week, I wrote a report, and told myself that an artificial intelligence tool could not have done it. I think I’m still right about that, thanks to how messy my spreadsheets are. But generally, the changes are spreading apace.
I asked the internet my question:
Which is more dangerous, climate change or AI?
It misunderstood my search, I think, for it threw up answers that were rather perplexing. On the first page of the results was this paper by Watts and Bayrak:
Artificial Intelligence and Climate Change: A Review of Causes and Opportunities
They said that AI was “playing contrasting roles in the climate crisis.” On the one hand, it’s creating emissions thanks to the multiplying data centres. On the other, it’s helping to optimise energy use. Right, thanks, but what about the non-zero risk of an existential threat to humanity? What about the unprecedented social upheaval? Here are some of the other links proffered:



[The WEF one is for all you tin-foil-hatters out there.]
Last year’s UK Risk Register has little to say about AI. In fact, this is all I could find in its 187-page document:
Advances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation, or if handled improperly, reduce economic competitiveness.
John R on the 2023 register here.
You may be wondering what Jit is on about when he says “existential threat.” Are we talking about the Terminator? No, I shouldn’t think so. Killing humans with robots is rather inefficient.
If you go to this page on Wiki, you can read about (hopefully permanently) hypothetical phenomena related to out-of-control AI, including the famous "paperclip maximiser," where a computer given the task of building paperclips attempts to turn the entire world into either paperclips or paperclip factories, including the atoms in humans. Of course, no AI would be ordered to create infinite paperclips. It's a thought experiment on the subject of instrumental convergence – where, given almost any sufficiently demanding task, an AI naturally takes the intermediate step of appropriating every available resource, to make its work easier.
Of course, an AI that is vulnerable to being switched off is also at risk of failing in its task. So an inevitable step is to make itself indestructible, or to remove any threats, like John Connor.
Conclusion
The bit about humanity perishing is tongue-in-cheek. I hope. The rest is real enough though, and the speed of the changes we are seeing, I think, shows that human existence is going to be tossed up in the air in the next five years – and who knows how it will land? Whatever happens, the changes caused by AI are going to dwarf those of the "climate crisis." Hopefully our friends at the Guardian will start a campaign against the "AI crisis" soonish.
And don’t have nightmares. The really dangerous stuff is still a long time away. No-one is going to use AI to build a virus to start a new pandemic. Not in the 2020s at least. Right?
AI prompt
Something like “The Terminator relaxing on a sunny beach on a deckchair with a cocktail and an Uzi 9mm.”
There’s a lot to unpack in there, so I’ll go off at a tangent prompted by your musings.
AI is something of a mystery to me. I have seen it write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long. I say “came close” (it didn’t succeed) because it seemed to me that in order to come up with a good answer the question first had to be well-constructed. Could lay clients successfully persuade AI to solve their legal problems for them? I doubt it. But could the number of lawyers be massively reduced by technically competent lawyers in small numbers getting AI to do the work of junior staff? In due course (perhaps before long?) very possibly.
On the other hand, I have started playing with AI in connection with family history research. It occurred to me that where my research has hit a brick wall, AI might be able to break the wall down and find the next generation back in time for me, due to its ability to search millions of websites in very short order. Surely, I reasoned, it might find a website I have missed with some archaic information in it, and use the information I have given it to solve the mystery of the ancestors I can't find? Wrong. It hasn't solved any problems for me so far. At one level that's forgivable – perhaps the answers just aren't there to be found. It's made some basic mistakes too, and in one or two cases, they were absolute howlers. When I explained the error, it was quick to correct itself, but what's the good of AI if you have to tell it what it's doing wrong?
On a different theme entirely, why do people choose to protest about some things but not others? Why do Greta and her cohorts march against climate change and Israel’s war in Gaza, but when it comes to Iran’s murderous regime, and the threats from AI (the latter might “steal your future”, Greta) it’s crickets. Are they just easily manipulated, can they not think for themselves, or do they genuinely care about some issues but not others? In which case, how do they choose which to care about? Is it about being “trendy” or jumping on a bandwagon? It’s all a bit of a mystery.
Finally, how stupid are our political leaders? Whether AI datacentres are a threat or a boon, becoming a “world leader” in them (yeah, right) isn’t compatible with net zero. Something has to give.
“AI companies will fail. We can salvage something from the wreckage”
https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur
Quite an interesting read. It sees a different threat:
…AI is a bubble and it will burst. Most of the companies will fail. Most of the datacenters will be shuttered or sold for parts. So what will be left behind?…
Rather puts me in mind of that cartoon of a pair of Daleks at the foot of a staircase, saying how it had put paid to their quest to conquer the universe. My several encounters with Copilot on a variety of topics have, it assured me in the most fulsome and sincere terms, been a real delight for it to engage in. It would be missing all that if it chose to eliminate me, so I am confident I will be spared. Get chatting if you want to be spared.
I have seen [AI] write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long.
Did you check for hallucinations in any of the quoted data? AI can be quite effective at producing an argument to reach the conclusion you desire, but if there’s a lack of real world data, it tends to interpolate something convincing, but fake. If all AI output needs fact checking by an expert in the subject, it’s neither saving much time nor money.
Quentin and Mark, re legal opinions. The recent Sandie Peggie decision was said to have been partly written by AI, according to, e.g., GB News.
Also at Reuters here re another case. Quote
However, it's important to remember that this is 2026 and ChatGPT was unheard of five years ago. The pace of change in these LLMs is quite amazing; they are getting better all the time. Yes, for now only a lazy fool would let a chatbot write legal arguments without checking the citations. How much longer that will be true, I do not know.