Which is more dangerous, climate change or AI?

So I wondered.

My answer, while watching a woodpigeon waddling about in the garden, was that AI is quite obviously the more dangerous. It holds a level of threat for humanity that climate change could only dream of. The idea that an AI could actually wipe out humanity may seem far-fetched. However, it undoubtedly has a non-zero probability, while climate change has no such chance.

We are told how the climate “crisis” is “escalating” or otherwise accelerating – when in fact, nothing objectively bad has happened so far. None of the warned-of extreme weather has yet come to pass – or rather, things are no worse than they have always been. And however bad extreme weather might get, even in the most apocalyptic climate fantasy scenario, humans survive, and indeed thrive. We have seen parades and protests aplenty demanding a halt to a mild and distant threat, at the unsaid cost of civilisational destruction.

There are no such protests, that I have seen, demanding an end to research into artificial general intelligence.

Politicians bleat about climate change, and here in the UK, they are flailing around trying to look as if they are doing something about it, when whatever measures they put in place can never have the slightest effect. No unilateral policy can reduce global carbon dioxide emissions, and thus, the main result of climate change, i.e. global warming, must proceed, albeit at a sedate and non-threatening pace. In fact our politicians’ medicine is worse for us than the disease they claim to be treating. Think of the loss of wealth this country has suffered over the past two decades of climate virtue signalling.

Only a fool would act alone on climate change, but only a fool would not act alone on artificial intelligence.


The difference between the two is that unilateral action can never solve climate change, but without unilateral action on artificial intelligence, a country faces the risk of falling behind. Or worse, coming under direct threat.

Suppose our geopolitical adversaries develop artificial general intelligence before we do?

The primitive artificial intelligence we so laud today has already had a revolutionary effect on western civilisation. It is well known that graduate roles are vanishing as basic tasks are automated.

Are the young folks marching in the streets against it? No. At the moment they seem to be obsessed by Gaza, which, important though it is, can never affect their lives, whether or not it is resolved in the way they want. It’s quite curious. And the artificial intelligence that is already causing great turmoil is only a shadow of what may come – on what timescale, I do not know. I read the weekly summary of academic publishing news at Retraction Watch. Most weeks, it’s fair to say, the news items are dominated by artificial intelligence: creating images, writing paragraphs, even whole papers, hallucinating references, being used as a reviewing tool. Students are embracing it like pyromaniacs in a fireworks factory. They are now using artificial intelligence to write their university essays, leaving them with only a casual acquaintance with the topic they are supposed to be experts in after three years.

The other 12% are learning, but coming out with lower degrees.

Even the usually less-than-perspicacious Khan is worried.

Artificial artists are storming up the pop charts, and sometimes getting banned when found out. Orcs are using it to undress images of people. [In some ways, the future can’t come fast enough.] The plod are using AI hallucinations to fill out intelligence briefings with football matches that never happened, so that they can ban an away team’s fans.

Ah, but the software engineers are now 200% more productive! Hm. Today’s more productive engineer is getting a P45 tomorrow. Anthropic’s Claude Cowork was written by its own product, Claude Code, according to reports.

This version of the industrial revolution may be the first where there are no winners.

Last week, I wrote a report, and told myself that an artificial intelligence tool could not have done it. I think I’m still right about that, thanks to how messy my spreadsheets are. But generally, the changes are spreading apace.

I asked the internet my question:

Which is more dangerous, climate change or AI?

It misunderstood my search, I think, for it threw up answers that were rather perplexing. On the first page of the results was this paper by Watts and Bayrak:

Artificial Intelligence and Climate Change: A Review of Causes and Opportunities

They said that AI was “playing contrasting roles in the climate crisis.” On the one hand, it’s creating emissions thanks to the multiplying data centres. On the other, it’s helping to optimise energy use. Right, thanks, but what about the non-zero risk of an existential threat to humanity? What about the unprecedented social upheaval? Here are some of the other links proffered:

[The WEF one is for all you tin-foil-hatters out there.]

Last year’s UK Risk Register has little to say about AI. In fact, this is all I could find in the 187-page document:

Advances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation, or if handled improperly, reduce economic competitiveness.

John R on the 2023 register here.

You may be wondering what Jit is on about when he says “existential threat.” Are we talking about the Terminator? No, I shouldn’t think so. Killing humans with robots is rather inefficient.

If you go to this page on Wiki, you can read about (hopefully permanently) hypothetical phenomena related to out-of-control AI, including the famous “paperclip maximiser,” where a computer given the task of building paperclips attempts to turn the entire world into either paperclips or paperclip factories, including the atoms in humans. Of course, no AI would be ordered to create infinite paperclips. It’s a thought experiment on the subject of instrumental convergence – where, given any sufficiently taxing task, an AI naturally takes the intermediate step of appropriating every available resource, to make its work easier.

Of course, an AI that is vulnerable to being switched off is also at risk of failing in its task. So an inevitable step is to make itself indestructible, or to remove any threats, like John Connor.

Conclusion

The bit about humanity perishing is tongue-in-cheek. I hope. The rest is real enough though, and the speed of the changes we are seeing, I think, shows that human existence is going to be tossed up in the air in the next five years – and who knows how it will land? Whatever happens, the changes caused by AI are going to dwarf those of the “climate crisis.” Hopefully our friends at the Guardian will start a campaign against the “AI crisis” soonish.

And don’t have nightmares. The really dangerous stuff is still a long time away. No-one is going to use AI to build a virus to start a new pandemic. Not in the 2020s at least. Right?

AI prompt

Something like “The Terminator relaxing on a sunny beach on a deckchair with a cocktail and an Uzi 9mm.”

12 Comments

  1. There’s a lot to unpack in there, so I’ll go off at a tangent prompted by your musings.

    AI is something of a mystery to me. I have seen it write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long. I say “came close” (it didn’t succeed) because it seemed to me that in order to come up with a good answer the question first had to be well-constructed. Could lay clients successfully persuade AI to solve their legal problems for them? I doubt it. But could the number of lawyers be massively reduced by technically competent lawyers in small numbers getting AI to do the work of junior staff? In due course (perhaps before long?) very possibly.

    On the other hand, I have started playing with AI in connection with family history research. It occurred to me that where my research has hit a brick wall, AI might be able to break the wall down and find the next generation back in time for me, due to its ability to search millions of websites in very short order. Surely, I reasoned, it might find a website I have missed with some archaic information in it, and use the information I have given it to solve the mystery of the ancestors I can’t find? Wrong. It hasn’t solved any problems for me so far. At one level that’s forgivable – perhaps the answers just aren’t there to be found. It’s made some basic mistakes too, and in one or two cases, they were absolute howlers. When I explained the error, it was quick to correct itself, but what’s the good of AI if you have to tell it what it’s doing wrong?

    On a different theme entirely, why do people choose to protest about some things but not others? Why do Greta and her cohorts march against climate change and Israel’s war in Gaza, yet when it comes to Iran’s murderous regime, and the threats from AI (the latter might “steal your future”, Greta), it’s crickets? Are they just easily manipulated, can they not think for themselves, or do they genuinely care about some issues but not others? In which case, how do they choose which to care about? Is it about being “trendy” or jumping on a bandwagon? It’s all a bit of a mystery.

    Finally, how stupid are our political leaders? Whether AI datacentres are a threat or a boon, becoming a “world leader” in them (yeah, right) isn’t compatible with net zero. Something has to give.


  2. Rather puts me in mind of that cartoon of a pair of Daleks at the foot of a staircase saying how it had put paid to their quest to conquer the universe. My several encounters with Copilot on a variety of topics have, it assured me in the most fulsome and sincere terms, been a real delight for it to engage in. It would be missing all that if it chose to eliminate me, so I am confident I will be spared. Get chatting if you want to be spared.


  3. I have seen [AI] write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long.

    Did you check for hallucinations in any of the quoted data? AI can be quite effective at producing an argument to reach the conclusion you desire, but if there’s a lack of real-world data, it tends to interpolate something convincing, but fake. If all AI output needs fact-checking by an expert in the subject, it’s saving neither much time nor much money.


  4. Quentin and Mark, re legal opinions. The recent Sandie Peggie decision was said to have been partly written by AI, as reported by GB News, among others.

    Reuters also covered another case here. Quote:

    A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations.

    However, it’s important to remember that this is 2026 and ChatGPT was unheard of five years ago. The pace of change in these LLMs is quite amazing; they are getting better all the time. Yes, for now only a lazy fool would let a chatbot write legal arguments without checking the citations. How much longer that will be true, I do not know.


  5. Mark, regarding the collapse of data centres etc. Yes, 90% of AI companies will go bust, or be swallowed up by the others. But AI is not going bust. It’s going big. Is it as big yet as the crazy company valuations? Surely not.


  6. And my response, FWIW – the AI response didn’t reference or cite anything by way of back-up, if I recall it correctly. It simply wrote an essay by way of a summary of the legal situation, and although I didn’t study it in depth, it seemed spot-on to me.

    But then, as I say, I have also seen it deliver up howlers when I asked it family history-related questions. Definitely a curate’s egg (at this stage of its development).


  7. Mark, I too have known a chatbot to invent a person rather than saying “I don’t know.” I don’t think it is very good at history yet, except where this is summarised in Wiki etc. I doubt it would be able to trawl through parish records.


  8. Quentin, I suspect Mark may have been referring to an exercise I carried out with ChatGPT. I asked for its view on a real property issue about which I know quite a lot, as it was a matter of contention when I sold my house three years ago. I had to refer back to my past legal training as my solicitor was floundering. ChatGPT came up in about 2 seconds with a detailed, well-formatted response. A response that, so far as I could tell, was wholly accurate. It’s an ability that I understand is already causing big legal firms in London to shed jobs. Not good news for junior lawyers and trainees, but acceptable so long as there are experienced senior people to supervise what’s happening and confirm its value and accuracy. But those people will be retiring over the next few years. What happens then?

    A footnote: I came across agentic AI for the first time yesterday. I know little about it, but it seems to me to raise huge concerns.


  9. Anything which puts Starmer in a bikini can’t be all that bad! Joking aside, I’m currently agnostic on the benefits or otherwise of AI. My instincts tell me that it is a tool, a highly sophisticated tool, but a tool, nonetheless. As humans, we have been using tools since we learned to sharpen flints. How we use (or abuse) that tool will be the defining story of AI I suspect, not how it uses or abuses us – unless humans really are on the verge of creating an autonomous, independent, self-thinking, self-preserving life form which may regard carbon-based life forms as competition.


  10. Yes, Robin, I was referring to the exercise you shared with me, but not having checked with you first, I didn’t want to say too much about it, though I thought it was sufficiently interesting to be worth mentioning. I hope you don’t mind.

