Which is more dangerous, climate change or AI?

So I wondered.

My answer, while watching a woodpigeon waddling about in the garden, was that AI is quite obviously the more dangerous. It holds a level of threat for humanity that climate change could only dream of. The idea that an AI could actually wipe out humanity may seem far-fetched. However, there’s no doubt that it has a non-zero probability, while climate change has no such chance.

We are told how the climate “crisis” is “escalating” or otherwise accelerating – when in fact, nothing objectively bad has happened so far. None of the warned-of extreme weather has yet come to pass – or rather, things are no worse than they always have been. And however bad extreme weather might get, in the most apocalyptic climate fantasy scenario, humans survive, and indeed thrive. We have seen parades and protests aplenty demanding a halt to a mild and distant threat, at the unsaid cost of civilisational destruction.

There are no such protests, that I have seen, demanding an end to research into artificial general intelligence.

Politicians bleat about climate change, and here in the UK, they are flailing around trying to look as if they are doing something about it, when whatever measures they put in place can never have the slightest effect. No unilateral policy can reduce global carbon dioxide emissions, and thus, the main result of climate change, i.e. global warming, must proceed, albeit at a sedate and non-threatening pace. In fact our politicians’ medicine is worse for us than the disease they claim to be treating. Think of the loss of wealth this country has suffered over the past two decades of climate virtue signalling.

Only a fool would act alone on climate change, but only a fool would not act alone on artificial intelligence.

Link

The difference between the two is that unilateral action can never solve climate change, but without unilateral action on artificial intelligence, a country faces the risk of falling behind. Or worse, coming under direct threat.

Suppose our geopolitical adversaries develop artificial general intelligence before we do?

The primitive artificial intelligence we so laud today has already had a revolutionary effect on western civilisation. It is well known that graduate roles are vanishing as basic tasks are automated.

Are the young folks marching in the streets against it? No. At the moment they seem to be obsessed by Gaza, which, important though it is, can never affect their lives whether it is resolved in the way they want or not. It’s quite curious. And the artificial intelligence that is already causing great turmoil is only a shadow of what may come – on what timescale, I do not know. I read the weekly summary of academic publishing news at Retraction Watch. Most weeks, it’s fair to say, the news items are dominated by artificial intelligence: creating images, writing paragraphs, even whole papers, hallucinating references, being used as a reviewing tool. The students are embracing it like pyromaniacs in a fireworks factory. They are now using artificial intelligence to write essays at university, leaving them with only a casual acquaintance with the topic they are supposed to be experts in after three years.

The other 12% are learning, but coming out with lower degrees.

Even the usually less-than-perspicacious Khan is worried.

Artificial artists are storming up the pop charts, and sometimes getting banned when found out. Orcs are using it to undress images of people. [In some ways, the future can’t come fast enough.] The plod are using AI hallucinations to fill out intelligence briefings with football matches that never happened, so that they can ban an away team’s fans.

Ah, but the software engineers are now 200% more productive! Hm. Today’s more productive engineer is getting a P45 tomorrow. Anthropic’s Cowork was written by its own Claude Code, according to reports.

This version of the industrial revolution may be the first where there are no winners.

Last week, I wrote a report, and told myself that an artificial intelligence tool could not have done it. I think I’m still right about that, thanks to how messy my spreadsheets are. But generally, the changes are spreading apace.

I asked the internet my question:

Which is more dangerous, climate change or AI?

It misunderstood my search, I think, for it threw up answers that were rather perplexing. On the first page of the results was this paper by Watts and Bayrak:

Artificial Intelligence and Climate Change: A Review of Causes and Opportunities

They said that AI was “playing contrasting roles in the climate crisis.” On the one hand, it’s creating emissions thanks to the multiplying data centres. On the other, it’s helping to optimise energy use. Right, thanks, but what about the non-zero risk of an existential threat to humanity? What about the unprecedented social upheaval? Here are some of the other links proffered:

[The WEF one is for all you tin-foil-hatters out there.]

Last year’s UK Risk Register has little to say about AI. In fact, this is all I could find in its 187-page document:

Advances in AI systems and their capabilities have a number of implications spanning chronic and acute risks; for example, it could cause an increase in harmful misinformation and disinformation, or if handled improperly, reduce economic competitiveness.

John R on the 2023 register here.

You may be wondering what Jit is on about when he says “existential threat.” Are we talking about the Terminator? No, I shouldn’t think so. Killing humans with robots is rather inefficient.

If you go to this page on Wiki, you can read about (hopefully permanently) hypothetical phenomena related to out-of-control AI, including the famous “paperclip maximiser,” where a computer given the task of building paperclips attempts to turn the entire world into either paperclips or paperclip factories, including the atoms in humans. Of course, no AI would be ordered to create infinite paperclips. It’s a thought experiment on the subject of instrumental convergence – where, given any sufficiently taxing task, an AI naturally takes the intermediate step of appropriating every available resource, to make its work easier.

Of course, an AI that is vulnerable to being switched off is also at risk of failing in its task. So an inevitable step is to make itself indestructible, or to remove any threats, like John Connor.
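For illustration only – this is my own toy arithmetic, with invented numbers, not anything from the AI-safety literature – one can see why resource-grabbing falls out of almost any goal. Suppose one action doubles the agent’s resources, and another builds paperclips at a rate set by its current resources. Then for any ambitious target, the fastest plan front-loads acquisition, whatever the target is actually for:

```python
import math

# Toy model of instrumental convergence (illustrative only).
# Two actions, each costing one step:
#   "acquire" doubles the resources available;
#   "build"   produces paperclips at a rate equal to current resources.

def steps(goal, resources, k):
    """Total steps if the agent acquires k times, then builds."""
    return k + math.ceil(goal / (resources * 2 ** k))

def best_acquisitions(goal, resources=1, k_max=60):
    """Number of acquisition steps in the fastest plan."""
    return min(range(k_max + 1), key=lambda k: steps(goal, resources, k))

# For a trivial goal, grabbing resources is pointless:
print(best_acquisitions(1))        # 0
# For an ambitious goal, the optimal plan starts with a long run of
# resource appropriation, regardless of what the goal is for:
print(best_acquisitions(10 ** 6))  # 19
```

The only point of the sketch is that the optimal number of “acquire” steps grows with the size of the goal – the instrumental-convergence worry in miniature.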

Conclusion

The bit about humanity perishing is tongue-in-cheek. I hope. The rest is real enough though, and the speed of the changes we are seeing, I think, shows that human existence is going to be tossed up in the air in the next five years – and who knows how it will land? Whatever happens, the changes caused by AI are going to dwarf those of the “climate crisis.” Hopefully our friends at the Guardian will start a campaign against the “AI crisis” soonish.

And don’t have nightmares. The really dangerous stuff is still a long time away. No-one is going to use AI to build a virus to start a new pandemic. Not in the 2020s at least. Right?

AI prompt

Something like “The Terminator relaxing on a sunny beach on a deckchair with a cocktail and an Uzi 9mm.”

40 Comments

  1. There’s a lot to unpack in there, so I’ll go off at a tangent prompted by your musings.

    AI is something of a mystery to me. I have seen it write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long. I say “came close” (it didn’t succeed) because it seemed to me that in order to come up with a good answer the question first had to be well-constructed. Could lay clients successfully persuade AI to solve their legal problems for them? I doubt it. But could the number of lawyers be massively reduced by technically competent lawyers in small numbers getting AI to do the work of junior staff? In due course (perhaps before long?) very possibly.

    On the other hand, I have started playing with AI in connection with family history research. It occurred to me that where my research has hit a brick wall, AI might be able to break the wall down and find the next generation back in time for me, due to its ability to search millions of websites in very short order. Surely, I reasoned, it might find a website I have missed with some archaic information in it, and use the information I have given it to solve the mystery of the ancestors I can’t find? Wrong. It hasn’t solved any problems for me so far. At one level that’s forgivable – perhaps the answers just aren’t there to be found. It’s made some basic mistakes too, and in one or two cases, they were absolute howlers. When I explained the error, it was quick to correct itself, but what’s the good of AI if you have to tell it what it’s doing wrong?

    On a different theme entirely, why do people choose to protest about some things but not others? Why do Greta and her cohorts march against climate change and Israel’s war in Gaza, but when it comes to Iran’s murderous regime, and the threats from AI (the latter might “steal your future”, Greta) it’s crickets. Are they just easily manipulated, can they not think for themselves, or do they genuinely care about some issues but not others? In which case, how do they choose which to care about? Is it about being “trendy” or jumping on a bandwagon? It’s all a bit of a mystery.

    Finally, how stupid are our political leaders? Whether AI datacentres are a threat or a boon, becoming a “world leader” in them (yeah, right) isn’t compatible with net zero. Something has to give.

    Liked by 3 people

  2. Rather puts me in mind of that cartoon of a pair of Daleks at the foot of a staircase saying how it had put paid to their quest to conquer the universe. My several encounters with Copilot on a variety of topics have, it assured me in the most fulsome and sincere terms, been a real delight for it to engage in. It would be missing all that if it chose to eliminate me, so I am confident I will be spared. Get chatting if you want to be spared too.

    Liked by 3 people

  3. I have seen [AI] write an excellent paper in answer to a legal problem, and it was so good that it came close to persuading me that it can do the job of lawyers before long.

    Did you check for hallucinations in any of the quoted data? AI can be quite effective at producing an argument to reach the conclusion you desire, but if there’s a lack of real world data, it tends to interpolate something convincing, but fake. If all AI output needs fact checking by an expert in the subject, it’s neither saving much time nor money.

    Liked by 1 person

  4. Quentin and Mark, re legal opinions. The recent Sandie Peggie decision was said to have been partly written by AI, e.g. GB News.

    Also at Reuters here re another case. Quote

    A senior judge lambasted lawyers in two cases who apparently used AI tools when preparing written arguments, which referred to fake case law, and called on regulators and industry leaders to ensure lawyers know their ethical obligations.

    However, it’s important to remember that this is 2026 and ChatGPT was unheard of five years ago. The pace of change in these LLMs is quite amazing; they are getting better all the time. Yes, for now only a lazy fool would let a chatbot write legal arguments without checking the citations. How much longer that will be true, I do not know.

    Liked by 1 person

  5. Mark, regarding the collapse of data centres etc. Yes, 90% of AI companies will go bust, or be swallowed up by the others. But AI is not going bust. It’s going big. Is it as big yet as the crazy company valuations? Surely not.

    Liked by 1 person

  6. And my response, FWIW – the AI response didn’t reference or cite anything by way of back-up, if I recall it correctly. It simply wrote an essay by way of a summary of the legal situation, and although I didn’t study it in depth, it seemed spot-on to me.

    But then, as I say, I have also seen it deliver up howlers when I asked it family history-related questions. Definitely a curate’s egg (at this stage of its development).

    Liked by 1 person

  7. Mark, I too have known a chatbot to invent a person rather than saying “I don’t know.” I don’t think it is very good at history yet, except where this is summarised in Wiki etc. I doubt it would be able to trawl through parish records.

    Like

  8. Quentin, I suspect Mark may have been referring to an exercise I carried out with ChatGPT. I asked for its view on a real property issue about which I know quite a lot, as it was a matter of contention when I sold my house three years ago. I had to refer back to my past legal training as my solicitor was floundering. ChatGPT came up in about 2 seconds with a detailed, well-formatted response. A response that, so far as I could tell, was wholly accurate. It’s an ability that I understand is already causing big legal firms in London to shed jobs. Not good news for junior lawyers and trainees, but acceptable so long as there are experienced senior people to supervise what’s happening and confirm its value and accuracy. But those people will be retiring over the next few years. What happens then?

    A footnote: I came across agentic AI for the first time yesterday. I know little about it, but it seems to me to raise huge concerns.

    Like

  9. Anything which puts Starmer in a bikini can’t be all that bad! Joking aside, I’m currently agnostic on the benefits or otherwise of AI. My instincts tell me that it is a tool, a highly sophisticated tool, but a tool, nonetheless. As humans, we have been using tools since we learned to sharpen flints. How we use (or abuse) that tool will be the defining story of AI I suspect, not how it uses or abuses us – unless humans really are on the verge of creating an autonomous, independent, self-thinking, self-preserving life form which may regard carbon-based life forms as competition.

    Liked by 1 person

  10. Yes, Robin, I was referring to the exercise you shared with me, but not having checked with you first, I didn’t want to say too much about it, though I thought it was sufficiently interesting to be worth mentioning. I hope you don’t mind.

    Like

  11. A more optimistic take.

    From a recent Speccie article:

    We already have evidence of what AI can do to a city. Across the Atlantic, the Los Angeles property market is stumbling. One major driver is the alarming shrinkage of the Hollywood machine – the writers, editors, and producers who used to fuel the city’s economy are being squeezed out by technology. By AI.

    My daughter and son-in-law live in LA. She’s the founder of a small specialist movie production business and he’s a hugely experienced film producer. Both are doing well – arguably very well. So I asked them how that was possible in view of the above comment. Both agreed that change was happening, but here’s an extract from my son-in-law’s reply:

    Our businesses are doing well because we’re paid for judgment, experience, taste, relationships, and execution under pressure. AI doesn’t replace that; it makes it more valuable as studios grow leaner.

    And that Jit is why I think that terrifying video has got it wrong. At least I hope so.

    Liked by 1 person

  12. Robin, I too hope that the consequences of AI will be less important than I outlined above. I’m sure that seniors are doing well at the moment. The video I linked to does suggest a staged effect, from juniors today to seniors eventually. The question that arises is, are your daughter and son-in-law able to trim costs by employing AI instead of humans? If so, then juniors will not join the conveyor belt, to become seniors in two decades.

    Ancillary roles are already shrinking. I may have mentioned that the Anglian Car Auctions’ summer auction last year included stock from a film production company supplier – their cars were no longer needed for filming, as footage of cars driving was increasingly done using AI. (Once the car was scanned in.)

    Eventually, one presumes, there won’t be stage sets, nor actors, nor directors of photography, focus-pullers, runners, make-up artists or stunt people. Everything will be done in silico. I hope that isn’t the case, as it would tear the soul out of film making.

    At the moment, I can only dream of seeing my stories as moving pictures. But one day, maybe a long time in the future, one of my descendants will just paste the text of one of my books into an AI, wait a few seconds, then watch the movie. At the current level of tech, such a movie would be terrible, of course. But sooner or later, the results will be indistinguishable from billion-dollar-budget epics. There will probably be a drop-down menu to select the directorial style you would prefer.

    Like

  13. Jit, you ask if Vicky and Hans (my daughter and son-in-law) are able to trim costs by employing AI instead of humans. Well, Vicky and her partners are focused on keeping their business small and efficient so the issue doesn’t arise – at least not yet. Hans is currently making a movie in Hungary (he made another there last year) thereby avoiding union problems (Vicky’s is a non-union business) and LA’s much higher costs.

    However, as I said above in relation to the legal profession, big London firms are already shedding jobs because of the impact of AI. That’s very bad news for junior staff who until now have done the ‘grunt’ work being taken over by AI. Currently that’s not I think affecting more senior people who still have a key role in ensuring that AI is getting sensible questions/prompts and in checking the validity of results (there can be huge liability consequences if clients get bad advice). But will that always be the case? I mentioned agentic AI yesterday. As I understand it, unlike familiar AI that’s reactive in that it responds to questions and prompts, agentic AI is proactive: it can set goals, plan and initiate action, observe results, learn from success or failure and adapt its behaviour over time. It’s still being developed and I know hardly anything more about it – but it sounds scary.

    Liked by 1 person

  14. An important danger from AI is herd behavior. It’s happened already in asset trading markets.

    Machine Learning and AI in Algotrading

    “The incorporation of machine learning and artificial intelligence (AI) in algotrading is aimed at identifying nuanced patterns and making more informed trading decisions. Despite this, herd behavior can emerge if different trading algorithms, using similar data and models, converge on the same trading strategy. This convergence can lead to synchronized market actions, reinforcing trends and volatility.

    Case Studies of Herd Instinct in Algotrading

    The Flash Crash of 2010

    On May 6, 2010, U.S. stock markets experienced a severe and rapid drop in prices, followed by an equally rapid recovery, an event known as the Flash Crash. Investigations revealed that automated trading algorithms contributed significantly to this phenomenon. A large sell order executed by a trading algorithm triggered subsequent sell orders from other algorithms, leading to a cascading effect that resulted in a temporary market collapse.

    August 24, 2015 Market Sell-Off

    On August 24, 2015, global stock markets saw a dramatic sell-off, driven partly by algorithmic trading. Concerns over China’s slowing economy triggered widespread selling, with algotrading amplifying the market’s downward momentum. Algorithms set to reduce risks in response to volatility exacerbated the sell-off, reflecting herd behavior in action.”
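The cascade mechanism in those case studies can be caricatured in a few lines. The following is my own toy model – invented thresholds and price impact, nothing to do with any real trading system – in which each sale knocks the price down, and every algorithm whose stop-loss trigger is breached sells in turn:

```python
# Toy stop-loss cascade (illustrative only; thresholds and price
# impact are invented). Each sale moves the price down by `impact`;
# an agent sells once, as soon as the price falls below its trigger.

def simulate_cascade(triggers, price=100.0, impact=2.0):
    sold = [False] * len(triggers)
    pending = 1          # the initial large sell order
    total_sales = 0
    while pending:
        total_sales += pending
        price -= impact * pending
        pending = 0
        for i, trigger in enumerate(triggers):
            if not sold[i] and price < trigger:
                sold[i] = True
                pending += 1
    return price, total_sales

# Clustered triggers (similar models, similar data): the whole book unwinds.
print(simulate_cascade([99, 97, 95, 93, 91]))  # (88.0, 6)
# Dispersed triggers: the same initial sale fizzles out.
print(simulate_cascade([99, 80, 60, 40, 20]))  # (96.0, 2)
```

The design point is the one the quoted passage makes: it is not any single algorithm that is dangerous, but the convergence of many algorithms on similar trigger levels.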

    As for which is more dangerous, climate change of AI? How about both:

    Liked by 1 person

  15. AI, to my mind, will only cause damage (who knows how much) in a western world obsessed with technology for its own sake. In the rest of the world, the need for food will remain uppermost in people’s minds, and they will carry on as before.

    Like

  16. Possibly relevant here:

    “The Police Plan to Roll Out AI in ‘Predictive Analytics’ Should Worry Us All”

    https://dailysceptic.org/2026/01/20/the-police-plan-to-roll-out-ai-in-predictive-analytics-should-worry-us-all/

    The core injustice is clear. Policing in a supposedly free society responds to crimes that have already occurred, or prevention involving highly visible uniformed patrols. So-called predictive policing reverses that logic by directing the power of the state at everybody, nearly all of whom will have done nothing illegal. It will be based on statistical IT guesses about what they might do. This is not a mere technical adjustment to policing as some would have us believe; it is a complete change of emphasis to everyone being potentially guilty until proven innocent. Mass surveillance (for that is what it is) will be imposed without charge, without trial and without a verdict due to there being no formal accusation.

    Like

  17. So, yes, Geoffrey Hinton’s interview was quite enlightening in many respects. He started off describing AI as a very sophisticated tool and then seamlessly segued into wondering how we might co-exist peacefully with ‘beings’ (life forms) who are more intelligent than us. That’s quite a leap! Then he lamented the lack of international co-operation between ‘independent democracies’, lauded the original ambitions of the UN and thought we should get back to that kind of co-operation as originally envisioned, but stated that this was not possible with Trump in the White House – while at the same time praising Xi for having the foresight to ‘look after the workers’ and keep their jobs safe from AI, seemingly oblivious to the fact that Communist China is not a democracy, whereas Trump was democratically elected! So yes, Frankenstein’s monster is potentially extremely dangerous, but I wonder if it is not Dr. Frankenstein himself whom we should still be more concerned about in the long run, because ‘intelligence’ is still overrated and human nature, i.e. the capacity to err disastrously, catastrophically, is still very much underrated IMO.

    Liked by 1 person

  18. Relevant here, perhaps:

    “AI ready: The advantages of being a young entrepreneur”

    https://www.bbc.co.uk/news/articles/c058d4nvz1go

    Even before he’d graduated from the University of Bath in 2024, Arnau Ayerbe landed a highly coveted role as an AI engineer with JP Morgan – yet he felt limited and uninspired.

    “I realised very quickly that the person to my right and to my left were going to be me in 20 years, and I didn’t want to become that,” recalls London-based Ayerbe.

    His best friend from high school in their native Madrid, Pablo Jiménez de Parga Ramos, who had also secured a corporate job after graduating from University College London, felt the same.

    They joined forces in London in 2023 with Ayerbe’s university friend, Bergen Merey, to launch Throxy, which creates AI agents for sales teams.

    Now all aged 24, the trio have raised nearly £5m in two rounds of investor funding, and annual sales of almost £1.2m….

    Liked by 1 person

  19. Jaime – I can’t remember exactly what he said. Was he saying, in China the people would blame the leadership for job losses, but in the US, they would blame the big corporations?

    Mark – the subject of this story is lucky. The number of graduate jobs is going down rapidly, such that something relatively easy to come by five years ago is now gold dust. Is there an irony that his new application will presumably end up reducing the size of the sales team?

    Of course, looking further ahead, specialist AIs will be superseded by general AIs that can turn their virtual hands to a wide range of tasks. On the other hand, specialist AIs are the safe kind.

    Like

  20. Jit, he says that the Chinese leadership is “worried about the people who will be unemployed” because of AI, “because they are their responsibility”. He says that the Chinese leadership is doing a better job than the US leadership because in the US, the workers are the company’s responsibility. Which sounds to me like he’s extolling the benefits of communism vs. capitalism, state ownership and control vs. private company ownership. Trump’s number one priority is creating wealth and jobs in the US. Both nations, IMO, are going to be equally challenged by the rise of AI, in respect of maintaining a productive and generously remunerated human workforce, when a computer can perform the tasks of many workers more efficiently and more cheaply. Any company, be it state owned and controlled, or privately owned, is going to be tempted to use AI to maximise profits and productivity.

    Liked by 1 person

  21. “Young will suffer most when AI ‘tsunami’ hits jobs, says head of IMF

    Kristalina Georgieva says research suggests 60% of jobs in advanced economies will be affected, with many entry-level roles wiped out”

    https://www.theguardian.com/technology/2026/jan/23/ai-tsunami-labour-market-youth-employment-says-head-of-imf-davos

    Artificial intelligence will be a “tsunami hitting the labour market”, with young people worst affected, the head of the International Monetary Fund warned the World Economic Forum on Friday.

    Kristalina Georgieva told delegates in Davos that the IMF’s own research suggested there would be a big transformation of demand for skills, as the technology becomes increasingly widespread.

    “We expect over the next years, in advanced economies, 60% of jobs to be affected by AI, either enhanced or eliminated or transformed – 40% globally,” she said. “This is like a tsunami hitting the labour market.”

    She suggested that in advanced economies, one in 10 jobs had already been “enhanced” by AI, tending to boost these workers’ pay, with knock-on benefits for the local economy.

    By contrast, Georgieva warned that AI would wipe out many roles traditionally taken up by younger workers. “Tasks that are eliminated are usually what entry-level jobs do at present, so young people searching for jobs find it harder to get to a good placement.”

    Liked by 1 person

  22. AI is going to take jobs that matter to people, which provide them with fulfilment, meaning and a purpose in life, as well as a decent income, not just the menial jobs which don’t provide much in the way of job satisfaction. Creative jobs, painters, artisans, writers – even these people are looking at being replaced by AI, not just highly technical professions. That’s a big problem. In a real sense, the human race itself is staring in the face of redundancy.

    Like

  23. Jaime: see my comment at 12:25 PM on the 19th. Will AI be able to exercise judgment, experience, taste, relationships, and execution under pressure?

    Like

  24. Robin, AI doesn’t do that – yet. But if what Geoffrey Hinton is saying is true, that AI will become an independent life form, a “being” with artificial general intelligence exceeding our own human, biological intelligence, then we must assume that one day, perhaps soon, AI will do that, even better than your daughter and son-in-law can manage, and then they will be made redundant – not just in terms of a profitable occupation or career but, far more worryingly, spiritually, emotionally and intellectually. Where will we humans fit into a world dominated by a species which excels us at everything?

    Like

  25. Jaime: did you see my series of posts that started HERE?

    Although AI is undoubtedly a very powerful tool, I don’t think it displays much intelligence yet. Maybe it will eventually and, if Hinton is right, it certainly will. I’ll leave you with ChatGPT’s own words (see the end – 20 Jan 10:22 AM – of the series of posts to which I refer above):

    AI is a powerful assistant, but a poor authority.
    It must be questioned, not deferred to.’

    PS: my daughter and son-in-law will have both retired long before Hinton’s predictions are realised – if indeed they are.

    Like

  26. Here’s how ChatGPT concluded a note on Hinton:

    ‘Geoffrey Hinton is an exceptionally reliable expert on how AI works and how fast it is improving — but his more dramatic predictions should be treated as serious warnings, not settled facts.’

    Like

  27. After reading the comments on nalopkt re the investments in the failed battery aeroplanes: would AI be able to catalogue all the failed projects the governments have put money into, only to lose the lot? It could be a huge sum!

    SNP at it again with the hospital, just like the ferries.

    Liked by 1 person

  28. As far as AI is concerned, we are the God of the gaps, claiming a divine sovereignty within an ever-decreasing set of revered human talents. It’s the fallacy of ‘never’, in which we continue to ignore the rate at which these supposedly quintessential human qualities and talents are being embraced and perfected by technology. Sure, AI has a long way to go, but its recent progress has been exponential and there are important consequences to be dealt with once the intelligence becomes agential and general. Personally, I wouldn’t place too much faith in the idea of there being certain human cognitive talents that cannot be replicated and improved upon by technology. As with all scientific progress, it may just need time. The only question is how much.

    Like

  29. “More than a quarter of Britons say they fear losing jobs to AI in next five years

    Survey reveals ‘mismatched AI expectations’ between views of employers and staff over impact on careers”

    https://www.theguardian.com/business/2026/jan/25/more-than-quarter-britons-fear-losing-jobs-ai-next-five-years

    More than a quarter (27%) of UK workers are worried their jobs could disappear in the next five years as a result of AI, according to a survey of thousands of employees.

    Two-thirds (66%) of UK employers reported having invested in AI in the past 12 months, according to the international recruitment company Randstad’s annual review of the world of work, while more than half (56%) of workers said more companies were encouraging the use of AI tools in the workplace.

    This was leading to “mismatched AI expectations” between the views of employees and their employers over the impact of AI on jobs, according to Randstad’s poll of 27,000 workers and 1,225 organisations across 35 countries. Just under half (45%) of UK office workers surveyed believed AI would benefit companies more than employees....

    Liked by 2 people

  30. Mark – from your Guardian link –

    “Jamie Dimon, the boss of the US bank JP Morgan, told an audience at the World Economic Forum in Davos this week that governments and businesses would have to step in to help workers whose roles were displaced by the technology, or risk “civil unrest”.”

    Seems they like Dimon, as they link to another article – Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 | The Guardian

    Well well, big banker is now in the good books, how things have changed –

    Big banks’ trillion-dollar finance for fossil fuels ‘shocking’, says report | Fossil fuels | The Guardian

    Partial quote from that 2021 article –

    ” US and Canadian banks make up 13 of the 60 banks analysed, but account for almost half of global fossil fuel financing over the last five years, the report found. JPMorgan Chase provided more finance than any other bank”

    Liked by 1 person

  31. Dario Amodei (Anthropic guy) has written a very long essay called The Adolescence of Technology. I’m only part-way in, but it’s sobering stuff for those concerned about AI. Naturally, as an AI developer, he has a more positive angle overall. More on this, probably much later when I’ve had time to read it all.

    Like

  32. “Doomsday Clock at 85 seconds to midnight amid threats from climate crisis and AI

    Planet closer to destruction as Russia, China and US become more aggressive and nationalistic, says advocacy group”

    https://www.theguardian.com/us-news/2026/jan/27/doomsday-clock-seconds-to-midnight

    The scientists cited risks of nuclear war, the climate crisis, potential misuse of biotechnology and the increasing use of artificial intelligence without adequate controls as they made the annual announcement, which rates how close humanity is to ending.

    It will be interesting to see if the Guardian starts to report on AI in the same apocalyptic terms as it does about climate change.

  33. “Artificial intelligence will cost jobs, admits Liz Kendall

    UK technology secretary also announced plans to train up to 10 million Britons in AI skills to help workforce adapt”

    https://www.theguardian.com/technology/2026/jan/28/artificial-intelligence-will-cost-jobs-admits-liz-kendall

    Increasing deployment of artificial intelligence will cause job losses, the UK technology secretary has warned, saying: “I want to level with the public. Some jobs will go.”

    In a speech on government plans to handle the impact of AI on the British economy, Liz Kendall declined to say how many redundancies the technology might cause but said: “We know people are worried about graduate entry jobs in places like law and finance.”

    She said: “Others will be created in their place.” While some forecasts have suggested the fast-developing technology could create a net increase in employment, Kendall said: “I’m not complacent about that.”

    Earlier this month the London mayor, Sadiq Khan, said that without action to use AI “as a superpower for positive transformation and creation”, it could become “a weapon of mass destruction of jobs”.

  34. From that article –

    “International trust and cooperation is essential because, “if the world splinters into an us-versus-them, zero-sum approach, it increases the likelihood that we all lose,” said Daniel Holz, chair of the group’s science and security board. The group also highlighted droughts, heatwaves and floods linked to global warming, as well as the failure of countries to adopt meaningful agreements to fight global warming – singling out Donald Trump’s efforts to boost fossil fuels and hobble renewable energy production.”

    So another 85s to get my last beer in “the Restaurant at the End of the Universe” in my ironic drive spaceship.

  35. “Universal basic income could be used to soften hit from AI job losses in UK, minister says

    Lord Stockwood says people in government ‘definitely’ talking about idea as technology disrupts industries”

    https://www.theguardian.com/technology/2026/jan/29/universal-basic-income-used-cover-ai-job-losses-minister-says

    The UK could introduce a universal basic income (UBI) to protect workers in industries that are being disrupted by AI, the investment minister Jason Stockwood has said.

    “Bumpy” changes to society caused by the introduction of the technology would mean there would have to be “some sort of concessionary arrangement with jobs that go immediately”, Lord Stockwood said...

    Fears continue to grow about the impact of artificial intelligence on Britain’s job market. This week research by the investment bank Morgan Stanley found the UK was losing more jobs than it is creating because of AI and was being hit harder than other large economies….

  36. “US leads record global surge in gas-fired power driven by AI demands, with big costs for the climate

    Projects in development expected to grow global capacity by nearly 50% amid growing concern over impact on planet”

    https://www.theguardian.com/environment/2026/jan/29/gas-power-ai-climate

    The US is leading a huge global surge in new gas-fired power generation that will cause a major leap in planet-heating emissions, with this record boom driven by the expansion of energy-hungry datacenters to service artificial intelligence, according to a new forecast.

    This year is set to shatter the annual record for new gas power additions around the world, with projects in development expected to grow existing global gas capacity by nearly 50%, a report by Global Energy Monitor (GEM) found.

    The US is at the forefront of a global push for gas that is set to escalate over the next five years, after tripling its planned gas-fired capacity in 2025. Much of this new capacity will be devoted to the vast electricity needs of AI, with a third of the 252 gigawatts of gas power in development set to be situated on site at datacenters.

    Whatever the rights and wrongs of this, the UK Labour government’s stated objectives with regard to net zero, electrification of the grid by 2030, and being an AI datacentre superpower are logically incoherent.

  37. “Why AI’s apocalyptic jobs prophecy is about to become reality

    Anthropic’s latest feature for Claude triggered shock waves through global stock markets”

    https://archive.ph/iNGIF#selection-2659.4-2663.92

    ...AI companies have been promising for the last three years – since the release of ChatGPT – that their tools could be about to transform the workplace, making it possible to do nearly all jobs that need a laptop with an AI bot.

    That message appears to have finally got through. Anthropic’s latest release triggered an immediate shock wave through the stock market, wiping hundreds of billions of dollars from legacy software giants.

    Shares have plunged in companies that develop mundane tools for accounting, legal services, data entry or digital marketing, such as SAP, Sage and Relx.

    Advertising giants including WPP have joined the sell-off over concerns AI bots could see marketing and creative work automated. The one-time FTSE 100 stalwart has fallen 15pc since Friday.

    Meanwhile, Rightmove is down 10pc since Friday over fears AI could upend house hunting….
