In the summer of 1969, a young astrophysics graduate, J. Richard Gott III, was touring Europe and found himself gazing upon the Berlin Wall. Pondering how long it would take before it would finally be pulled down, he turned to his touring companion and confidently proclaimed that it would last at least two and two-thirds more years, but no more than another 24. In 1987, President Reagan said ‘Mr Gorbachev, tear down this wall!’, and within five years the wall was gone: 23 years after Gott made his prediction, just inside his 24-year upper bound.

So was Gott a crack political analyst, or did he possess a crystal ball? Neither, in fact. He was simply employing a principle he had learned at college: that we have no reason to presuppose that we occupy a special place in the universe. This principle, known as the Copernican principle, can be applied in both space and time, and is not restricted to cosmological questions. Gott just reasoned that he shouldn’t assume he was living in any special epoch as far as the Berlin Wall was concerned. That is to say, he wasn’t likely to be living during the very early life of a long-surviving edifice, nor at the very end of a shorter-lived one. The likelihood instead was that the wall’s future would be broadly comparable to its past (to be precise, if his visit fell at a random point in the wall’s lifetime, then with 50% confidence its future duration would lie between one third and three times its past duration; the wall was eight years old in 1969, hence the range of two and two-thirds to 24 years).
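For the technically minded, the reasoning is easily sketched in a few lines of Python. This is a minimal illustration rather than Gott's own workings, and the function name is mine, but the numbers reproduce his Berlin Wall prediction exactly.

```python
# Gott's delta t reasoning: assume the moment of observation falls at a
# uniformly random point in the object's total lifetime. For a 50%
# confidence interval, exclude the first and last quarters of that lifetime.
def delta_t_interval(age, confidence=0.5):
    """Return (min_future, max_future) remaining lifetime for a given age."""
    tail = (1 - confidence) / 2   # probability mass excluded in each tail
    # If a fraction f of the lifetime has elapsed, future = age * (1 - f) / f,
    # so the bounds come from f = 1 - tail and f = tail respectively.
    return age * tail / (1 - tail), age * (1 - tail) / tail

# In 1969 the Berlin Wall was 8 years old:
low, high = delta_t_interval(8)
print(f"50% confidence: between {low:.2f} and {high:.0f} more years")
# -> between 2.67 and 24 more years, i.e. Gott's prediction
```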

In 1993 Gott submitted a paper to Nature describing his reasoning (which he referred to as the ‘delta t argument’) and, much to everyone’s horror, Nature accepted it. Experts of all persuasions dismissed it as facile numerology, unworthy of a prestigious journal’s attention. The problem, however, is that the technique works, and has been successful in predicting everything from Broadway show runs to the future value of stock market investments. Indeed, the idea has been independently developed by others in a number of forms (all of them essentially variations on Bayesian reasoning). These techniques now go by the name of the Doomsday Argument, principally because they are used to predict how much longer we can expect the human race to survive.

The answer isn’t reassuring, particularly if you take into account that it isn’t the chronological time on Earth that should be used as the basis for the calculation, but the number of man-years that have been lived to date compared to how many remain before mankind’s demise. Given that a remarkably large fraction of all the humans who have ever lived since Neolithic times (around 7% by most estimates) are alive today, and given the projected birth rate, delta t reasoning based upon man-years past and future gives as few as 12 years to as many as 18,000 years remaining to us (at 95% confidence).
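To see where numbers of that magnitude come from, here is the same formula applied at 95% confidence to counts of humans born rather than to calendar years. To be clear, the inputs below are round-number assumptions of mine, for illustration only; the 12-to-18,000-year range above rests on its own man-years inputs, and the real lesson is how sensitive the output is to what you feed in.

```python
# Doomsday Argument, birth-rank version: if your birth falls at a uniformly
# random position among all human births, then with 95% confidence the
# number of future births lies between past/39 and 39 * past.
def gott_range(past_count, confidence=0.95):
    tail = (1 - confidence) / 2
    return past_count * tail / (1 - tail), past_count * (1 - tail) / tail

PAST_BIRTHS = 100e9       # assumed humans born to date (~100 billion)
BIRTHS_PER_YEAR = 130e6   # assumed current global birth rate

low, high = gott_range(PAST_BIRTHS)
print(f"95% confidence: {low / BIRTHS_PER_YEAR:,.0f} to "
      f"{high / BIRTHS_PER_YEAR:,.0f} more years at today's birth rate")
# -> roughly 20 to 30,000 more years with these particular inputs
```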

Granted, there is massive uncertainty. Even so, it is quite sobering to appreciate that sound statistical reasoning, teamed up with the Copernican principle, suggests that there is a one in twenty chance that the human race has only got another 12 years ahead of it. And keep in mind that none of this analysis is in any way informed by the specific threats facing mankind, such as nuclear conflict, dwindling resources or even climate change. It’s just an example of how to determine a species’ expected survival span using basic Bayesian statistics, liberally sprinkled with the principle of indifference.

Ah yes, I hear you say, but we are not just any old species, we are the human race, famed for its resourcefulness and durability. We will find a way, because we always do. There will always be a technological revolution around the corner that kicks the Copernican principle into the long grass.

Oh really?

Wait, I have a cunning plan

Well, let’s put that argument to the test by examining the greatest technological revolution the human race is likely ever to experience, one that is not just around the corner but already very much upon us: the proliferation of Artificial Intelligence (AI). And, in deference to my readership’s primary interest, let us consider the role it may play in weaning mankind off fossil fuels — because you don’t have to believe in a mad rush to net zero to understand that we cannot rely upon them forever.

Depending upon who you talk to, the AI revolution is either going to be the best thing that has ever happened to the human race or the last thing. Either way, the only thing one can say for certain is that it is going to be massively transformative and utterly inevitable. Nation states see falling behind in AI as an existential threat; trillions of dollars are being invested; AI is seen as vital in solving the challenges currently facing mankind; and it is being driven forward by a scientific and technological community devoted to the cause, if only because of the personal and corporate glory to be had. These are all reasons enough to believe that AI isn’t going to fizzle out and it isn’t going to be tamed. One only has to look at the exponential rate of development recently experienced, and then factor in the self-reinforcement that autonomous, self-designing, general AI promises, to concede that the human race’s AI future is not only inevitable but also inherently unpredictable. But I’m not here to talk about robot wars and singularities; let us just reflect upon the risks and benefits facing those who would wish to apply AI to the fossil fuel problem.

Not such lovely jubbly

If you want a quick summary of AI’s potential role, you only have to google how AI can be put to use in tackling climate change and an AI summary will pop up for you. So we have:

  • Analysing vast amounts of data from energy systems to optimize generation, distribution, and consumption whilst helping to integrate renewable energy sources like solar and wind power more seamlessly into the grid (a sketch of this appears after the list).
  • Providing more accurate weather forecasts to assist disaster preparedness and climate change adaptation.
  • Detecting and monitoring greenhouse gas emissions, enabling faster response and mitigation, and tracking and analysing emissions across various sectors.
  • Optimizing agricultural practices, helping farmers adapt to changing climate conditions and improve crop yields.
  • Accelerating the development of carbon capture and storage solutions.
  • Accelerating the development of new materials for renewable energy technologies, such as more efficient solar panels, battery storage solutions, etc.
  • Developing new turbine designs that optimize energy generation across a range of wind speeds.
  • Improving climate modelling leading to more reliable predictions.
  • Minimising AI technology’s own carbon footprint through modified algorithms and the development of new and more efficient computer technologies.
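The first of these is the most readily illustrated. What follows is a purely illustrative sketch, not a real grid system: the features and the synthetic data are my own assumptions, with scikit-learn's gradient boosting standing in for whatever model an actual grid operator might deploy.

```python
# Sketch: forecasting hourly grid demand from weather and time features,
# the kind of model that underpins AI-optimised generation and dispatch.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical hourly features: temperature (C), wind speed (m/s), hour of day.
X = np.column_stack([
    rng.normal(12, 6, n),      # temperature
    rng.gamma(2.0, 3.0, n),    # wind speed
    rng.integers(0, 24, n),    # hour of day
])
# Synthetic demand (GW): base load, heating demand on cold hours, daily cycle.
y = (30 + 0.8 * np.maximum(0, 15 - X[:, 0])
     + 5 * np.sin(2 * np.pi * X[:, 2] / 24) + rng.normal(0, 1, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out hours: {model.score(X_test, y_test):.2f}")
```

A real system would of course use measured load data and many more features, but the principle, fitting demand so that renewable generation can be scheduled against it, is exactly this.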

This all sounds great, but it has to be kept in mind that none of the above is a silver bullet, and taken as a whole it still isn’t nearly enough. The challenge is far greater than the above list would seem to suggest, and it is not at all clear that AI is up to the job. As Mustafa Suleyman, CEO of Microsoft AI, explains:

Despite well-justified talk of clean energy transition, the distance still to travel is vast. Hydrocarbons’ energy density is incredibly hard to replicate for tasks like powering aeroplanes and containerships. While clean electricity generation is expanding fast, electricity accounts for only about 25 percent of global energy output. The other 75 percent is much trickier to transition. Since the start of the twenty-first century global energy use is up 45 percent, but the share coming from fossil fuels only fell from 87 to 84 percent – meaning fossil fuel use is greatly up despite all the moves into clean electricity as a power source.
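It is worth making the arithmetic behind that last sentence explicit, using only the figures in the quote:

```python
# If total energy use rose 45% while the fossil share fell from 87% to 84%,
# absolute fossil fuel use changed by a factor of:
print(1.45 * 0.84 / 0.87)   # -> ~1.40, i.e. roughly 40% more fossil fuel
```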

Vaclav Smil, Distinguished Professor Emeritus in the Faculty of Environment at the University of Manitoba in Winnipeg, calls ammonia, cement, plastics and steel the four pillars of civilisation, and it is highly germane to point out that their production is extremely carbon-intensive and that none of them has an obvious low-carbon replacement. As Suleyman says:

Without these materials modern life stops, and without fossil fuels the materials stop.

And the problems keep stacking up. Electric vehicles may not emit carbon dioxide when being driven, but their production is hugely resource-hungry, drawing upon a great many materials whose supply is anything but sustainable. And it’s all very well to talk of developing public transport and getting everyone to cycle to work, but how is that going to work for remote rural communities? More to the point, how on earth is AI going to solve what is ultimately a social problem? As Suleyman puts it:

To meet this global challenge, we will have to re-engineer our agricultural, manufacturing, transport, and energy systems from the ground up with new technologies that are carbon neutral or probably even carbon negative. These are not inconsiderable tasks. In practice it means rebuilding the entire infrastructure of modern society while hopefully offering quality-of-life improvements to billions.

But even that will not be enough. He concludes with the following words of warning:

A school of naïve techno-solutionism sees technology as the answer to all of the world’s problems. Alone, it’s not. How it is created, used, owned and managed all make a difference. No one should pretend that technology is a near magical answer to something as multifaceted and immense as climate change. But the idea that we can meet the century’s defining challenges without new technologies is completely fanciful.

All of this is before we start to count up the ways in which AI and the required technology revolution may destabilize society. Suleyman speaks of “hopefully offering quality-of-life improvements to billions”, and yet a society in which human intelligence has become increasingly redundant doesn’t sound like utopia to many people. An AI that is up to the job of managing the transition to a carbon-neutral society would be easily up to the job of facilitating our self-destruction in any of a myriad of ways. The bottom line is that even if AI can ultimately wean us off fossil fuels, let alone tackle climate change, it may only be AI that is around to ‘enjoy’ the benefits.

Doh!

So where does that leave us? Seemingly between a rock and a hard place. The Doomsday Argument hints at a bleak future based upon simple (some would say simplistic) logic. Fossil fuel depletion and climate change may not be the immediate existential threats that Greta thinks they are, but they are still challenges that may very well determine how the doomsday statistics pan out. That said, the same could be said of AI itself, particularly if it is employed in a pointless and self-destructive rush to net zero.

The problem is that we can’t expect our technological ingenuity to bail us out, even if superintelligent AI steps up to the plate. Indeed, by the time AI has helped wean us off fossil fuels, it will probably be so all-invasive as to pose a threat to society in its own right – and I’m not just talking here about power-hungry data centres. So if you believe that net zero is essential by 2050, then consider yourself doomed. On the other hand, if you think we have a lot longer, and AI will ultimately bail us out, then you must also consider yourself doomed. In fact, with or without climate change, and with or without a complete depletion of fossil fuels, you should consider yourself doomed. Doomed, I tell you, doomed.

Further Reading:

For an account of the inevitability, benefits and risks of the coming AI revolution: ‘The Coming Wave: AI, Power and Our Future’, Mustafa Suleyman and Michael Bhaskar, ISBN 978-1-52992-383-4.

For an account of the history and employment of Doomsday Arguments: ‘How to Predict Everything: The Formula Transforming What We Know About Life and the Universe’, William Poundstone, ISBN 978-1-78607-756-1.

See also the Lindy Effect.

Comments:

  1. I can understand how the fall of the Berlin Wall could be predicted, especially given that there are known ways in which walls can be demolished, and have been. I find it more difficult to see how the fleshy cockroach known as Homo sapiens is going to meet its end in the time frame allowed. (I can envisage the end of civilisation, but that is not the same thing.) The only things that could potentially accomplish the extinction of humanity must surely be those outside humanity’s control. Unless the AI goes rogue and there are T100s wandering about the place.

    The wiki page listing dates predicted for apocalyptic events makes for interesting reading. So far, those betting against are on a long winning streak. I particularly like the prediction of Charles Piazzi Smyth, who seems to have predicted that the Second Coming would occur between 1892 and 1911, based on the dimensions of the Great Pyramid. Other classics are the second or third time so-and-so had revised their prediction, after the failure of the earlier one(s) to occur.

    From a quick search, no-one seems to have predicted the end of the world being caused by a virus, so there’s a gap in the market there, if anyone’s interested.

    But heads-up: someone has predicted that we’re going to be wiped out by a collision with an asteroid next year, so don’t save any of the good port at Christmas.

  2. One of the great things about Cliscep is the casual dropping in of information and statistics that cause one to stop and take note. One such in this article is:

    …Given that a remarkably large fraction of all the humans who have ever lived since Neolithic times (around 7% by most estimates) are alive today

    A truly sobering thought. It possibly goes some way towards explaining why climate alarmists are so obsessed with timescales which in the context of the planet’s life are meaninglessly short. We humans have far too great a tendency to think that we are all that matters, and only time-frames that are meaningful to us are the ones we need to think about.

    For instance, a BBC article today:

    https://www.bbc.co.uk/news/articles/c8d1vglgj9zo

    Dairy farmer Richard Cornock from Tytherington in South Gloucestershire has been farming for more than 40 years and has “never known a spring like this”.

    It has been the warmest spring on record and the driest in more than 50 years, according to Met Office figures.

    Oh well, it must be climate change then….

    Apologies if I’ve gone off topic!

  3. Jit,

    It may indeed be difficult to envisage the circumstances in which the human race could be extinguished, but species can go extinct and I wouldn’t bet that we are the exception to that rule. Perhaps more to the point, however, there is a distinction to be made between species going extinct and civilisations falling. Indeed, the best chance of the human race defeating the Doomsday Argument is for there to be a mass die-off resulting from the fall of civilisation. A high price to pay for species longevity, perhaps.

  4. So…

    This Copernican Principle is just like climate models — choose the proper parameters and you can prove anything you want.

  5. John – as an avid SF reader (most recent read: M.R. Carey’s Pandominion series), I can say that AI taking control of the human future is a long-standing theme.

    Until now I thought it was just fiction, but you have me wondering!!!

  6. ps – more germane to your post, but still with an SF slant.

    Most SF authors take it for granted that humanity has left the old, dying Earth for new worlds to colonize, using deep sleep to survive the journey. Without some AI (Space Odyssey’s HAL) it’s just a dream and we’re stuck on planet Earth. So we need to think deeply about our planet’s ice/warm phases and, to me, look to the oceans as a resource, but with careful management.

  7. dfhunter,

    We will find ourselves in a world of pain long before AI is in a position to ‘take over’. As just one example from many, we are on the verge of seeing AI-controlled cyber-attacks in which the malware constantly and intelligently mutates as it infects the network, thereby thwarting all possible countermeasures.

  8. Jit,

    Thank you for the link. It was most interesting.

    Two recently read books are currently informing my views. The first is the Suleyman book I quote in the above article. The second is titled How AI Thinks, by Nigel Toon. Both were written by AI entrepreneurs, but they take very different positions. Toon is optimistic and emphasises all the benefits to be had. He recognises the risks but thinks they are overhyped and can be readily dealt with using suitable legislation and government controls. He’s all for cracking on as quickly as possible. Suleyman is far more guarded, and his book provides much more detail of the possible downsides, emphasising how difficult, if not impossible, it will be to avert the detrimental effects.

    According to the UnHerd article, I should be in a position of not really knowing whom to believe. But in practice it is often quite easy for a layman to evaluate which of two experts to believe, just by reflecting upon the quality of the reasoning presented: for example, is it coherent, consistent, well-evidenced, logically sound, etc.? On those criteria I have to say that Suleyman wins the battle hands down. Having read Toon’s book first, I was left thoroughly unconvinced. It read like an entrepreneur desperate to protect his investment. His plea came across as naïve, complacent and tendentious. Suleyman, on the other hand, offered a much more thorough analysis that came across as hugely authoritative and well-balanced. So when he said he was very worried, I took it to heart. The bottom line is that I cannot recommend Toon’s book, but I urge everyone to read Suleyman’s. The worry is that it is Toon, and not Suleyman, who is on the expert panel advising the UK government.

  9. PS. I should add that Toon’s book has the subtitle ‘How we built it, how it can help us, and how we can control it’, so it definitely delivered on the subtitle, albeit unconvincingly. What the book did not do, however, is explain at any point how AI thinks — and yet it was that promised explanation that persuaded me to buy it!

  10. Chivers seemed worried about the difficulty of getting the AI to do precisely what is wanted, because it will be adept at finding loopholes in any instruction. Could this be trapped out by higher-level commands, like Asimov’s Laws? The more I think about it, the less I think it could. The words, even in something as simple as the Laws, are open to a degree of interpretation.

    My own fictional question (finished, but languishing naked in need of a cover design) was what happens if people (neo-Luddites) realise that things have gone too far and reject technology absolutely? Billions would starve, so the AI would be justified in enslaving them, for their own good.

  11. Jit,

    There are a few organisations out there that dedicate themselves to addressing the problem of ensuring that AI acts ethically. One of these is the Machine Intelligence Research Institute (MIRI) founded by Eliezer Yudkowsky — quite a big name in the field. He is one of the more pessimistic experts, so much so that he advocates that development be halted so as to allow safety technologies and expertise time to catch up. Bizarrely, this group once interviewed me online to solicit my opinions on the generalities of functional safety in software systems.

    As for the idea that AI pessimism could lead to a Luddite rebellion, Suleyman thinks this very unlikely and (as you suggest) probably unwise. The problem, as he sees it, is that AI developments will be necessary to address existential challenges we are already facing, so not going there is not an option. In his own words:

    Even if it were possible, the idea of stopping the coming wave isn’t a comforting thought. Maintaining, let alone improving, standards of living needs technology. Forestalling a collapse needs technology. The costs of saying no are existential. And yet every path from here brings grave risks and downsides. This is the great dilemma.

  12. You will recall from my article that, according to Google’s AI summary, one of the potential contributions that AI can make towards tackling climate change is the improvement of weather forecasts. This possibility has been taken up by the following BBC article:

    ‘Tech giants unleash AI on weather forecasts: are they any good?’

    https://www.bbc.co.uk/weather/articles/cwy6ykp7049o

    By all accounts, machine learning is not going to outperform existing, physical model-based forecasting any time soon. Meanwhile, this being the BBC, the article writer couldn’t resist the opportunity to plug the problem of heat-related deaths, nary a mention being given to the much more pressing problem of cold weather mortality.

  13. Jit,

    Further to the above, although I say there are a few institutes focused on researching AI ethics and safety, they are in fact very small in number. Worldwide, there are approximately thirty to forty thousand scientists and mathematicians currently active in AI research. This contrasts with only three or four hundred specializing in the technicalities of AI safety. The complacency within the field is shocking.

    I personally came face to face with this complacency when I had a run-in some while ago with the denizens of ATTP regarding the British Computer Society’s call for software development standards to be introduced for academic, safety-related software (the poor coding standards found in the pandemic modelling software used for Covid-19 set it all off). The ATTP reaction was to belittle the BCS and to attempt to portray me as a clueless idiot. One individual in particular who went on the attack goes by the name of Dikran Marsupial. This is the pseudonym used by Dr Gavin Cawley, a senior lecturer in the School of Computing Sciences at the University of East Anglia and an expert in machine learning. At one point, I referred to the high level of attention being paid to the need for software development standards applied to machine learning software. Cawley responded with:

    So much attention? Google scholar suggests otherwise, the most highly cited paper I could find was that one, with 42 citations, and a paucity of journal papers. That suggests there is a small community of researchers working on that topic. I’ve been going to machine learning conferences for decades, and there is rarely more than a special session about that sort of thing, if that. Give me a break!

    Which actually just served to make my point for me. There are indeed very few people within the machine learning community who are focused on software safety standards, but there are many software safety experts outside of the ML field who are looking in and responding with alarm. As I put it when later reflecting upon the exchange:

    The problem with this argument, of course, is that it assumes that there is only one community of importance in the debate, i.e. the community of academic machine learning researchers. In actual fact, I was alluding to the interest shown within the functional safety engineering community, who are well aware of the lack of functional safety activity within the machine learning community and are very concerned about it. After all, it is the functional safety guys who will have to take the machine learning software and prove its safety in operation.

    For a fuller account of my reflections, see my comments under the following article:

    https://cliscep.com/2020/05/28/when-code-goes-wong/
