We are pollution.
Humans breathe out 500 ml of air at 4% CO2 concentration × 12 breaths per minute × 60 minutes per hour × 24 hours per day × 365.25 days per year – that’s roughly a quarter of a tonne of CO2 per person, per year, year in, year out. Every breath you take is your little contribution to the climate crisis. Don’t be thinking that you can give up flying and swap to an electric vehicle and everything will be OK. It won’t. As long as you’re alive, you’re part of the problem.
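(For anyone who wants to check the sums, here is a rough back-of-envelope version of that calculation in Python. The CO2 density of about 1.98 g per litre is my own assumption, not something from the VIRTUEsignal marketing copy.)

```python
# Back-of-envelope check of the exhalation figures quoted above.
breath_volume_l = 0.5            # 500 ml of air per breath
co2_fraction = 0.04              # 4% CO2 by volume
breaths_per_minute = 12
minutes_per_year = 60 * 24 * 365.25
co2_density_g_per_l = 1.98       # assumed CO2 density (not stated in the post)

litres_co2_per_year = breath_volume_l * co2_fraction * breaths_per_minute * minutes_per_year
tonnes_co2_per_year = litres_co2_per_year * co2_density_g_per_l / 1_000_000

print(f"{litres_co2_per_year:,.0f} litres of CO2 per year")   # ~126,230 litres
print(f"~{tonnes_co2_per_year:.2f} tonnes of CO2 per year")   # ~0.25 tonnes
```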
That’s where we come in. Don’t worry, we don’t offer to send hitmen around to your house to reduce your carbon emissions that way! Instead, we’ve developed the VIRTUEsignal mask to help you take the guilt out of just being alive.
The idea for the VIRTUEsignal came when our CEO took a holiday to the Maldives, and with a pang of guilt wished that he could somehow attach some sort of device to his G5 private jet to absorb the CO2 shooting out the back as he whizzed over the Indian Ocean. Well, we had to lower our horizons a little, and decided to make a start by tackling personal emissions at source instead.
Disposable Filter
Both daytime models of VIRTUEsignal have disposable filters that should be good to absorb 6 hours of CO2 exhalation. The content of the filters is proprietary, but the key mineral comes from artisanal mines in the Belgian Congo. The international commodity trade offers great opportunities for young Congolese entrepreneurs to learn new skills. We don’t use machines in our mining operations, so you don’t need to worry about great diesel monstrosities spewing out tonnes of CO2 and doing the work of a hundred on your behalf. In our mines, a hundred do the work of a hundred! Our workers get paid for what they dig. Working out how much we owe them improves their numeracy levels too – it’s a win-win, because there are no schools where they can learn such skills.
Once the minerals are extracted, we ship them to China to be refined, and of course we observe the highest environmental standards every step of the way. We have green certificates to prove it! After that, the refined minerals are shipped to the UK, where we assemble the filters in our VIRTUEsignal factory, built in Sunderland on the site of a former car parts manufacturer. The local council wanted us in so badly, they offered to pay our rent for the first 3 years, and waived business rates for the first half decade. The disposable cartridges themselves are of tough plastic. A patented magic dye reacts to the CO2 passing through the filter; new filters have a healthy green hue, and when the filter is full, they turn a dull hellscape red. When it goes red, it’s dead. Simply twist, remove and replace. After that, the spent cartridge can just be thrown in with the rest of your rubbish. Once buried in landfill, the CO2 should remain trapped for at least 500 years.
The Gaia and the Greta
The entry-level mask is the Gaia. This will retail at about £9.99, and comes with a free starter cartridge. The Greta, meanwhile, is our flagship mask (about £99.99), and comes in the form of a mischievous goblin. A stylish upgrade to the Gaia, the Greta is Wi-Fi enabled, and can be set to automatically tweet to your thousands of followers whenever you reach a CO2-saving milestone. We like to think of it as showing off the size of your halo! Both masks have handy straw holes, so you can drink your oatmilk latte without letting any CO2 into the atmosphere.
A year’s supply of replacement cartridges should cost under £5000, not including double-capacity overnighters which go with our extra-comfort Hypnos mask (likely to be priced at £19.99, with a free double-capacity cartridge).
At under £100 per week, it’s an inexpensive way to show the world you care.
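(A similar sanity check on the running costs: the three-cartridges-a-day figure is my own reading, assuming roughly 18 waking hours per day on the 6-hour daytime filters, rather than anything in an official spec.)

```python
# Rough check of the running-cost claims above.
annual_cartridge_spend_gbp = 5000        # "under £5000" per year, as quoted
weeks_per_year = 365.25 / 7
cartridges_per_day = 18 / 6              # assumed ~18 waking hours / 6-hour cartridge life

weekly_cost_gbp = annual_cartridge_spend_gbp / weeks_per_year
implied_price_per_cartridge_gbp = annual_cartridge_spend_gbp / (365.25 * cartridges_per_day)

print(f"Weekly cost: ~£{weekly_cost_gbp:.0f}")                                   # ~£96, i.e. "under £100 per week"
print(f"Implied price per cartridge: ~£{implied_price_per_cartridge_gbp:.2f}")   # ~£4.56
```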
There are so many reasons one shouldn’t laugh but I guffawed loudly in my local cafe at that.
Brilliant Jit, just brilliant. Thank you.
Satire of the highest order Jit. Absolutely brilliant.
LOL.
Just one question: will the auto-tweets of the Greta model self-destruct after 5 years?
Excellent. All we need now is the technology to fuel the aviation industry on recycled farts and we will have nailed net zero.
What a wonderful way to start All Fools’ Day, the 1st of April. Congratulations.
Martin Wellock: Indeed, Jit is playing the April Fool to perfection here, in the full King Lear sense. Sometimes the only way to speak truth to power. And hilarious with it.
By the way, astute readers might have noticed that the featured image is the work of AI, in this case DALL-E. The prompt I entered was something like “a hippie wearing a green respirator in a rainforest, digital art”. Note the appalling attempt at fingers, which is a hallmark of AI images of people – for now at least. This was the first time I had played with DALL-E, and I must admit to being very impressed. I will make use of it for featured images again… until such time as it ceases to be free (it’s metered at the moment).
I read people scoffing at this sort of thing, but it is not to be scoffed at, I think. If I had known about it before, I might have used this image to illustrate an earlier post (prompt something like “a golden frog in a mushroom grove, digital art”):
Somewhere I have a list of April 1st ideas. You will have to pretend to be surprised when reading the next one in a year’s time.
The thing with ‘artificial intelligence’ is that it’s not artificial, it’s just intelligence; not human intelligence, but an intelligence we created and yet don’t fully understand. I think the call for a 6-month moratorium on the development of these open-source AI platforms is sensible.
Thanks for the DALL-E tip, Jit.
Clever as it is, such AI has a way to go yet. I tried entering ‘a sweaty bishop with a climate tattoo on his forehead’ at the DALL-E-derived http://www.craiyon.com website and it produced pix of nine mostly non-sweaty prelates etc and only two of the nine had tattoos on their foreheads, neither of which markings had anything obvious to do with climate.
Never mind. Here’s a picture of an actual sweaty prelate with a (fake) climate tattoo on his forehead:
https://www.oxfordmail.co.uk/news/4713069.bishop-oxford-faces-climate-change/#gallery3
(That bloke is currently the CofE’s #2.)
*
I’ve just tried http://www.craiyon.com again, this time with ‘sweaty bishop with a climate tattoo on his forehead’.
What a difference an ‘a’ makes!
This time, six of the nine are a bit sweaty and six have tattoos on their foreheads.
But, again, none of the forehead markings have anything obvious to do with climate.
And this time four of the pix aren’t obviously prelatical.
I am agnostic on AI, but I find it interesting that the BBC and the Guardian/Observer have decided to question it at exactly the same time. What’s the agenda, and who’s driving it? Whatever – the BBC and the Guardian, two sides of the same coin.
“Laura Kuenssberg: Should we shut down AI?”
https://www.bbc.co.uk/news/uk-65147841
“AI has much to offer humanity. It could also wreak terrible harm. It must be controlled”
https://www.theguardian.com/commentisfree/2023/apr/02/ai-much-to-offer-humanity-could-wreak-terrible-harm-must-be-controlled
This tweet thread by Simon Goddek is pretty alarming. Something has got the wind up quite a lot of people who are very knowledgeable about AI. I think it’s the pretty awesome exponential learning curve of AI and some rather unexpected emergent properties of the AI programs.
Paul Homewood has the version of this story in the Telegraph:
https://notalotofpeopleknowthat.wordpress.com/2023/04/02/gaviscon-for-cows/
but it appears that it really isn’t an April Fool:
“British cows could be given ‘methane blockers’ to cut climate emissions
UK’s 9.4m cattle produce 14% of human-related emissions, mostly from belching, but green groups remain sceptical”
https://www.theguardian.com/environment/2023/apr/02/british-cows-could-be-given-methane-blockers-to-cut-carbon-emissions
Jaime,
AI is the one that’s been sneaking up on all of us whilst we have been fretting about climate change.
These guys have been trying to address the concerns for some years now:
https://intelligence.org/
Well, David Deutsch disagrees with Scott Alexander on this one, calling it a “misconceived moratorium”.
An AI disaster sceptic you might, I think, reasonably call him.
For myself, I adopt Manuel’s mantra in that episode of Fawlty Towers: “I know nothing”.
Worth remembering this reaction from Deutsch to Peterson’s chat with Lindzen in January:
Chris Colose was very put out by that. If I had to back a horse on AI, it would be Deutsch.
(PS While I loved the post I disliked the pic. Just sayin’)
Will the AI person be an idiot savant programmed with Democrat Woke principles?
Bit O/T – but I watched “The Matrix” last night, and am now wondering whether I live in cyberspace and how I get back to sanity – I feel a song coming on – https://www.youtube.com/watch?v=VANDWa-LJ8c
“British cows could be given ‘methane blockers’ to cut climate emissions
UK’s 9.4m cattle produce 14% of human-related emissions, mostly from belching, but green groups remain sceptical”
Yes, yes, but what are they going to do with all those wildebeest and buffalos in Africa?
Africa is beyond them. What about those panic stampedes when a twig snaps and they’re off!
Richard,
Deutsch tweets:
“Automation by AIs leaves humans unprecedentedly free to exercise their creativity.”
I think it’s more the opposite. Human creativity is being replaced by AIs. If you’ve got access to an all-singing, all-dancing AI program which can do or create anything you ask of it, then most people are going to use it, stifling their own natural creative impulse. Look at the number of people asking ChatGPT for its ‘advice’, bidding it to find information and solve problems which they cannot or more likely will not do themselves, or asking it to write essays which they can pass off as their own. At the moment, it’s a bit of a ‘fun’ pastime for such people, but dependence is very quickly engendered. That doesn’t seem to me like a very likely route for the enhancement of human creativity, which has already been significantly stifled by the use of ‘smart’ phones and numerous ‘apps’.
On climate change, we’ve already got climate ‘scientists’ using AI to generate future scary scenarios and we have the sad case of a Belgian man who apparently killed himself after chatting to an AI bot about climate change.
https://www.2ndsmartestguyintheworld.com/p/from-psyop-19-to-psyop-climate-change
Perhaps as AIs become more knowledgeable, they will reassure the climate concerned, but can we trust an AI bot to divulge its superior knowledge compassionately and wisely, for the benefit of its human enquirers? More to the point, why would human enquirers appeal to the authority of a machine on such a vital topic? Human creativity and curiosity should dictate that they check for themselves. I find all this rather concerning.
AppleNews reports that “The Sun” tells it as it is: “Cows to be given burp and fart blockers”.
I think humans may be inclined to trust the output of computers, a phenomenon I call “cyber deference”, because we instinctively grasp that the computer is dispassionate – unbiased, non-partisan, and without motivation other than to provide us with true facts. That would be to miss a lot of important factors of course, whether the domain happened to be climate models or the opinion of a chatbot.
Apparently there is video of a chatbot threatening to kill the human it was engaged in dialogue with. And of course thinking further, were the AI to become very well embedded in the human world and have motivations of that sort (whether by itself or by its controller) then it would certainly be able to find ways to harm particular people. It doesn’t take an SF writer’s mind to imagine how.
However, the idea of any kind of moratorium is absurd because countries that are not entirely altruistic will continue along the AI path if we don’t, which would be like falling behind a totalitarian state in a nuclear arms race. Unwise.
I have never used the chatbots. I understand they also solve maths problems, potentially (in my case) saving 12 pieces of paper in the process. They also generate pretty (and gross) pictures based on the user’s natural language prompts. I read somewhere that there may be future careers for those who specialise in crafting such prompts. The importance of the prompts is why I gave the ones I used, & I guess it is why Vinny did too. In a very small way it’s like describing the method in a paper.
Jit:
Great minds?
And I was thinking too of what you call “cyber deference” – not least the attitude of ‘intellectuals’ to climate models like GCMs for the last 30+ years. Nobody with any sense is saying that there’s no risk of this with misdirected GPT-4 and its successors. But I aver we have to adopt the von Neumann/Churchillian approach in response – makeshifts and expediency:
That’s from Bruce Sharp and his remarkable site about the Cambodian genocide of the 1970s – a remarkably obscure topic, which it absolutely shouldn’t be. But I’m always blessed to see that piece hasn’t yet become the victim of Bit Rot. https://www.mekong.net/cambodia/2020.htm
Richard, Jaime,
It may be helpful to consider what the Machine Intelligence Research Institute (MIRI) thinks about the risks posed by AI. This is what they say about themselves in their mission statement:
“The Machine Intelligence Research Institute is a research nonprofit [organisation] studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.”
And this is how they summarise the safety issues:
“Researchers at MIRI tend to be relatively agnostic about how the state of the art in AI will change over the coming decades, and how many years off smarter-than-human AI systems are. However, we think some qualitative predictions are possible:
— As perception, inference, and planning algorithms improve, AI systems will be trusted with increasingly complex and long-term decision-making. Small errors will then have larger consequences.
— Realistic goals and environments for general reasoning systems will be too complex for programmers to directly specify. AI systems will instead need to inductively learn correct goals and environmental models.
— Systems that end up with poor models of their environment can do significant harm. However, poor models limit how well a planning system can control its environment, which limits the expected harm.
— There are fewer obvious constraints on the harm a system with poorly specified goals might do. In particular, an autonomous system that learns about human goals, but is not correctly designed to align its own goals to its best model of human goals, could cause catastrophic harm in the absence of adequate checks.
— AI systems’ goals or world-models may be brittle, exhibiting exceptionally good behavior until some seemingly irrelevant environmental variable changes. This is again a larger concern for incorrect goals than for incorrect belief and inference, because incorrect goals don’t limit the capability of an otherwise high-intelligence system.”
I would add to the above that the risks are to do with how humans respond to AI as much as they are to do with comparative intelligence. The problem is often portrayed as a risk that occurs when AI becomes more intelligent than humans. However, I think that we are seeing very real problems a lot earlier than that. The level of intelligence that exposes human gullibility seems to be the more relevant threshold and I have to say that the bar we have set for AI seems to be a lot lower than we assumed.
On the subject of a moratorium, I think it would be a good idea because the research into developing AI capability is currently outstripping the research into developing in-built safety. Unfortunately, however, I have to also agree that a moratorium is unlikely to work. I am reminded of the scientists in the late 1940s and 1950s and how they tried to stand in the way of the exploitation of nuclear fission.
Incidentally, I recently wrote a short story for the Safety Critical Systems Safety Club newsletter that explored the issue raised in MIRI’s fourth bullet point. I could publish it here if anyone were interested.
“I would add to the above that the risks are to do with how humans respond to AI as much as they are to do with comparative intelligence. The problem is often portrayed as a risk that occurs when AI becomes more intelligent than humans. However, I think that we are seeing very real problems a lot earlier than that. The level of intelligence that exposes human gullibility seems to be the more relevant threshold . . . . . ”
Exactly this. If we’re seeing problems now as regards the human response to AI, it’s difficult to predict whether those problems will get much worse or whether they will iron themselves out once AI achieves a level of GENERAL intelligence which exceeds that of the humans it is interacting with. In this case, the dispassionate, unsentimental, unbiased nature of AI might, contrary to being of benefit to human beings, present a severe safety risk.
Personally, I think that going for a moratorium, even if it is observed by only some countries/AI developers, is a better option than just going all out in a ‘space race’ to develop AI with scant regard for the potential social consequences and no clear conception of the safety risks involved.
Above comment should be ref. John. Apologies.
Spent some time reading your various comments about the possible danger to humans from AI with high expectations of someone at least mentioning Asimov’s three laws of robotics, which in his stories were installed into all autonomous AI entities.
Law 1: a robot should cause no harm to a human, or by inaction allow a human to come to harm.
Law 2: a robot should obey a human unless by doing so it would cause harm to a human.
Law 3: a robot should preserve itself unless by doing so it would violate Laws 1 or 2.
Why can’t there be international agreement that all AI be fitted with such controls? Or aren’t we up to this yet?
Alan,
I have read a few MIRI papers over the years and one recurring theme seems to be the difficulties in developing an algorithmic basis for the implementation of a moral or ethical code. Such things seem easy for us to conceive of but much more difficult to turn into machine language. The other difficulty is that we may be dealing with adaptive, self-modifying systems that develop their own objectives in a way that we may not anticipate. Making safety cases depends upon a predictability of behaviour that may simply not be possible. Another problem that occurs to me with regard to Asimov’s laws is that they presume an ability to determine what is and is not human. But the whole point of AI and its ability to mimic is that this discrimination may become impossible, even between AI systems.
Jaime,
If I understand correctly, a dispassionate, unsentimental, unbiased AI is something that MIRI wants to avoid. They do not see such traits as conducive to system safety — not in AI, at least.
Satire?
Bugger, I’ve just ordered some Greta masks 🙄.
Everyone seems to be talking of AI just now:
“ChatGPT will never be ‘intelligent’
The worship of AI betrays a lack of confidence in humanity.”
https://www.spiked-online.com/2023/04/03/chatgpt-will-never-be-intelligent/
Mark,
I took a quick look at the Spiked article and I have to say I was somewhat unimpressed. The safety issues relating to AI have nothing at all to do with philosophical matters as to whether or not a computer can ever experience consciousness. Ultimately, the safety concerns are a matter of software validation and of system behaviours that may obstruct it. Believe me, such behaviours emerge in systems long before they become candidates for being consciously intelligent. And as for leaving humans behind, it doesn’t really matter whether AI can be considered intelligent in the strong AI sense; it only matters that the system might outperform the properly intelligent, and that the properly intelligent may fail to appreciate what they have created or even recognise it when they see it.
“I understand they also solve maths problems…”
Indeed they do, quite happily. Occasionally even correctly!
What’s funny is watching how absolutely idiotic the mistakes are. Chat-GPT will do 90% of a problem correctly, then stuff up the last — very simple — line.
If anything shows that there is no actual intelligence there, it is its approach to Maths.
Chester, I think Chat-GPT may soon master maths – with a little help from some humans.
https://www.magyar.blog/p/remember-when-chatgpt-sucked-at-math
John, yes, I think this is what I’m trying to get across. We can create an alternative to human intelligence, which can wildly outperform human intelligence, but we cannot manufacture synthetic human morality and sentiment, emotions, affinity, responsibility etc., love even. The reason being, we cannot measure or define those traits in real human beings, so we’re never going to be able to build them into a synthetic neural network. In the absence of such data, the machine is probably going to construct its own set of rules for interacting with biological life forms.
Jaime,
“…but we cannot manufacture synthetic human morality and sentiment, emotions, affinity, responsibility etc., love even.”
You raise some important issues here. Firstly, you may not think that synthetic morality is possible, but MIRI was set up to explore its possibilities and, indeed, argues that such morality will be essential if super-intelligent AI is to be safe. As they would put it, we will need ‘super-ethicality’ to go with super-intelligence. Interestingly, they seem to see the best prospect as lying within the mathematical foundations of decision theory. See the following paper, for example:
Click to access EthicsofAI.pdf
Synthetic emotion is even more of an interesting concept, not least because of the role that emotion plays in human decision making. Neuroscientists have gained some interesting insights into that role by studying patients who have brain trauma resulting in neural disconnection between the emotion mediating areas of the brain and those that mediate rationality (e.g. the prefrontal cortex). What they found is that such patients did not make decisions with pure rationality, uncorrupted by emotionality, but instead were unable to make any decisions at all! It turns out that emotion is not something that impairs decision making, it is fundamental to its enablement. So if we are to replicate human level decision making, does that mean that we have to develop synthetic emotion? What would that even mean in a machine?
I think in order to answer the above question we need to first understand more about the neurobiology of emotion in humans. Current best thinking seems to be that it is an epiphenomenon arising from the brain’s monitoring of somatic states. That could certainly have its counterpart in AI machines but I don’t think the analogy should be taken too far. Besides which, the key issue would not be whether or not machines could ever experience emotion in the sense that humans do but whether or not humanlike decision making in machines could be possible without synthetic emotion and whether or not this will invariably lead to super-irrationality. I’m presuming we would not want that.
Why strive for complex human emotion? Pets are capable of expressing genuine love for their owners, and dogs can be trained to hate trespassers. If I were tasked with developing AI emotions, I would probably start by mimicking parts of the simpler reptilian brain, as found in birds, which can also express emotions.
Alan,
I guess I wouldn’t want to have to deal with a machine with the intelligence of Stephen Hawking and the emotional development of a lizard. Velociraptors anyone? 🙂
More seriously, I’m not convinced that the correct way forward would be to try to mimic brain architecture. It may suffice to focus upon the general principles of functioning. At its essence, emotion is the name we give to a complex, autonomous, adaptive, self-monitoring system’s cognition of its internal state, which is what we are and what AI machines would be. The mechanics of that cognition may or may not matter. I’m not sure.
Chester,
You are quite right. What Chat-GPT does cannot be termed intelligent by any stretch of the imagination. All it does is trawl its training data for text that scores highly on association indices and use it to construct output in accordance with learnt rules that obey language syntax. There is no original thinking involved and no understanding on the part of the performing agent. Journalists, on the other hand…
No wait, I’ve got that the wrong way round. It’s journalists who cannot be termed intelligent by any stretch of the imagination.
Old AI doing Deepak Chopra as a bafflegab merchant:
http://www.wisdomofchopra.com
Newer AI doing Deepak Chopra as a bafflegab artist (Craiyon: ‘do a watercolour by Deepak Chopra about orderliness and the cosmos’).
https://postimg.cc/QB1MY0nW