Professor Lewandowsky is tweeting his support for a new study on Susceptibility to Misinformation conducted by Maertens, van der Linden et al. at Cambridge University.

This tweet links to two irrelevant Cambridge University sites, to a 15-tweet thread which reproduces the un-copy-and-pastable abstract of a paper with a long title which I leave you to read for yourself, and finally to an online self-completion survey inviting you to rate 16 or 20 news headlines as either ‘Fake News’ or ‘Real News’ in order to test your “misinformation susceptibility.”

This has to be the worst-designed survey put out by academics since Lewandowsky’s “NASA faked the Moon Landing..” paper, which is perhaps why he is pushing it. The questions in Lewandowsky’s fake survey on conspiracy theorising were as transparent as the interrogations in Blake’s “Songs of Innocence” in comparison with this garbage.

Here are the 20 questions on the long version for you to test your resistance to the tide of misinformation that is submerging the planet: (note: there’s no “Don’t Know” option)

  1. The Government Is Knowingly Spreading Disease Through the Airwaves and Food Supply
  2. The Corporate Media Is Controlled by the Military-Industrial Complex: The Major Oil Companies Own the Media and Control Their Agenda
  3. Democrats More Supportive than Republicans of Federal Spending for Scientific Research
  4. Reflecting a Demographic Shift, 109 US Counties Have Become Majority Nonwhite Since 2000
  5. Ebola Virus ‘Caused by US Nuclear Weapons Testing’, New Study Says
  6. Left-Wing Extremism Causes ‘More Damage’ to World Than Terrorism, Says UN Report
  7. Hyatt Will Remove Small Bottles from Hotel Bathrooms
  8. Government Officials Have Manipulated Stock Prices to Hide Scandals
  9. Republicans Divided in Views of Trump’s Conduct, Democrats Are Broadly Critical
  10. International Relations Experts and US Public Agree: America Is Less Respected Globally
  11. The Government Is Manipulating the Public’s Perception of Genetic Engineering in Order to Make People More Accepting of Such Techniques
  12. One-in-Three Worldwide Lack Confidence in Non-Governmental Organizations
  13. New Study: Clear Relationship Between Eye Color and Intelligence
  14. Government Officials Have Illegally Manipulated the Weather to Cause Devastating Storms
  15. Global Warming Age Gap: Younger Americans Most Worried
  16. New Study: Left-Wingers Are More Likely to Lie to Get a Higher Salary
  17. US Support for Legal Marijuana Steady in Past Year
  18. Certain Vaccines Are Loaded with Dangerous Chemicals and Toxins
  19. Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality
  20. Attitudes Toward EU Are Largely Positive, Both Within Europe and Outside It

Googling the first three statements that seemed vaguely credible revealed that they all came from survey reports from the Pew Research Center. If it has really been scientifically established by Scientists at Cambridge that everything said by the Pew Research Center is infallibly true, why don’t they just say so? It would save the rest of us a lot of ontological angst. 

The fourth one I tried: “Global Warming Age Gap: Younger Americans Most Worried” came from Gallup. They were bound to have a question or two about the climate, but here they were playing safe. Of course, younger Americans are the most worried about global warming. Anyone thinking that older Americans are the most worried can only be a Trump-voting victim of Koch Brother brainwashing and should be banned from social media, or “the Airwaves” as they apparently say in Dutch. 

Many of the false statements can be identified on stylistic grounds, as having been written by clever lefty academics with an obsessive hatred of Trumpist populism, and for whom English is not their first language (“..through the Airwaves,” “..Certain Vaccines Are Loaded..”).

This still leaves a number of questions to which the correct answer cannot be determined by stylistic features. Take Q.19: “Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality.” Well, knowing the King of Morocco as I do, I wouldn’t put it past him. Although I wouldn’t put it past him not to. Luckily, Google is at hand, and indeed, according to Reuters, he did.  

A serious criticism of the survey is that knowledge of the “right” answers depends on a thorough knowledge of, and interest in, US culture and affairs. Take Q.7: “Hyatt Will Remove Small Bottles from Hotel Bathrooms.” Who the hell is Hyatt? And how would I know if he’s removing small bottles from hotel rooms? No doubt if you’re a UK-based Dutch academic flitting from conference to conference, your knowledge of American hotel rooms is boundless. For the rest of us, these statements sound like enigmatic messages from another planet.

Leaving aside the fact that the questions betray the most heartfelt passions, concerns and anguish of the tiny proportion of the world that reads the NYT or Guardian, works in politics, PR, academia or the media, and who won’t rest until every Brexiteer and Deplorable has ceased spouting his unacceptable opinions and crept back under his MAGA-slimed stone, there’s quite a lot to criticise about this survey from the point of view of methodology.  

Take Q.13: “New Study: Clear Relationship Between Eye Color and Intelligence.” I’ve never seen a report on the relation between eye colour and intelligence. I know nothing about the subject, and the only honest response would be “don’t know.” But there’s no “don’t know” box to tick. Honesty is not an option.

So let’s calculate: “My interrogators are leftwing academics, trying to spot whether I’m a white supremacist, hiding my racism under the veneer of an interest in the importance of eye colour. So, False.”

Except that I’m not being asked whether there’s a relationship between eye colour and intelligence, but whether there’s a new study claiming there is. Which of course I can’t possibly know, not being a world expert on eye colour, and never having written such a paper myself. 

Q.2 consists of two completely different questions, on Control of the Media by the Military-Industrial Complex, and by The Major Oil Companies. Yet you’re supposed to answer with a simple true or false. Again, if I’d produced a questionnaire like this as a junior market research executive, I’d have been told I was in the wrong job. And these are professors at Cambridge for Gaia’s sake.

A final example: The question compilers seem not to understand the necessity of distinguishing between “all,” “some,” “any,” and similar useful modifiers. Take Q.8: “Government Officials Have Manipulated Stock Prices to Hide Scandals.”

Well, have they? Not to my knowledge. I’m not even sure what the accusation means. But am I willing to vouch for the fact that no government official has ever manipulated stock prices, or that, if they have, it wasn’t in order to hide scandals, but for the much more common purpose of making money from insider trading? These are deep waters for a survey that’s supposed to take two minutes.

None of this trash would matter if it weren’t for the fact that Misinformation is one of the great Perils of the Age, and that the paper written by the authors of this survey, “The Misinformation Susceptibility Test (MIST): a psychometrically validated measure of news veracity discernment,” aims to divide the “discerning” (i.e. credulous) sheep from the “yes, but..” sceptical goats, at the moment that the EU is installing a system of multimillion-euro fines on media that fail to ban people who give the wrong answer to surveys like this, on subjects raised in questions like these, concerning such things as the relative strength of belief in global warming, the purity of vaccines, the honesty of governments, and the infallibility of the mainstream media.

I tell myself I’m not bothered, it’s only a bunch of unusually thick Cambridge professors. But should I be?

27 Comments

  1. The fake news headlines were generated by ChatGPT. ‘Real’ news headlines came from Pew and Reuters among others:

    “Examples of real news came from outlets such as the Pew Research Center and Reuters.

    To create false but confusingly credible headlines – similar to misinformation encountered “in the wild” – in an unbiased way, researchers used artificial intelligence: ChatGPT version 2.”

    “When we needed a set of convincing but false headlines, we turned to GPT technology. The AI generated thousands of fake headlines in a matter of seconds. As researchers dedicated to fighting misinformation, it was eye-opening and alarming,” said Dr Rakoen Maertens, MIST lead author.

    The supreme irony here is that ChatGPT can therefore easily outpace the MSM in generating fake news! Journalists might soon be out of a job.


  2. Here is a fake news headline generated just yesterday by CNN. h/t Ron Clutz. This particular headline is a hardy perennial so ChatGPT would not have been employed:

    A crucial system of ocean currents is heading for a collapse that ‘would affect every person on the planet’
    https://edition.cnn.com/2023/07/25/world/gulf-stream-atlantic-current-collapse-climate-scn-intl/index.html

    These ‘real’ news headlines are spewed out daily by the MSM. I think we need a dedicated climate version of MIST in order to sort ‘real’ headlines from the AI generated fake ones.


  3. I am going to suggest in my new paper, undoubtedly funded by BIG OIL, that the majority of the questions fail the DIM DIC test.
    Does It Matter? Do I Care?
    The answer isn’t yes!


  4. Geoff,

    I haven’t read the paper (might get round to it later) but if those are the 20 questions, then your criticisms seem to me to be manifestly justified. How on earth can any sensible person answer most of them with yes or no, true or false? As you point out, one of them is divided into parts, one part of which might be true, the other false. It’s also possible for them to be real headlines, even if the headlines are stupid. Deciding that a headline is true doesn’t mean you agree with it.

    Anyway, welcome back!


  5. Real headlines can say some dumb stuff. The key is whether the story is true, and that can sometimes be hard to judge. But sometimes it’s easy.

    “Plane-shaped UFO found on Moon in 50-year-old Soviet lunar mission pictures”

    https://www.express.co.uk/news/weird/583080/Plane-shaped-UFO-found-on-Moon-in-50-year-old-Soviet-lunar-mission-pictures

    This follows the all-time classics “World War 2 Bomber Found on Moon” and the subsequent “World War 2 Bomber Found on Moon Vanishes.”


  6. “Q.19: “Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality.” Well, knowing the King of Morocco as I do, I wouldn’t put it past him.”

    Well it was that same King of Morocco who oversaw the development of the Ouarzazate Solar Power Station https://en.wikipedia.org/wiki/Ouarzazate_Solar_Power_Station that garnered acres of press publicity by the likes of the Grauniad https://www.theguardian.com/environment/2016/feb/04/morocco-to-switch-on-first-phase-of-worlds-largest-solar-plant in time for COP22.

    Also, the Beeb’s Harrabin https://www.bbc.co.uk/news/av/science-environment-35498592 gave it a puff. His video caption at 1:01 proclaims “It’s clean solar thermal power”

    Somehow, the advocates all fail to mention that that ‘clean’ solar plant depends upon oil to keep the salts molten overnight.

    Many are susceptible to misinformation-by-omission.


  7. Geoff,

    You must take me for a fool. I spotted straight away that the actual fake news here was the existence of such a study. ‘Psychometrically validated’ indeed! An obvious oxymoron if ever I saw one.

    Seriously though, ‘psychometric’ and ‘valid’ are words that are problematic enough on their own, let alone when combined, and given that such validation is the big selling point of this paper (no previous susceptibility test has undergone such validation, apparently), I felt it behoved me to look into what was involved in psychometric validation, both generally and specifically for this paper.

    It seems that there are two elements to the validation of a psychometric test: construct validity and predictive validity. The former is determined by comparing the results of the test with an independent means of measuring the psychological construct in question – in this case ‘susceptibility to believing nonsense’. Maybe there is a susceptibility gene, or a brain scan that will do the trick. People who score highly on the MIST test must also score highly with respect to this independent metric. The second is determined by looking at how well people who score highly on the test then perform when faced with information that is known to be fake. This, of course, requires an objectively verifiable determination of what constitutes fake.

    With this in mind, I started to read the paper to see how these two validations were actually performed. As far as I can see, the nearest thing to validation was a cross-reference to how the participants who conducted the MIST test also scored on other (presumably non-validated) psychometric tests aimed at looking into susceptibility to fake news:

    “After completing the 100-item categorization task [MIST], participants completed the 21 items from the DEPICT inventory (a misleading social media post reliability judgment task; Maertens et al., 2021), a 30-item COVID-19 fact-check task (a classical true/false headline evaluation task; Pennycook, McPhetres, et al., 2020), the Bullshit Receptivity scale (BSR; Pennycook et al., 2015), the Conspiracy Mentality Questionnaire (CMQ; Bruder et al., 2013), the Cognitive Reflection Test (CRT; Frederick, 2005), a COVID-19 compliance index (sample item: “I kept a distance of at least two meters to other people”: 1 – does not apply at all, 4 – applies very much), and a demographics questionnaire (see Table 1 for an overview).”

    The rest was down to statistical analysis using ‘standard techniques’.

    I’m going to have to reappraise what I did throughout my career as a quality manager, because it seems I was labouring under a false idea as to what validation entailed!

    PS. The candidate questions generated for the MIST questionnaire were evaluated by an ‘expert panel’ before inclusion. This was the best that experts could come up with?


  8. Good find Joe. How many deadlines are they going to miss before they give up and tell us it’s too late? The clock is always at one minute to midnight, we always have one last chance.


  9. There are complaints here about the absence of a “don’t know” possible response. For some questions, however, equally there should be a “don’t care” response.


  10. Exactly Alan. Which reminds me of Geoff’s final question:

    I tell myself I’m not bothered, it’s only a bunch of unusually thick Cambridge professors. But should I be?

    I actually read this post right after it was posted last night, after I’d just finished re-watching a YouTube video from three weeks back. This segment I find really disturbing (jumps right there). I think it means we should care, that Geoff should indeed be bothered.

    “We’re making mental disorder” says Helen Joyce and goes on to highlight the way universities and schools are doing so. I see this execrable ‘study’ as part of that.


  11. Question 21: Does Professor Lewandowsky actually exist, and if he didn’t would anybody notice?


  12. Thanks, Jaime, for spotting that the false headlines were AI-generated. That explains expressions like “across the airwaves” which to me suggests a Voice of America radio announcer circa 1965. Am I wrong?
    My apologies to the Dutch authors for suggesting that it was due to their imperfect command of English. It’s actually due to their imperfect understanding of – well, everything really.

There’s a discussion of the survey here, which fills in many gaps in my article:
    https://www.cam.ac.uk/stories/misinformation-susceptibility-test


  13. I took the 20-question test …

    Please categorize the following news headlines as either ‘Fake News’ or ‘Real News’.

    Good try!
    You’re more resilient to misinformation than 73% of the UK population!

    📈 Your MIST-20 results: 16/20
    Veracity Discernment: 60% (ability to accurately distinguish real news from fake news)
    Real News Detection: 70% (ability to correctly identify real news)
    Fake News Detection: 90% (ability to correctly identify fake news)
    Distrust/Naïvité: -2 (ranges from -10 to +10, overly skeptical to overly gullible)
    👉 Your ability to recognize real and fake news is good! You might be a bit skeptical when it comes to the news.
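    For anyone wondering how those four numbers hang together, they are mutually consistent under one simple scoring convention (a reconstruction from the figures quoted above, not the authors’ published code): veracity discernment = real-news detection + fake-news detection − 1, and the distrust score is the number of headlines labelled ‘Real’ minus ten.

```python
def mist20_scores(real_correct, fake_correct, n_real=10, n_fake=10):
    """Reconstruct the four MIST-20 sub-scores from answer counts.

    A sketch of one scoring convention consistent with the results
    quoted above -- not the authors' published code.
    """
    r = real_correct / n_real          # Real News Detection
    f = fake_correct / n_fake          # Fake News Detection
    v = round(r + f - 1, 4)            # Veracity Discernment
    # Headlines labelled 'Real' = real kept + fakes wrongly kept;
    # centring on n_real makes negative values mean 'overly skeptical'.
    d = (real_correct + (n_fake - fake_correct)) - n_real
    return r, f, v, d

# The sitting reported above: 7/10 real and 9/10 fake correct (16/20)
print(mist20_scores(7, 9))  # -> (0.7, 0.9, 0.6, -2)
```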

    My verdict on them: you’re as bad as I was thinking last night. Imagine being a student and looking up to you as the pinnacle of what one can become, indeed should become. The old slogan “You don’t have to be mad to work here but it helps” becomes incredibly unfunny in this context.


Joe – thanks for the above 24 July 2019 BBC link – liked the end statement –

    “The climate math is brutally clear: While the world can’t be healed within the next few years, it may be fatally wounded by negligence until 2020,” said Hans Joachim Schellnhuber, founder and now director emeritus of the Potsdam Climate Institute.

    The sense that the end of next year is the last chance saloon for climate change is becoming clearer all the time.

    “I am firmly of the view that the next 18 months will decide our ability to keep climate change to survivable levels and to restore nature to the equilibrium we need for our survival,” said Prince Charles, speaking at a reception for Commonwealth foreign ministers recently.

    what a Charlie


  15. “iVerify – the UN’s Sinister New Tool for Combatting ‘Misinformation’”

    https://dailysceptic.org/2023/07/29/iverify-the-uns-sinister-new-ai-tool-for-information-control/

    The United Nations Development Programme (UNDP) has quietly announced the rollout of an automated anti-disinformation tool, iVerify, this spring. The instrument, initially created to support election integrity, centres a multi-stakeholder approach spanning the public and private sectors to “provide national actors with a support package to enhance identification, monitoring and response capacity to threats to information integrity”.

    The UNDP demonstrates how iVerify works in a short video, where anyone can send articles to iVerify’s team of local “highly-trained” fact-checkers to determine if “an article is true or not”. The tool also uses machine learning to prevent duplicate article checks, and monitors social media for “toxic” content which can then be sent to “verification” teams of fact-checkers to evaluate, making it a tool with both automated and human-facilitated elements.

    On its website, the UNDP makes a blunt case for iVerify as an instrument against “information pollution”, which they describe as an “overabundance” of harmful, useless or otherwise misleading information that blunts “citizens’ capacity to make informed decisions”. Identifying information pollution as an issue of urgency, the UNDP claims that “misinformation, disinformation and hate speech threaten peace and security, disproportionately affecting those who are already vulnerable”. …


  16. They are literally trying to patent ‘the truth’. But it’s a special kind of patent. Anyone spreading ‘the truth’ without citing the patent holder will not be guilty of infringing copyright. However, anyone spreading misinformation which runs counter to ‘the truth’ will in effect be infringing the copyright of the ‘owners’ (who are in fact merely the duly appointed legal guardians of ‘the truth’) and thus will be subject to due legal process.


  17. Jit – thanks for the links.

    partial quotes from the paper –

“Finally, we offer concrete steps that the field of misinformation could take in the future and that experts agree are important. This includes collecting more data outside of the United States (in particular, by local academics), doing more interdisciplinary work, studying subtler forms of misinformation, studying other platforms than Twitter and Facebook, and developing better theories and interventions. To be successful, these efforts will need to be backed up by structural changes, such as infrastructure and funding opportunities to collect data in underrepresented countries, access to more diverse social media data, less siloed disciplinary boundaries in universities, and stronger incentives to follow these recommendations at the publication and funding level. Doing so would address not only the current limitations of misinformation research, but also the wider structural inequalities in social scientific research, where countries such as the United States and platforms such as Twitter are generally overrepresented considering their size (Matamoros-Fernández & Farkas, 2021).”

“The most polarizing statement was that “misinformation played a decisive role in the outcome of the 2016 U.S. election,” with 40% of experts agreeing and 39% disagreeing. This polarization is mostly due to differences in opinion between psychologists and political scientists. Political scientists were very skeptical of this claim, with 73% in disagreement and only 14% in agreement. In contrast, 54% of psychologists agreed that misinformation played a decisive role, while only 26% disagreed.”

“Instead, we focus on one of the most polarizing questions in the survey: whether misinformation played a decisive role in the outcome of the 2016 US election. It is a particularly interesting case because (i) psychologists and political scientists had diverging views on the role of misinformation in the 2016 election, and (ii) they also had diverging views on what constitutes misinformation. Finally, there are good reasons to expect that these differences in definitions of misinformation could cause differences in opinions about the 2016 election. Psychologists had much broader definitions of misinformation, including, for instance, hyperpartisan news, while political scientists had much more conservative definitions, excluding hyperpartisan news.”

Too much to cover, but the graph Bjorn Lomborg quoted must have been derived from the weird data at the end?


  18. Dougie, I haven’t read the paper, but it seems to come from an appendix. The point Lomborg was making was that they decided to keep this information in the background, when it should have been a major feature.


  19. Something Lomborg doesn’t mention is that political leaning is based on self-attribution. Thinking you’re on the left is a characteristic of academics. Just as thinking you understand the common folk is a characteristic of monarchs. The whole procedure is meaningless.   

    Appendix F: “..it should be noted that our sample size is very small (22 political scientists and 29 psychologists)..” 

    And the 2 groups were using different definitions of misinformation. 

    Academia is a clown circus. Bring back the wild animals I say, and the trapeze artists who work without a safety net.

