One-sentence summary: the Consensus Crew believe that telling people there’s a 97% consensus about climate change leads to support for action on climate change, but the data shows that it doesn’t, as explained in a new paper by Dan Kahan.
Consensus as a gateway belief
In February 2015, a paper came out in PLOS ONE, The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence, by van der Linden, Leiserowitz, Feinberg and Maibach, referred to henceforth as VLFM. (Maibach is the “expert in the uses of strategic communication” who failed to foresee that his RICO letter would backfire so disastrously.)
The paper proposed a “gateway belief model”: the idea that consensus-messaging (telling people there’s a 97% consensus about climate change) increases belief in AGW and leads to “increased support for public action”. The authors claimed that their experiments, using 1104 people, provided direct evidence to support this model: “these findings provide the strongest evidence to date that public understanding of the scientific consensus is consequential.”
Participants were asked questions such as “How strongly do you believe climate change is or is not happening?” and “Do you think people should be doing more or less to reduce climate change?”, giving their answers on a 0-100 scale. They were then told that there’s a scientific consensus about climate change, and asked the same questions again.
There are some rather odd things about the paper. It doesn’t explain clearly what the authors did. It mentions a control group but doesn’t report a comparison between the test group and the control group. It claims that Republicans responded particularly well to the message, but again doesn’t report any results on this. But the strangest thing is that the effect they report is tiny: the average reported number on the key question of support for action only went up from 75.19 to 76.88. So the reported results don’t support the main claim of the paper (see Kahan’s blog post from the time the paper came out).
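To see just how small that shift is, here is a quick back-of-the-envelope check using the two means quoted above (the figure of 100 for the scale's range is the only other input):

```python
# Pre- and post-message means on the key support-for-action question,
# as reported by VLFM (quoted in the text above).
pre_mean, post_mean = 75.19, 76.88

# The reported "increase": under two points on a 0-100 scale.
shift = round(post_mean - pre_mean, 2)
print(shift)          # 1.69
print(shift / 100)    # as a fraction of the scale's range: under 2%
```

A shift of 1.69 points on a 101-point scale, with no comparison against the control group, is the basis for the paper's headline claim.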
PLOS ONE and its data policy
The journal PLOS ONE is not particularly highly regarded. It is an open access journal, which is generally thought to be a good thing, but the downside is that authors pay $1500 to the journal if their paper is published. This obviously gives the journal an incentive to accept papers.
However, a strength of PLOS ONE is its strong data policy — “PLOS journals require authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception. When submitting a manuscript online, authors must provide a Data Availability Statement describing compliance with PLOS’s policy”. The policy applies to all papers submitted after March 2014, which includes the VLFM paper.
The VLFM paper quite blatantly flouts the PLOS data policy. Its Data Availability Statement is:
Data Availability: All relevant data are within the paper.
This is completely untrue. As noted above, there is very little data in the paper at all. For some of the reported claims, there is not even any summary data, let alone any raw data. The paper does have some supporting information, but that file turns out to be just another of their papers, again containing very little data.
This open contempt for the journal’s “all data…” policy was noticed by a reader who wrote to the journal requesting that the full raw data set should be made available, and in March this year it was made public as a CSV spreadsheet at figshare.
Kahan’s new paper
Dan Kahan, who runs the Cultural Cognition research project and blog, has recently written a substantial new paper re-analysing the VLFM data, and put it on the SSRN preprint server. His title is “The Strongest Evidence to Date . . .”: What the van der Linden et al. (2015) Data Actually Show. Here is the abstract:
This paper analyzes the data collected in the study featured in van der Linden, Leiserowitz, Feinberg, and Maibach (2015). VLFM report finding that a consensus message “increased” experiment subjects’ “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it. However, VLFM fail to report study data essential to evaluating this claim. Subjects told that “97% of climate scientists have concluded that human-caused climate change is happening” did indeed increase their own estimates of “the percentage of scientists [who] have concluded that human-caused climate change is happening.” But the degree to which they thereafter “increased” their expressed levels of belief in global warming and support for mitigation did not vary significantly (in statistical or practical terms) from the degree to which control-group subjects, who read only “distractor” news stories, increased theirs. The median and modal changes in the 101-point scales used to measure these “increases” was in fact zero for both groups. In addition to reporting the responses of the control-group subjects, the paper corrects VLFM’s misspecified structural equation model and identifies other discrepancies between the data and VLFM’s characterizations of it, including ones relating to the impact of the experimental treatment on subjects of opposing political outlooks.
One of the oddest things about VLFM is their failure to make use of the control group. At the risk of insulting the intelligence of readers, a control group is a group of individuals who are not subjected to a treatment, who can then be compared with those who did get the treatment. In this case, the “treatment” was being told that there’s a consensus on climate change, whereas the control group weren’t told that. Kahan points out that VLFM claim to have compared the treatment group to the control group, but in fact they did not do so. Here’s the table of results from VLFM:
It’s not clear, but you’d assume that these columns of numbers refer to those who were given the “treatment”. In fact, what they did can only be worked out from the numbers in the raw data file. The first row, on the consensus, is indeed for the treatment group. But for the other rows, Belief in climate change … Support for Public Action, the treatment group and the control group were lumped together. This is absurd: I am not aware of any other paper in which tests were carried out on a treatment group and a control group, only for the two groups to be lumped together rather than compared.
Kahan’s paper shows that there is very little, if any, difference between the control group and the treatment group. For both groups, the median score on belief that climate change is happening when this was asked for a second time was 86. On the support for action question, 27% of those who had been given the consensus message opted for the highest possible number, 100, but so did a slightly higher proportion, 29%, of the control group.
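The comparison VLFM should have reported is simple: the median within-subject change on each question, computed separately for the treatment and control groups. A minimal sketch of that calculation, using small made-up response lists rather than the actual figshare data (the variable names and values here are illustrative assumptions, not the real file):

```python
from statistics import median

def median_change(pre, post):
    """Median of the per-subject (post - pre) differences."""
    return median(b - a for a, b in zip(pre, post))

# Hypothetical responses on the 0-100 support-for-action scale.
# These are toy numbers chosen to mimic the pattern Kahan reports:
# most subjects in BOTH groups don't move at all.
treatment_pre  = [75, 80, 100, 60, 90]
treatment_post = [75, 82, 100, 60, 90]
control_pre    = [70, 85, 100, 65, 88]
control_post   = [70, 85, 100, 65, 90]

print(median_change(treatment_pre, treatment_post))  # 0
print(median_change(control_pre, control_post))      # 0
```

Run on the real data, this is exactly the comparison Kahan reports: the median (and modal) change is zero for both groups, which is why lumping them together hides nothing except the absence of an effect.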
Kahan also criticises the “structural equation model” used by VLFM. This is a piece of circular reasoning based on their hypothesis that knowledge of a consensus feeds through into support for action, where VLFM came up with embarrassingly small effects and again neglected to make proper use of the control group.
Another curious feature of the VLFM paper is that the survey asks about political views, and the paper claims that “the consensus message had a larger influence on Republican respondents”, but no data on this is presented. In fact, this isn’t true (see figs 7 and 8 in Kahan’s paper) — the influence on support for action is equally minute for both Republicans and Democrats, the median change after consensus messaging being zero for both groups.
Here are some averages I worked out for the treatment group only on the question of support for action on climate change:
Democrats, pre-treatment: 81.8
Democrats, post-treatment: 84.3
Republicans, pre-treatment: 64.8
Republicans, post-treatment: 66.7
So the Democrats were moved by a small 2.5 points and the Republicans by an even smaller 1.9 points, the opposite of what VLFM claim. This is confirmed by the statistics in Kahan’s Table 4 that show a slight increase in the partisan divide after the consensus messaging for most of the questions. Furthermore, these numbers show that the tiny effect of the consensus messaging is completely swamped by the difference between Democrats and Republicans, which is almost ten times larger.
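The arithmetic behind that paragraph, using the four averages listed above:

```python
# Pre- and post-treatment averages on support for action
# (treatment group only), as computed in the text above.
dem_pre, dem_post = 81.8, 84.3
rep_pre, rep_post = 64.8, 66.7

dem_shift = round(dem_post - dem_pre, 1)  # 2.5 points
rep_shift = round(rep_post - rep_pre, 1)  # 1.9 points

# The partisan divide before any messaging dwarfs both shifts.
gap = round(dem_pre - rep_pre, 1)         # 17.0 points
print(gap / rep_shift)                    # roughly 9x the Republican shift
```

So the pre-existing 17-point gap between Democrats and Republicans is almost ten times the size of the largest shift the consensus message produced.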
In summary, here’s a quote from the Kahan paper just before the Conclusion section:
VLFM (p. 6) exuberantly proclaim that “all [their] stated hypotheses were confirmed”: “increasing public perceptions of the scientific consensus causes a significant increase in the belief that climate change is (a) happening, (b) human-caused and (c) a worrisome problem,” and ultimately “increased support for public action.” But if one simply looks at their data, it is hard to understand what they are so excited about.
Update 19 May:
Dan Kahan has written a detailed blog post on his new paper.