I was moved to comment on this Granite Geek post from David Brooks ("No, the other one") of the Concord Monitor: Mistrust of scientists can hinder fight against Zika, says UNH study. I'll expand on my comment here.
Brooks begins:
As a confirmed skeptic, I agree that it’s a good take to not blanket accept the statements of others but to consider them and weigh evidence when it exists. That is not the same thing as saying “I never believe X, period” – that’s a stupidly superficial response.
Excellent attitude. Except then Brooks immediately proceeds to uncritically echo a new study emitted from the Carsey School of Public Policy at the University Near Here.
What do you have to say about that, John Arnold?
The four most dangerous words: "A new study shows..."
— John Arnold (@JohnArnoldFndtn) May 31, 2016
Before we look at the study, let's note that the "Carsey School of Public Policy" is hardly an imprimatur of unbiased policy analysis. As we discussed back in November, the school's director, Michael Ettinger, sent (Wikileaked) mail to the Hillary Clinton campaign, offering to "be helpful from my perch" as director and to connect the campaign with the "large population of influential and well-off progressives" in Portsmouth. [I'd speculate Ettinger had his eyes peeled for a plum job in the Hillary Clinton Administration, but such positions turned out to be only available in an alternative-fact universe.]
The Carsey school's benefactor, Marcy Carsey, is a reliably heavy contributor to Democratic Party causes and candidates. As is (to an appropriately smaller dollar figure) one of the study's authors, Lawrence Hamilton. Not that that necessarily means anything with respect to the study itself. But political bias has certainly been known to tilt what researchers, especially in social science, choose to study and the results they expect to get.
Maybe not in this case. But also: maybe.
Now that our skepticism detectors have been calibrated, let's move on to the "new study": The Zika Virus Threat. Subtitle: "How Concerns About Scientists May Undermine Efforts to Combat the Pandemic".
Well, there's another problem right there. A "pandemic" is something pretty dire. And (sure enough) you can find a lot of Google hits claiming that Zika might become a pandemic. They are, notably, entirely from 2016. You'll find precious few claiming that Zika was (let alone is) a pandemic. The notable exception is a New England Journal of Medicine article from February 2016 by Drs. Anthony Fauci and David M. Morens of NIH, asserting "pandemic" status for Zika. The Carsey study treats this as definitive and as reflecting the current state of affairs. But that's dubious.
Although definitions are fuzzy, the relevant Wikipedia article deems Zika an epidemic. Which is bad, but not as bad as a pandemic. Even more relevant, these two sentences sit adjacent in the article's lead paragraph:
In January 2016, the World Health Organization (WHO) said the virus was likely to spread throughout most of the Americas by the end of the year. In November 2016 WHO announced the end of the Zika epidemic.
Um. It's difficult to read that and avoid thinking that most of the fear-mongering and heavy breathing about Zika was, at best, overblown. And it doesn't inspire a lot of confidence in pronouncements from "science".
That's not to say there are no reasons to be concerned and vigilant. Obviously it's a good idea to stomp on Zika until its danger to humans is minimized, assuming that's the most efficient use of scarce epidemiological resources. But how much trust can we put in the Carsey study when the headline recycles the panic-inciting yarns from last year as fact?
Moving on, because it gets worse. The Carsey study is entirely based on an October 2016 Granite State Poll, carried out by the UNH Survey Center. Around the same time, the Survey Center was also doing election polls. Their final polling, published two days before the election, contrasted poorly with reality:
- In the Presidential race, the Survey Center predicted "51% for Clinton, 40% for Trump, 6% for Johnson, 1% for Stein and 2% for other candidates." The actual percentages were 48/47/4/1. They overestimated Hillary's winning margin by 10 percentage points.
- In the Senate race, the prediction was "52% for [Democrat] Hassan, 47% for [Republican] Ayotte, and 1% for other candidates". Actual percentages: 48/48/4. An overstatement of the winning margin for the Democrat by 5 percentage points.
- In the Governor's race, the prediction was "55% for [Democrat] Van Ostern, 44% for [Republican] Sununu, and 2% for other candidates". Actual percentages: 47/49/4, Sununu winning. A 13 percentage point miss here.
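(If you want to check my arithmetic on those misses, here's a quick back-of-the-envelope sketch in Python. The only inputs are the predicted and actual percentages quoted above; it's purely illustrative, not anything from the Survey Center.)

```python
# Back-of-the-envelope check of the final-poll misses quoted above.
# Each entry: (predicted D%, predicted R%), (actual D%, actual R%).
races = {
    "President (Clinton-Trump)":     ((51, 40), (48, 47)),
    "Senate (Hassan-Ayotte)":        ((52, 47), (48, 48)),
    "Governor (Van Ostern-Sununu)":  ((55, 44), (47, 49)),
}

for race, (predicted, actual) in races.items():
    pred_margin = predicted[0] - predicted[1]    # Democrat minus Republican, predicted
    actual_margin = actual[0] - actual[1]        # Democrat minus Republican, actual
    miss = pred_margin - actual_margin
    print(f"{race}: predicted D{pred_margin:+d}, actual D{actual_margin:+d}, "
          f"missed by {miss} points")
```

Running that reproduces the 10, 5, and 13 percentage point misses cited above.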
So there's every reason to take the roughly-contemporaneous polling here with more than a grain of salt.
Let's look at one of the polling questions:
Do you agree or disagree that scientists adjust their findings to get the answers they want? If agree or disagree: Is that strongly or just somewhat?
The results:
| Response | Percent |
|---|---|
| Strongly agree | 17% |
| Agree | 26% |
| Neutral/Don't Know | 13% |
| Disagree | 20% |
| Strongly disagree | 24% |
How did the researchers report this?
Nearly one-half of New Hampshire residents agreed with the statement, “scientists adjust their findings to get the answers they want.” These individuals were significantly less likely to trust the CDC as a source of information about Zika.
"Nearly one-half" is actually 43%. "Adjusted findings", indeed.
I think if I were asked this question, I'd respond something like this:
It's not that simple.

Scientists are human beings, and are therefore subject to bias, both conscious and unconscious. They have strong incentives to be seen as "productive", because that is the pathway to their professional success. There might be some saintly automatons out there that rise above these human failings, but they're a sight less than 100% of the field.

So I would have to be a damn fool to think that these factors cannot sometimes cause some scientists to report "answers" that don't reflect reality and can't be reproduced. In fact, there have been studies done showing this is a huge issue in psychological research. I'm not sure how that extends to other fields, but I'm relatively certain it does. Nor am I sure what you mean when you say "adjust their findings", but I think these incentives skew what gets published.
I wonder how the Survey Center would pigeonhole that response. Probably as "Agree". Shoot me.
The study further concludes:
These results suggest that the erosion of trust in scientists not only affects highly politicized issues but may also undermine efforts to curb the spread of infectious disease and protect public health.
I'm pretty sure the Carsey researchers mean this to imply that the public should be less skeptical of "scientists". I'd argue that it indicates that scientists should make efforts toward being more trustworthy.