Some time back, there were news stories reporting on studies of several communities that showed smoking bans to be followed by reductions in heart attacks. There are now reports of a much larger study done at the NBER which finds no such effect. How can one explain the discrepancy?
The simple answer is that in some communities heart attack deaths went up after smoking bans, in some they went down, in some they remained more or less unchanged. Hence a study of a single community could find a substantial reduction even if it was not true on average over all communities.
How did the particular communities reported in the early stories get selected? There are two obvious possibilities.
The first is that the studies were done by people trying to produce evidence of the dangers of second hand smoke. To do so, they studied one community after another until they found one where the evidence fit their theory, then reported only on that one. If that is what happened, the people responsible were deliberately dishonest; no research results they publish in the future should be taken seriously.
There is, however, an alternative explanation that gives exactly the same result with no villainy required. Every year lots of studies of different things get done. Only some of them make it to publication, and only a tiny minority of those make it into the newspapers. A study finding no effect from smoking bans is much less likely to be publishable than one that finds an effect. A study finding the opposite of the expected result is more likely to be dismissed as an anomaly due to statistical variation or experimental error than one confirming the expected result. And, among published studies, one that provides evidence for something that lots of people want to believe is more likely to make it into the newspapers than one that doesn't.
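To see how easily that kind of selection produces an impressive result, here is a minimal simulation sketch (my own illustration, with made-up numbers: two hundred communities, roughly a hundred heart-attack deaths a year in each, and a ban whose true effect is zero by construction). Averaged over all communities nothing happens, but the single community with the largest observed drop looks like strong evidence for the ban.

```python
# Sketch of selection/publication bias: many communities, zero true effect.
import random

random.seed(1)

NUM_COMMUNITIES = 200    # hypothetical number of communities with bans
BASELINE_DEATHS = 100.0  # assumed yearly heart-attack deaths, before and after

def observed_deaths():
    # Yearly counts fluctuate; N(100, 10) roughly approximates a Poisson(100) count.
    return max(1.0, random.gauss(BASELINE_DEATHS, BASELINE_DEATHS ** 0.5))

changes = []
for _ in range(NUM_COMMUNITIES):
    before = observed_deaths()
    after = observed_deaths()  # the ban's true effect is zero by construction
    changes.append(100.0 * (after - before) / before)

print(f"Average change across all communities: {sum(changes) / len(changes):+.1f}%")
print(f"Most 'newsworthy' single community:    {min(changes):+.1f}%")
# Typically the average is close to zero, while the single most favorable
# community shows a drop of 25% or more from random variation alone.
```

Selecting that most dramatic community, whether deliberately or through the filters of publication and newspaper coverage, does the rest.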
10 comments:
I think your second, innocent, possibility describes what's known as "publication bias".
I guess this is how religion gets started and maintained. You never read that, "She died a horrible death in spite of the fact that everyone in town was praying for her."
Something like that seems to be true of our perception of the world through our senses. We are more attuned to patterns of stimuli that we already recognize than we are to new patterns.
Negative results are just as valuable as positive results, and yet they essentially never get published, nor do the researchers get funded again. Everyone I've ever talked to about this agrees it's an enormous waste and just plain bad scientific practice, but it continues anyway.
Who knows how many people waste their time investigating hypotheses that many others have already disposed of?
matt,
There are a few journals of negative results around now.
Part of explanation 2 is that we have so many more studies of all sorts being generated. And (as David has argued before) computers have made this possible. In the old days, when people ran regressions by hand, it was time-consuming (and thus costly) to run a regression. People would only invest the time if they thought there was some reasonable chance of showing something important. But now, with computers, we can easily run every sort of regression, often with only a few clicks of the mouse. So we get much more data analysis, and thus more results at the tail of the distribution.
Computers have created many more monkeys typing on many more typewriters.
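To put a rough number on the previous commenter's point, here is a small sketch (my illustration, not the commenter's; the number of regressions, the sample size, and the significance threshold are all made up for the example): run a few hundred bivariate regressions on pure noise and count how many clear the conventional p < 0.05 bar. For a bivariate regression, testing the correlation coefficient is equivalent to testing the slope.

```python
# Sketch of the "many monkeys, many typewriters" effect: regressions on noise.
import random

random.seed(1)

NUM_REGRESSIONS = 500
SAMPLE_SIZE = 50

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

significant = 0
for _ in range(NUM_REGRESSIONS):
    x = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
    y = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]  # unrelated to x by construction
    # For n = 50 and a true correlation of zero, |r| > ~0.28 corresponds to p < 0.05.
    if abs(correlation(x, y)) > 0.28:
        significant += 1

print(f"{significant} of {NUM_REGRESSIONS} regressions on pure noise look 'significant'")
# Expect roughly 25 (about 5%), each one a publishable-looking result from the
# tail of the distribution.
```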
Academic journals generally tend to publish original research; it doesn't really matter whether a result is positive or negative: what matters most is that the empirical result be "unexpected". This was the case with Card and Krueger's classic paper on the minimum wage. What really surprises me is the extreme sensitivity of economists to fancy results. By contrast, in other sciences such as medicine, the widespread use of meta-analyses puts less emphasis on originality and more on objective assessment of the empirical evidence.
The real lesson from this is "Don't trust any one study" of anything, especially if it tells you what you want to hear.
I realize this is a pretty old article so I won't make an extensive post. However there have been TWO very large studies based on fully public data that contradict these micro-studies. The first was one I did with Missouri researcher Dave Kuneman back in 2005. The full story on how such "negative results" were greeted by medical journals can be seen here:
http://www.acsh.org/factsfears/newsID.990/news_detail.asp
The second study was finished in March/April of this year and basically confirmed and duplicated our results. That study is evidently still seeking formal publication outside of its host, but it stands a better chance than Dave and I had: it was done by RAND and Harvard researchers for the NBER, the National Bureau of Economic Research... a bit more prestigious than "Mike 'n Dave."
You can read some interesting analysis of the NBER study here:
http://www.reason.com/blog/show/133255.html
If the British Medical Journal had agreed to correct its mistake in publishing the Helena study by also publishing ours, we might now be seeing no smoking bans at all outside of California; Delaware and NY would probably have rolled theirs back under economic pressure.
Michael J. McFadden
Author of "Dissecting Antismokers' Brains"
Updated Link Correction:
The link in my immediately preceding posting to the study at ACSH is no longer valid. The article can now be accessed at:
http://acsh.org/2007/07/a-study-delayed-helena-mts-smoking-ban-and-the-heart-attack-study/
- MJM