A pattern I have observed in a variety of public controversies is the attempt to establish some sort of official scientific truth, as proclaimed by a suitable authority—a committee of the National Academy of Sciences, the Centers for Disease Control, or the equivalent. It is, in my view, a mistake, one based on a fundamental misunderstanding of how science works. Truth is not established by an authoritative committee but by a decentralized process which (sometimes) results in everyone, or almost everyone, in the field agreeing.
Part of the problem with that approach is that, the more often it is followed, the less well it will work. You start out with a body that exists to let experts interact with each other, and so really does represent more or less disinterested expert opinion. It is asked to offer an authoritative judgement on some controversy: whether capital punishment deters murder, the effect on crime rates of permitting concealed carry of handguns, the effect of second hand smoke on human health.
The first time it might work, although even then there is the risk that the committee established to give judgement will end up dominated not by the most expert but by the most partisan. But the more often the process is repeated, the greater the incentive for people who want authoritative support for their views to get themselves or their friends into positions of influence within the organization, to keep those they disapprove of out of such positions, and so to divert it from its original purpose into a rubber stamp for their views. The result is to subvert both the organization and the scientific enterprise, especially if support by official truth becomes an important determinant of research funding.
The case which struck me most recently had to do with second hand smoke. A document defending a proposal for a complete smoking ban on my campus was supported by a claim cited to the Centers for Disease Control. Following the chain of citations, it turned out that the CDC was basing the claim on something published by the California EPA, which cited no source at all for it. As best I could determine, the claim originated with research that was probably fraudulent, using cherry-picked data to claim enormous and rapid effects from smoking bans. Pretty clearly, the person on my campus who was most responsible for the document had made no attempt to verify the claim himself, merely taken it on the authority of the CDC. For more details see my post on the case.
An interesting older case involved Cyril Burt, a very prominent British psychologist responsible for early studies of the heritability of I.Q., a highly controversial subject. After his death he was accused of academic fraud of various sorts. The official organization consulted was the British Psychological Society, which concluded that he was guilty, a conclusion that many people then took, and some still take, for gospel. Subsequently, two different authors published books arguing convincingly that some or all of the charges against him were bogus. Interested readers can find a detailed discussion of the case in Cyril Burt: Fraud or Framed?, which concludes that much, at least, of the case against Burt was in error. I am not certain, but I believe that the BPS later reversed its judgement, withdrawing the claim that his work had been fraudulent. Perhaps one of my readers can confirm that—I did not manage to with a brief Google search.
It is natural enough that observers of such controversies want an authoritative answer from an authoritative source—quoting the CDC is much less work than actually looking at the research a claim is based on. But treating such answers as really authoritative is a mistake, and a pattern of treating them that way is a dangerous one.
And how about the National Cancer Institute? Here is a third case, with a link to a law review comment I wrote on it: http://www.peoplevstate.com/?p=1883
http://www.nytimes.com/2011/02/08/science/08tier.html?_r=2& I think this illustrates the problems quite well. It's good that at least some psychologists realize that there are biases that may in some cases be so significant as to jeopardize the whole scientific effort...
Actually, the article I link to shows perhaps a more serious problem than that of "official truth institutions". Institutions can be ignored. But if such an overwhelming majority of the experts in a given field is biased in one direction (as in Haidt's case of psychologists - see the article), then even in the absence of an official "truth institution" it is going to be very hard to come up with results that are meaningful. (It is absolutely unrealistic to expect the experts - or anybody, for that matter - to be unbiased.) If the biases are more or less evenly distributed, the chances are both sides (or all sides) will point out the mistakes of the other and you end up with something rather balanced. But if 90 percent or more of the people in a field are biased in one direction, then the remaining few seem like a bunch of lunatics next to the unison voice of the vast majority - even if they are actually right about a lot of things...
kinda reminds me of anarchism vs. statism...but maybe that is just my own bias :)
Thanks for the link--I've just used a quote from the article for the new quote of the month on my web page.
One example of the point you raise is the issue of the effect of child-rearing on adult personality. Judith Harris, in The Nurture Assumption, offered quite a lot of evidence that there is very little effect, contrary to what virtually everyone in the field believed, and set off a considerable controversy by doing so. Her two books give an interesting picture of the problem of false consensus in an academic field.
The trouble with your method of looking for what most experts believe is that that consensus can itself be faked (and has been in the case of climate science).
The best way to know for sure is to make the effort to know at least some of the math. The next best thing is to look at whether either or both sides have engaged in willful fraud.
I think the Climategate e-mails, as distilled in Montford's The Hockey Stick Illusion, clearly show that fraud on the part of the alarmist side.
There's a third way, too, which is almost as reliable as the first two: look for alarmism per se, and distrust those who engage in it. After all, emergencies (real or faked) are the proven best way to sell people on giving government extraordinary powers, as well as to sell newspapers and TV ads. (This is why the media appear to have a leftist bias: they naturally gravitate to anyone who screams that SOMETHING MUST BE DONE!)
I didn't suggest looking for consensus, I suggested looking at the evidence offered.
My own rule of thumb for judging sources of information is to try to find someplace where their information overlaps an area I already know a good deal about, and see how good it is there.
I'd be interested in a link to the CDC report you cite that quotes only a California EPA report which in turn gives no source at all. The CDC's web site cites numerous sources for research on second-hand smoke.
The initial cite in the document circulated on my campus was to:
It contains a 46,000 figure for annual deaths due to second-hand smoke, with no support.
When I raised the question with the faculty member apparently responsible, he offered:
Surgeon General Report on Secondhand Smoke 2006.pdf
It cited the CA EPA report, which he also offered me, and it linked to:
That offered the report at:
That contains the sentence:
"Overall, Cal/EPA estimated that about 50,000 excess deaths result annually from exposure to secondhand smoke (Cal/EPA 2005)."
which appears to be its only support for the 50,000 excess death figure. It has a variety of other assertions about health effects that I didn't follow up.
The CalEPA report is:
CA EPA 2005 Report.pdf
which can be viewed at:
The relevant figure is on Table 6, on page 13.
There are two footnotes, but they refer to vital statistics data, not to studies of the effect of second hand smoke.
Feel free to follow through the literature for yourself. If you find an actual cite to a study that claims empirical data on the effect of second-hand smoke, by all means let me know.
I suspect that the sort of "recursive logic" your campus seems to be using is pretty common. It seems to me that John Stossel showed this regarding low-sodium recommendations in his TV special, "Junk Science", several years ago (actually, I think a lot of junk science amounts to weak data that has been repeated so often it becomes "common knowledge").
Early in my career in real estate development, I ran across this phenomenon in a local jurisdiction (Fairfax County, VA). Fairfax had banned underground storm water detention facilities (tanks that temporarily retain rainwater to prevent stream/river flooding) on the grounds that their risk management consultant had claimed they were dangerous. I tracked down the risk management report's author, and was told by him that he had actually been told by the county staff that such facilities were dangerous, so he recommended banning them in his report. Other neighboring jurisdictions actually require such facilities and have never found any safety problems related to them.
I had read the case against Burt in Thomas Körner's book Fourier Analysis, where it was analyzed as a likely case of fraud. From the short review of the facts given in that book it seemed likely that there may have been fraud involved, just based on the statistical analysis, but even if Burt wasn't a fraud, any conclusions reached by him were likely to be of very little value. Did the book you cited seem to say that Burt's research was still valuable, or simply that he didn't fake the data?
The book in question had chapters written by different people, so it didn't have a conclusion of its own. The basic result, as I remember it, was that none of the charges that had been made could be well supported. The charge that he had invented research workers and at least one co-author was provably false, since the people in question eventually showed up.
I don't think anyone seriously argued that he was incompetent--as best I can tell, he was a fairly major figure in the history of statistics. The central charge was that he published a series of articles which claimed to analyze an increasing amount of data and that at least the later ones were fraudulent, based on invented data.
The main evidence offered was that at one point he added a bunch of data but reported exactly the same correlation coefficient as before, to three figures. That's very unlikely, but I think it's much more likely as carelessness in revising the table in his article than as fraud--he was pretty old by that time. It's also argued that he couldn't have found that many more separated identical twins--but part of that argument was the supposed nonexistence of his research assistants, who turned out to exist.
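That improbability is easy to check with a quick simulation (a minimal sketch of my own, not from the Burt dispute: the sample sizes, the assumed true correlation of about 0.8, and the normal distributions are all illustrative assumptions, not Burt's actual figures):

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

trials = 10000
matches = 0
for _ in range(trials):
    # An initial sample of 20 correlated pairs (true r roughly 0.8).
    xs = [random.gauss(100, 15) for _ in range(20)]
    ys = [x * 0.8 + random.gauss(0, 9) for x in xs]
    r1 = corr(xs, ys)
    # Add ten more pairs drawn from the same population and recompute.
    new_xs = [random.gauss(100, 15) for _ in range(10)]
    xs += new_xs
    ys += [x * 0.8 + random.gauss(0, 9) for x in new_xs]
    r2 = corr(xs, ys)
    # Does the coefficient still round to the same three decimal places?
    if round(r1, 3) == round(r2, 3):
        matches += 1

print(f"identical to three decimals in {matches} of {trials} trials")
```

Under these assumptions the coefficient agrees to three decimals only a percent or so of the time, which is why an exact match after enlarging the sample looked so odd, and why carelessness (reusing the old table entry) remains a competing explanation to fraud.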
The book, however, introduced some new evidence which the author of at least one chapter thought hard to explain away. The problem is that, since it was new, I don't know whether someone else could have offered an innocent explanation.
In any case, his results on identical twins have been replicated in independent studies by other people, which is the best test--and strong evidence that at least the initial work was real.
I discussed the case with a friend who was at one point the chairman of the statistics department of a top university, and he thought that Burt was innocent.
Incidentally, according to at least one account, it was one of the people who made the initial accusation who, before doing so, advised Burt's housekeeper to destroy his papers--which looks distinctly suspicious.
Part of the context of the whole dispute is that Burt was producing results a lot of people didn't want to believe--that IQ was to a large degree heritable. And he was involved in other, related controversies--he was, I think, a supporter of the 11+ test system. So there were reasons why people would want to claim he was a fraud, even if he wasn't.
I think a good rule for evaluating such claims is to "follow the money."
The Roman Catholic church fails all the sensible tests:
1. Authoritarian decrees.
2. Fantastic claims totally wanting in evidence.
3. Filthy lucre from false claims.
By the way Haidt has some interesting research going on here:
It has its problems (I think some questions are at least ambiguously formulated, so I really don't know what to answer) and of course what I see as a fundamental problem with such questionnaires: if you roughly know which answer leads to which result, you are likely (consciously or not) to answer in the way that gives you a result you want to see, not a result that is accurate. And since some qualities are held by a vast majority of people to be desirable, while others are not, I think there is no reason to expect that "noise" to average out.
I still think it is interesting to go through, though.
Oh and I just got reminded of this (loosely related) thing:
It is an old experiment in which a "game theory expert" (really an actor who knows nothing about the subject) gives a lecture, in front of Ph.D. students of medicine, on applications of game theory in medicine.
The actor, however, just chooses words that sound "scientific" and constructs sentences that "seem smart", but the whole thing is nonsense and has no actual meaning. He then even answers the students' questions--to their satisfaction.
The students were asked to evaluate the lecture on its usefulness and other aspects (before they were told it was an actor in front of them). Almost all of them gave it a very high rating. The whole thing was recorded on video.
If you haven't already, you might want to read Prof. William Starbuck's 2006 book The Production of Knowledge, as it explores the fundamental reasons why the sort of thing you describe occurs (although germane mostly to the social sciences, the faults and errors he describes apply equally to much research that one would consider as being within the physical sciences).
Very interesting comments from everyone. This is a subject that has fascinated me for a long time: in a life where I don't have enough time to evaluate all the claims that come to my attention, whom should I believe?
My own experience is that, every time I thoroughly researched a topic to make my own opinion, I found my conclusions quite different from what the official opinion would be, except when there was no political or monetary consequence attached to the statement.
For example, I trusted the physicists when they said they had seen the Top quark, although I could not replicate their experiment, as my previous study of the physicists' claims showed me they tend to tell the truth. Conversely, I don't trust car salesmen and politicians (my apologies to car salesmen who at least are accountable at some point.)
As to Tibor Mach's statement of 03/08, I have been a witness of what would qualify as the reverse experiment. At my engineering school, a speaker was once invited who contested the theory of Relativity. There were at least 200 people attending the conference. You could hear the students snicker and giggle during the conference, making fun of the speaker's claims. Yet when the question period came, no student could ask a question that would destroy the speaker's own alternate theory: he answered them all. But that did not stop them from laughing at him in the end. I could not at the time ask questions because I didn't understand the topic enough. Yet my fellow students disappointed me quite a lot.