Thursday, January 30, 2014

Is (insert name of newspaper/blog/TV channel) Biased?

I recently came across a post with the title "Is the New York Times Biased?" My immediate reaction was to ask not what the answer was but what the question meant. There are a lot of stories out there and no newspaper can cover all of them, so how do we judge the selection of what to cover?

One basis for deciding what to cover, common to practically all news sources, is what you think your readers will find interesting, but that was probably not what the author of the post was thinking of. Another is what you see as important and informative. That will, inevitably, depend on your view of the world. If you believe that a lot of policemen are irresponsibly violent, go around smashing down doors, shooting dogs with no good reason and beating unarmed victims to death, you will see an example of such behavior as important—this is a big problem people need to know about—and informative, since it teaches a lesson about the world that you think is true. If you believe that policemen are generally responsible and restrained in their use of force, you may see the same incident as experimental error rather than data, an exception due to a single bad apple—assuming you believe it at all. Probably not worth covering.

Suppose you do cover the incident. You are likely to look for, believe, and report evidence that fits your prior views, and to be skeptical of evidence that does not. If you are sufficiently honest to report the latter, you will do so only after going to a good deal of trouble to make sure it is true, more trouble than you go to with regard to evidence that supports your beliefs. The result will be a pattern of coverage that tends to support what you already believe.

In order to conclude that the New York Times' selection of stories to cover is biased, I need to compare it with how many stories on each side are out there to be covered and how important and informative they are. My view of that will reflect my own view of the world. In my case, not only does the selection of stories by the New York Times strike me as obviously biased against free markets, so does the selection of stories by the Wall Street Journal. The Journal is more favorable to the market than the Times but not nearly as much more as I am. I conclude that to describe a news source as biased says little more than that its view of the world is substantially different from mine. 

There are two other criteria for judging news sources that are, in my view, both more objective and more useful: honesty and competence. For an example of the first, consider the Huffington Post. Some years back, when I was following news stories about how nutty various Tea Party candidates were said to be, one of them dealt with a candidate who was claimed to be opposed to the separation of church and state. I found a story on him on the Post website. It included a video of the talk the claim was based on, from which it was clear that he supported the separation but disagreed with some interpretations of what it implied, and the story was consistent with that. I concluded that while the Post might have a strong left wing bias, it was also honest.

For an example of the other criterion, consider a story I read years ago in the Wall Street Journal dealing with the adoption market. It was described as a situation where the free market did not work, since there was a shortage of babies to be adopted. The article never mentioned that this was a market with price control at a price of zero, it being illegal to pay a mother for permission to adopt her child. It is possible that this occurred to the authors and they decided not to mention it, in which case the article was dishonest. But I think it more likely that, because the authors were not accustomed to thinking like economists, the role of price in equating supply and demand simply never occurred to them. In which case the article was incompetent but not dishonest.
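The price-control point can be sketched with made-up numbers. The linear supply and demand curves below are purely illustrative assumptions, not estimates of the actual adoption market; they just show why a price capped at zero produces a shortage.

```python
# Illustrative (assumed) linear supply and demand for adoptions.
# All numbers are made up to show the mechanism, not to describe reality.

def demand(price):
    # Families willing to adopt at a given price paid to the mother.
    return max(0, 100 - 2 * price)

def supply(price):
    # Mothers willing to place a child at that price.
    return max(0, 3 * price)

ceiling = 0  # paying the mother is illegal, so the legal price is capped at zero
shortage = demand(ceiling) - supply(ceiling)
print(shortage)  # 100: at a price of zero, quantity demanded far exceeds quantity supplied
```

With these assumed curves the market would clear at a price of 20, where demand and supply both equal 60; capping the price at zero guarantees a queue of would-be adopters, i.e. the "shortage" the article reported.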

To control for bias, get your information from a range of sources. If you want it to be reliable information, try to find sources that are, so far as you can tell, both honest and competent.


Julien Couvreur said...

As probability teaches us, all knowledge and learning is conditional on prior beliefs (see [1], an awesome and accessible summary paper on probability theory). In that sense, all beliefs and assignments of confidence are subjective/conditional.

For instance, if you have very carefully verified that the die is fair (p = 1/6 for each face), you will not change your belief as the rolls come out.
If you have no reason to believe the die is fair (p is some distribution rather than a spike, because of uncertainty), then each roll will affect your belief about the die.
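Julien's die example can be made concrete with a minimal two-hypothesis sketch. The "loaded" die with P(six) = 1/2 is an assumed alternative hypothesis chosen only for illustration; the point is that a prior of certainty never moves, while an uncertain prior does.

```python
def update(prior_loaded, roll_is_six):
    """Posterior P(loaded) after one roll, comparing two hypotheses:
    a fair die (P(six) = 1/6) vs a hypothetical loaded die (P(six) = 1/2)."""
    p_six_fair, p_six_loaded = 1 / 6, 1 / 2
    like_loaded = p_six_loaded if roll_is_six else 1 - p_six_loaded
    like_fair = p_six_fair if roll_is_six else 1 - p_six_fair
    num = prior_loaded * like_loaded
    return num / (num + (1 - prior_loaded) * like_fair)

print(update(0.0, True))  # certain the die is fair: the posterior stays at 0.0
print(update(0.5, True))  # uncertain prior: one six raises P(loaded) to 0.75
```

This is the "spike versus distribution" distinction in miniature: with prior_loaded = 0.0 (a spike of certainty on fairness), no evidence can move the belief; with an uncertain prior, every roll shifts it.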

In short, there is no such thing as unbiased information, reporting, beliefs, etc.

Even further, prior beliefs can cause two people's beliefs to diverge even as they receive the same new evidence.
If a journalist reports a new study showing that the minimum wage works, you can either increase your belief in the minimum wage, or increase your belief that the journalist is crooked or the study flawed.
Which happens depends on the relative strength of your priors for the different propositions (the weakest link).


Unknown said...

This seems relevant: "Never attribute to malice that which is adequately explained by stupidity."

Regarding honesty and competence, they seem to be good criteria, but they don't cover the point you raised earlier about bias: "The result will be a pattern of coverage that tends to support what you already believe."

Even honest and competent reporting has bias, and a biased yet honest and competent news source can still paint a picture of the world that might not be accurate. For example, tragedies tend to make the headlines, and even if all the reporting is accurate, it can still leave the impression that the world is more violent or awful than it really is. We see this a lot with gun-related crimes: they frequently make headlines, but guns are hardly the only type of weapon used in crimes.

Matt Drudge also does this sort of thing with Chicago - he links frequently to Chicago related crimes and political corruption. Nearly any story he links to about Chicago has the term "Chicagoland". I wouldn't consider it either dishonest or incompetent, but it does reveal Drudge's bias against Chicago politicians. Are they really worse than politicians in other cities, such as Detroit or DC?

Power Child said...

But David, none of this addresses how you present the information once you've gotten it from the sources.

If you're a journalist, once you've got your information you first get into your journalistic communication mode. This mode consists of a set of affectations. A TV reporter will don a suit and put a strange cadence in his voice, one that viewers expect to hear. A radio correspondent will use an even more exaggerated cadence than the TV reporter. A print journalist will write his story according to a certain style and structure known to be associated with journalism.

The purpose of all this affectation is to signal to the audience "A journalist is speaking. Journalists are unbiased conveyors of the news--simply, the events of the day. Whatever they say is what happened, and what they say happened is what is important. They will not opine or preach to you; they will only convey an even-handed record of the important events." And so on.

David Friedman said...

Julien: As it happens, just before reading your comment I put a comment on Facebook explaining Bayesian probability to someone who thought a confidence result showed how likely it was that the hypothesis was true.

Tibor said...
This comment has been removed by the author.
Tibor said...

David: The thing you mention in the last comment is a problem. A friend of mine basically shares some of the (mostly justified) criticism made by the commenters here about the fact that news usually covers only what attracts attention. He said that, basically, if it is in the news it is probably very uncommon (hence attracting a lot of attention) and so not much to worry about...and you should start worrying about things once the newspapers stop writing about them. He suggests using statistics for making decisions instead.

But there is exactly the problem you mention: most people, including a lot of researchers, particularly in the social sciences (i.e. the areas where controversies most often arise), are often very bad at understanding statistics. (By the way, I am not sure why explaining hypothesis testing and confidence levels requires the Bayesian approach...even though I do believe that Bayesian statistics is even more useful than the frequentist approach for understanding and making sense of the world.) One of my professors from when I studied in Prague told me about sociologists constantly calling him with questions not far from "what does it mean that we reject the null hypothesis at level alpha?" If people like these run studies, I have a lot of skepticism about the results.

Especially since (and this is quite sad) it is not widely considered standard to include the precise methodology for picking the sample and the exact statistical method that was applied. This is standard when applied mathematicians publish papers, but usually not when social scientists do. If all you have is a vague description of how the sample was picked and then a table of results, you cannot check whether everything was done as it should be.

Also, as this is related to your previous post about the predictive power of theories, it affects that too. If a theory basically says "if A then, ceteris paribus, B", it is often hard to judge whether it has high or low predictive power, because things are rarely ceteris paribus and you end up relying on statistics again to "clean" the data of additional effects. It is less of a problem if the theory states "the global temperature will change by X degrees in the next Y years", but it is often not that simple.

For example, I recently discussed minimum wage laws with a left-wing economist and he linked me to a CEPR paper which lists a lot of studies supporting the idea that the minimum wage does not increase unemployment. To my understanding, CEPR is a left-wing Cato, and so heavily biased. I believe I could find a similar set of studies presented by Cato which would show the opposite result. A theory to explain the statistics is also included. Their argument is based on an "institutional model" rather than the "fully competitive model" (as they describe them), which does not strike me as very sensible (it suggests that a significant number of companies somehow work inefficiently until forced to increase efficiency by a worsening of conditions, such as an increase in the minimum wage...which sounds quite implausible to me). But the reason it does not strike me as reasonable is not that I observe that it does not work that way (which would again require statistics) but that I find it rather illogical, which could simply be a manifestation of my own bias. So my conclusion is that while I don't agree with those results, I am also much more wary of saying anything with too much confidence. Except maybe for maths. That is fortunately the one area where you can be absolutely sure you are right :)

Lon Mendelsohn said...

As an academic librarian, part of my job is to teach students how to find and evaluate information. One of the points that I make fairly early on has to do with the existence of bias in pretty much every information source, including (especially?) the so-called scholarly or peer-reviewed materials. One of the tasks of the researcher is to identify such biases. For that matter, the researcher needs to have a healthy awareness of his or her own biases.

Power Child said...

@Lon Mendelsohn:

Maybe in that case a good first step would be to remove the stigma from bias. Right now bias is treated as a dirty word, like being racist or something, rather than as something normal, natural, and even healthy.

David Friedman said...


I've long argued that an important intellectual skill that our educational system does not teach, indeed often anti-teaches, is the ability to judge sources of information on internal evidence.

David Friedman said...


The problem with confidence levels is that what people want to know is how likely it is that the theory is false conditional on the result of the experiment, while what classical statistics tells you is how likely the result of the experiment is conditional on the theory being false (in a particular way defined by the null hypothesis). It's easy to confuse what they want with what they get, since they sound so similar, and many people, I suspect most who talk about confidence intervals, do.

Bayesian probability gives you what you want, provided you have a prior, so it can be used to show the difference. My standard example is the hypothesis that a random coin is double-headed. You flip it twice and it comes up heads each time. The probability that the evidence for the hypothesis will be that good if it is a fair coin is only .25. It does not follow that the probability that it is a fair coin is only .25. Using Bayesian probability and a plausible prior for a random coin being double-headed, you can show that.
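The coin example can be worked through numerically. The 1-in-1000 prior for a random coin being double-headed is an assumed value for illustration, not anything from the post; everything else follows from the numbers given above.

```python
# Assumed prior: 1 in 1000 random coins is double-headed (illustrative only).
prior_double = 0.001
prior_fair = 1 - prior_double

# Likelihood of the evidence (two heads in two flips) under each hypothesis.
p_evidence_given_double = 1.0       # a double-headed coin always shows heads
p_evidence_given_fair = 0.5 ** 2    # = 0.25, the figure from the post

# Bayes' theorem: P(double-headed | two heads).
posterior_double = (prior_double * p_evidence_given_double) / (
    prior_double * p_evidence_given_double
    + prior_fair * p_evidence_given_fair
)
print(round(posterior_double, 4))  # ~0.004, so the coin is ~99.6% likely fair
```

The likelihood of the evidence given a fair coin is 0.25, but the posterior probability that the coin is fair is about 0.996: exactly the gap between P(evidence | hypothesis) and P(hypothesis | evidence) that the comment describes.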

Tibor said...

David: Well, you have to show that in general P(A|B) != P(B|A) to illustrate the mistake. And since Bayes' theorem gives the exact relationship between these two probabilities, you do need to use at least that, so yes, you do need Bayesian probability after all.

Also, this is a good example of a difference between the two approaches: (I have a couple of xkcd comics in my web browser bookmarks so I can refer to them when appropriate :) )

What do you think about the problem of judging the predictive power of theories in cases where it can only be done through statistics (and you do not have direct access to the data the statistics are based on)? Is this a problem, as I think, or am I missing something and the situation is not so hopeless? :) I can't think of a better decision process than the one I described above, and I am not all that satisfied with it.

Will McLean said...

News media are biased, but the biggest problem isn't the ideological bias the conservatives complain about.

Media are biased in favor of covering a story quickly, because viewers want their news when it's new. Information that comes in later and gives a fuller picture will be discounted accordingly.

Media are biased in favor of presenting a simple dramatic narrative. Complex stories without clear villains and heroes and a clear conclusion are less compelling for the average viewer or reader, and less satisfying for reporters and editors.

Media are biased in favor of stories that can be presented with a compelling visual image.

Media are biased in favor of stories that are cheap to present. Rerunning a video of an incident released first and calling in a pair of talking heads that you can expect to argue with each other is cheap. Sending a real reporter to the scene to find out what actually happened costs more.

David Friedman said...


I agree. The biggest bias is in favor of telling a good story, since most media are selling entertainment, not information.

Will McLean said...

One of the worst sins of journalism is posting a story based on unreliable anonymous sources that the journalist has good reason to know are unreliable, and withholding that reason from the readers. I would consider this a subset of dishonesty.

For example: "sources who were on the ground in Benghazi" should probably have been "sources who were on the ground in Benghazi but had no first hand knowledge of what actually happened"

Will McLean said...
This comment has been removed by the author.
Will McLean said...


Even putting the question of who is selling what aside, we humans just like a good story, and that's a fact.