## Wednesday, February 28, 2007

### Who to Believe: Auto Speed and Accident Mortality

In a recent Usenet exchange, a poster informed me that "The studies show that if you hit a pedestrian at 20mph there is a 95% survival rate, at 30mph it is 80%, at 40mph it is 10% (or 20% for a small child)." I found the size of the effect implausibly large, asked for a source, and was told that it was "A paper by Ashton and Mackay."

Curious, I turned to Google. The facts, so far as I can determine them:

The paper is cited as Ashton and Mackay (1979). Many references give what appear to be the same figures, cited to a U.K. Department of Transport publication from 1992; my guess is that the 1992 publication is itself citing the paper. One webbed source gives the figures as:

"A pedestrian has a 95 per cent survival rate when hit by a car driving at less than 20mph. At less than 30mph their survival rate is 55 per cent. At 40mph, survival rates are only 5 per cent. (Ashton and Mackay 1979)"

In this version the figure is not for 20 mph but for "less than 20 mph," averaging in the collisions at five or ten miles an hour, and similarly for "less than 30 mph." Comparing "less than 30 mph" to "At 40 mph" makes the difference between 30 and 40 look a lot larger than it actually is. On the other hand, some other pages citing the figures give them as "20 mph," "30 mph," "40 mph"—with the same survival rates. The poster also strikingly exaggerated the survival rate associated with 30 mph, but that's not terribly surprising since he, like the earlier poster whose information he was attempting to clarify, was presumably working from memory. I gather the figures have been extensively used in the U.K. in attempts to persuade drivers to drive more slowly.

Which gets me to the real point of this post. Googling around, it seemed clear that the figures are routinely used by people who want to persuade other people to drive more slowly, hence people with an obvious interest in claiming that mortality rates increase rapidly with speed. I conclude that the "under 30 mph" version is probably the real one, since converting that to "at 30 mph" makes the argument look stronger. I also note that the figures are from research done nearly thirty years ago, which is at least mildly suspicious; it suggests that the people quoting those figures, with no explanation of how they were calculated, may be selecting the study that best supports what they want others to believe, not the most recent or best study. I have not been able to find any webbed version of the study itself; if a reader has actually seen it, I would be interested to know just how the figures were calculated.

Finally, I did come across one interesting bit of actual data relevant to the question:

"in Zurich, the urban area speed limit was lowered from 60 to 50 km/h [37 to 31 mph] in 1980 ... . In the year after the change in the urban speed limit there was a reduction of 16 percent in pedestrian accidents and a reduction of 25 percent in pedestrian fatalities (Walz et al, 1983)."

That implies that fatalities per accident, the relevant figure for calculating the survival rate, fell by only about 11% (0.75/0.84 ≈ 0.89) when the maximum speed went from 37 to 31 mph. It's hard to see how that can be consistent with the sort of drastic reduction in mortality that is supposed to be associated with speed reduction within the same range, according to the figures attributed to the Ashton and Mackay paper.
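Treating the two reported reductions as exact, the arithmetic can be checked in a couple of lines of Python (a back-of-the-envelope sketch only; a real calculation would need the underlying accident and fatality counts, which the quoted passage does not give):

```python
# Zurich, the year after the urban speed limit dropped from 60 to 50 km/h,
# using the percentage reductions as quoted from Walz et al. (1983).
accidents_after = 1 - 0.16    # pedestrian accidents fell 16%
fatalities_after = 1 - 0.25   # pedestrian fatalities fell 25%

# Fatalities per accident after the change, relative to before.
# The survival rate is one minus the fatality rate per accident.
ratio = fatalities_after / accidents_after
print(f"fatalities per accident fell by about {1 - ratio:.0%}")
# -> fatalities per accident fell by about 11%
```

Note that simply subtracting the two headline figures (25% − 16% = 9%) slightly understates the drop in the ratio, since the percentages apply to different bases.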

When deciding whether to believe what someone says, it is worth first asking why he is saying it and how strong his incentives are to know whether it is true. Readers who have hard data on either side of the question are invited to submit it. Why should I do all the work?

---

After writing the above, I discovered that another Usenet poster, better at using Google than I am, tracked down what appears to be the original paper. It contains no numbers corresponding to those cited, only a couple of figures with hand-drawn graphs, one showing the frequency and one the cumulative frequency of various levels of injury as a function of speed. Trying to estimate numbers from the graphs in Figure 1, the survival rate if hit at 30 mph appears to be about 75%; at 40 mph it is between 20% and zero—the latter figure is very uncertain, since at that point the width of the line is a significant fraction of its height above the axis.

Looking at the cumulative distribution (Figure 2), it appears that it is indeed the source for the 30 mph figure, since the ratio of fatalities to all injuries is about 45%, implying a survival rate of about 55%. So it looks as though the comparison being made is between the survival rate at under 30 mph and the rate at exactly 40 mph, as I conjectured.

There is, however, a small problem. The first graph shows a considerably higher survival rate at 30 mph than the second shows at under 30 mph; since survival rates fall as speed increases, that is impossible. Either I have misread the graphs—readers are invited to explain how—or the paper that all of these numbers are supposed to be based on gives results that are strikingly inconsistent, indeed impossibly so.

In looking at the graphs, note that speeds are given in km/h, not mph.

### Obesity, Caloric Restriction, and Protecting Children

Recent news stories report on a case in England where it was seriously proposed that an eight year old boy be taken away from his mother because she had let him get too fat; eventually the social workers and the mother reached an agreement by which she got to keep her son, apparently in exchange for some sort of promise to mend her ways. The argument for taking the boy away was his welfare, presumably based on evidence that being very much overweight reduces one's life expectancy. Googling around, I find that "Additional research has shown that people who are severely obese — with a BMI greater than 45 — live up to 20 years less than people who are not overweight."

One of the better supported results in the study of aging is that, for almost all species for which the experiment has been done—fruit flies are the exception—caloric restriction increases life expectancy. Keep mice or rats on a diet at the bottom edge of adequate, hungry but healthy, and they live considerably longer—almost twice as long in the first such experiment (with rats), a more modest 20-30% longer according to summaries I was able to find. While we do not know for certain that the same will hold for humans, it seems likely.

If so, then virtually every parent on the planet, including those who proposed taking Connor McCreaddie away from his mother, is guilty of child abuse by the same standard by which Connor's mother was. She provided him a diet which, arguably, reduced his life expectancy to 20-30% below what it would have been if she had followed the social workers' advice. We provide our children with a diet which, very probably, reduces their life expectancy by a similar amount below what it would be with a suitably calorie restricted diet.

This leaves us with only one question. After all of our children have been taken away to protect them from their parents, whom do we give them to?

## Sunday, February 18, 2007

### Global Warming, Nanotech, and Who to Believe

I've recently been involved in an exchange with Mike Huben in the comments section of an earlier post to this blog, having to do with global warming, hurricanes, and Chris Landsea's pulling out of the IPCC—the group that does the "official" reports on world climate—a few years ago. Interested people may want to look at our exchange and at the web pages cited.

There is a more general issue that such disputes raise: How, in controversies where most of us do not know enough to form independent opinions, one should decide who to believe. One way is to look at the incentives various people have to express the views they do.

Let me start with the case of nanotech—specifically, whether it presents dangers that call for government regulation. I've been involved, at least peripherally, for a long time and know some of the people at the Foresight Institute, the group that pushed the idea of nanotech for many years before it became suddenly fashionable. One thing I know about them is that their general political biases are libertarian. Hence when I observe them expressing serious concerns about the dangers of unregulated nanotech, I am inclined to take those concerns seriously. They may be wrong, but they aren't believing it because they want to believe it.

Mike Huben, if I understand him correctly, wants to view criticism of the evidence for global warming as the work of sinister interest groups, in particular energy companies. I suspect that to some degree he is right; clearly there are industries that will be injured if countries adopt the sorts of policies recommended by those concerned with the threat of global warming, and I expect such industries do their best to push the arguments that it is in their interest to push.

On the other hand, a scientist such as Landsea, who apparently wrote a good deal of the relevant part of the previous IPCC report, has no such incentive—unless Mike can point to evidence that he is being secretly funded by the oil companies, which nobody seems to be claiming. It's hard to see any likely reason for his actions other than the belief that his own scientific work, and that of others, was being misrepresented in order to push a political agenda. And the followup articles—the ones Mike found and pointed out to the rest of us—suggest that Landsea's view of the subject was in fact correct and that his protest was one factor in pushing the IPCC, in its most recent report, to give a mostly accurate account of the current consensus. Its summary reported that there was no clear evidence of a trend toward more hurricanes. One of the authors of the relevant part of the report, decrying misrepresentations in the media, wrote:

"We concluded that the question of whether there was a greenhouse-cyclone link was pretty much a toss of a coin at the present state of the science, with just a slight leaning towards the likelihood of such a link."

My current conclusion, looking over what I can see of the opinions of people who don't have an obvious axe to grind in either direction, is that global warming is probably real, is probably but not certainly anthropogenic, is probably not going to have large effects on size and frequency of hurricanes and is probably not going to have large effects on sea level. It is a real problem but not, on current evidence, an impending catastrophe.

Mike, and many other people, see it as a much bigger problem than I do. My reason for distrusting their conclusions is the same as Mike's reason for distrusting the conclusions of global warming skeptics: On the whole and with, I am sure, some exceptions, they appear to me to be believing what they want to believe.

I see it that way because:

1. Governments, and people in government, seek power for obvious reasons. Over the past fifty years the intellectual justification for the large expansion in government power from about 1930-1970 has largely collapsed. The belief that capitalism is inherently unstable and inefficient and must be fixed with large elements of governmental intervention and central planning is no longer taken very seriously by either the general public or economists.

Environmentalism in general and global warming in particular provide new arguments for expanded government power, new taxes, and the like. That does not mean, of course, that those arguments are wrong, but it does mean that there are a lot of people who have an incentive to support them whether wrong or right. That seems to me consistent with what I observe—what is probably a real problem being extensively exaggerated for political reasons, with a predicted sea level rise of up to 80 cm over 93 years being reported in terms of massive flooding around the world, converting the World Trade Center Site into an aquarium in the piece I commented on in my earlier post.

2. Global warming provides arguments for things that a lot of people, mostly left of center, want to do anyway—shift lifestyles away from automobiles towards mass transit, reduce consumption of depletable resources, and the like. Environmentalism is in part a real argument, in part a religion, in part an aesthetic; the second and third parts make people too willing to accept the first.

Which gets me to Mike's various queries about why I choose to align myself with the forces of evil and ignorance by expressing skepticism about the horrors likely to arise from global warming. Simply put, I am skeptical of conclusions that appear to go well beyond the scientific evidence, pushed by people who have reasons to want other people to believe them.

## Thursday, February 15, 2007

### Barack Obama, "Our Kind of Black," and Evidence on Discrimination

Listening to the radio on my way home, I heard an interesting discussion of Barack Obama centered on the idea that to some blacks he wasn't "our kind of black," not a descendant of sub-Saharan Africans brought to the New World as slaves. One caller, a black woman, made the distinction more broadly. Blacks go to Harvard and appear in other high-status contexts, but the ones who do, at least by her observation, are mostly of West Indian or post-slavery African descent. She viewed such people, including Obama, as a sort of compromise—black enough to establish the principle that blacks could succeed, could fill such roles, but not black enough to provoke white prejudice that would keep them from doing so.

One point nobody made, at least while I was listening, was the implication of this view of the situation for the conventional picture of racial prejudice. What is usually said or implied is that American blacks do, on average, worse than American whites because of discrimination based on skin color. That might be consistent with a pattern of lighter skinned blacks doing better, but it cannot explain blacks of non-slave origin doing better, especially since many of them, being recent migrants from Africa or the West Indies, are blacker than the average descendant of slaves.

The discussion reminded me of an argument Thomas Sowell offers in Ethnic America. Observing the success of West Indian immigrants to the U.S., he concludes that it provides evidence against both of the popular explanations for the current situation of Afro-Americans. It is evidence against the "official" explanation, racial prejudice, since to the eye the immigrants are at least as black as those already here. But it is also evidence against the view, surely widely held if not openly expressed, that the failure of Afro-Americans is due to genetic inferiority, since the West Indians are genetically "blacker," having a higher proportion of sub-Saharan African ancestry, as well as being visibly blacker. Sowell concludes that the difference is cultural, that the different nature of West Indian slavery resulted in a culture that produced individuals better able to succeed in our society than those produced by the culture that resulted from plantation slavery in the American South.

## Sunday, February 11, 2007

### Dum Vivimus, Vivamus

About six months ago, I got a phone call from the friend my first child is named after. He had been diagnosed with terminal liver cancer; the doctor had told him he had two months to live.

One of his many interests was vintage dance; a week or two later he spent a week at Newport teaching it, as he did every summer. When I visited a little later, I got to watch him doing Civil War bayonet drills with a group of fellow enthusiasts and helped him organize his extensive collection of materials on the history of fencing, much of which he had arranged to have webbed. The friend who had taken charge of that project dropped over with his girlfriend; they were both given a lengthy and expert lesson in waltzing. Patri and his wife had done a draft of an article on how to put on a 19th century ball; he was working on revising it.

Patri's house in the Boston area is full of stuff—real 19th century clothing, replica clothing, material for making replica clothing, a wide variety of fencing artifacts, a private museum collection specializing in 19th and early 20th century fencing, dancing, garb, and related subjects, accumulated over decades. He spent a good deal of the next few months organizing and making space. Other activities included helping his older daughter with math and putting on their annual Halloween party, complete with a crashed Quidditch player in front of the house. When we came to visit over New Year's, he was working on transferring an Escher drawing to a sheet as a stage backdrop for one of his daughter's projects.

Since then we stayed in touch with regular phone calls. I consulted with him on the history of science, the field in which he had his doctorate, for ideas to use in the fantasy novel I am currently working on, and read him new passages for his comments. "Insufficiently clever" had long been his favorite term of disapprobation, so I used him as my consultant on cleverness.

If all had gone according to plan, he would have spent the past week in Vienna, teaching vintage dance as he had done several times before. He didn't quite make it. In Heinlein's Glory Road, the hero's sword has a motto on it: Dum Vivimus, Vivamus. "While we live, let us live." Patri lived that motto up to the last day of his life.

## Saturday, February 10, 2007

### Reality Based Environmentalism

A colleague pointed me at a new blog he is involved with, which has a post that quotes an L.A. Times article discussing the various forms of irrationality that lead humans to fail to react adequately to the threat of global warming. My favorite bit:

“No one seems to care about the upcoming attack on the World Trade Center site. Why? Because it won’t involve villains with box cutters. Instead, it will involve melting ice sheets that swell the oceans and turn that particular block of lower Manhattan into an aquarium.

“The odds of this happening in the next few decades are better than the odds that a disgruntled Saudi will sneak onto an airplane and detonate a shoe bomb.”

On the current IPCC estimates, sea levels should rise about 20-30 cm over the next few decades. A little googling located a map showing the effect on Manhattan of storm surges from a hypothetical category three hurricane. The large blue areas are areas that would currently flood. The tiny red areas show the additional flooding if sea level were 37.5 cm higher than it now is. The almost invisible yellow bits show the further flooding at 47.2 cm.

Not that long ago, critics of the current administration were making fun of a statement attributed to an (unnamed) senior advisor to Bush, who supposedly described a critic as part of "the reality-based community." Judging by this particular example, if the advisor did say that, he was mistaken.

## Monday, February 05, 2007

### Time Inconsistency in MMORPGs or The Case Against the Burning Crusade

In my previous post I argued that the incentives of Blizzard, the company responsible for the very successful massively multiplayer online role playing game World of Warcraft, were on the whole aligned with those of their customers: Blizzard wants to make the game more fun so that more people will pay them more money to play it. The purpose of this post is to discuss a possible exception, analogous to some familiar problems in economics.

The big news of recent months for World of Warcraft fans was the introduction of an expansion to the game, permitting players to explore new areas, get new and better gear, increase their characters' level above the old limit. While many players welcomed it, some did not.

To see why, consider the situation of a long-term player just before the change. His character—for simplicity assume he has only one—has been at level sixty for a long time. Since no higher level was possible, he has put his time into acquiring elite gear: armor, weapons, and the like that increase the wearer's strength, magical ability, and endurance. Acquiring that gear involved spending hundreds of hours, some of it in twenty- or even forty-man raids organized and scheduled on a weekly basis and running over a period of months, designed to let the character gradually accumulate items and status that could be used to acquire very high grade equipment. Through these and other efforts—the game provided several different ways of getting high grade gear, all difficult—the character acquired equipment that both made him more effective and got him status with other players. When another player sees him and looks at his gear, most of it showing as elite purple, it is clear that he has earned it.

Now comes the new upgrade. In ten hours of play in the new area, a level sixty character can acquire as quest rewards two or three pieces of equipment, each as good as something that used to take a great deal more work to get. The player who stopped playing his first character at level sixty and switched to developing a new character—or even, as is rumored occasionally to happen, stopped playing almost entirely—can now get his character to the same level of equipment as those who put in many months pushing their level sixty characters to higher and higher levels of achievement. This makes the game more fun for some of us, but for others it feels as though all the accomplishments of the past year have suddenly turned to ashes, their treasured purples no better than the new, easily acquired, high-level greens. A priori, one cannot tell whether the change is on net a gain or a loss.

The interesting point is that it might be a gain for Blizzard even if it were a loss for the players. Sunk costs, as economists say, are sunk costs; there is no way of getting back all the time you spent acquiring elite gear. Even if your past efforts have turned out to be worth much less than you thought they were, there are now lots of new opportunities for new efforts to get new elite gear far better than what you used to have—if you are willing to spend enough time doing it. It was your past playing time that took the hit, which may not be an argument against continuing to pay Blizzard to continue to play in the future.

Readers who are economists will see where this is leading. In a full information zero transaction cost world, Blizzard would take the hit in advance; the knowledge that they were going to devalue purples in 2007 would reduce the incentive to acquire them, and the fun of acquiring them, in 2005 and 2006, and so reduce Blizzard's revenue in those years, giving them an incentive to take due account of the cost the expansion imposed on some of the most active players. But in the real world with its real limits, players do not know—Blizzard does not know—what future expansions will be like. Once 2005 and 2006 are past, Blizzard can, in effect, double cross some of its old players by devaluing their items, and it may well be in its interest to do so.
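A stylized two-period example makes the commitment problem concrete. All the payoffs below are invented purely for illustration; nothing here is based on Blizzard's actual revenue.

```python
# Period 1: players decide how hard to grind for elite gear, based on
# whether they expect an expansion to devalue it. Period 2: Blizzard
# decides whether to ship the expansion. Each entry is Blizzard's
# (period-1, period-2) revenue, with invented numbers.
scenarios = {
    "commit_no_expansion":   (100, 60),  # purples hold value; players grind happily
    "surprise_expansion":    (100, 90),  # players trusted, then were double-crossed
    "anticipated_expansion": (60, 90),   # players foresee devaluation, grind less
}

totals = {name: r1 + r2 for name, (r1, r2) in scenarios.items()}

# Ex post, expanding always pays (period-2 revenue of 90 vs. 60), whatever
# players believed in period 1. So a promise never to devalue purples is
# not credible; players come to anticipate the expansion, and Blizzard
# ends up with less than it would earn if it could somehow commit:
assert totals["surprise_expansion"] > totals["commit_no_expansion"] > totals["anticipated_expansion"]
print(totals)
# -> {'commit_no_expansion': 160, 'surprise_expansion': 190, 'anticipated_expansion': 150}
```

The ordering is the whole point: the one-time surprise is best for Blizzard, commitment is second best, and the no-commitment equilibrium, in which players rationally expect the double cross, is worst of all.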

The logic is very much like the logic of rent control. One of its undesirable consequences is to reduce the incentive to build new apartment buildings. One possible solution is for the Mayor to announce that new buildings will be uncontrolled. But once new buildings are built, they are old buildings; the same political incentives that led to the initial rent control will give the Mayor, or his successor, an incentive to expand rent control to the buildings constructed since. Unless the Mayor has some way of committing himself to a long term policy of keeping them uncontrolled, developers will discount his promise to take account of his incentive to later break it, with the result that fewer new buildings will be built. Interested readers may want to look at the relevant chapter of my webbed Price Theory.

### Computer Games, Dual Maximands, and Libertarian Paternalism

In the computer game _World of Warcraft_, each character has a quest log used to keep track of his current quests. Currently it has 25 slots, which means that if you have 25 quests you have accepted but not yet completed you cannot take another one until you deal with one of those.

Suppose someone proposes that the number be increased to 50. A possible—I think correct—argument against doing so would be that a larger quest log would make the game less fun, since the player would be less inclined to focus on a particular set of related in game projects. An obvious libertarian response to that argument would be that if a player found the game more fun with only 25 slots in his quest log he could always choose to leave the other 25 slots empty, thus imposing the old rule on himself.

Thus the restriction, assuming (as I do) that it cannot be justified in terms of the effect on other players, appears to be a form of paternalism, a restriction imposed on the player "for his own good." It is libertarian paternalism insofar as it is coming out of a voluntary transaction between the player and Blizzard, the company making the game. Yet standard libertarian arguments seem to imply that such a restriction cannot improve the game and might make it worse, hence that it should not be in the interest of Blizzard to impose it.

What is wrong with the argument is that the player is engaged simultaneously in two rather different maximization exercises. One exercise, within the game, consists of trying to play it as well as possible—to gain experience in order that his character will go up in level, to collect gold, to get useful artifacts, to achieve whatever he has decided it would be fun to try to achieve. The other exercise, seen from outside the game, is to have as much fun as possible playing the game.

From the in game point of view a policy of always leaving half the quest slots empty makes no sense, since there will be situations where accepting a 26th quest makes it easier to achieve the in game objective. Part of the fun of a game is trying to play it well, and such a policy is inconsistent with doing so. Hence, from the out of game perspective, having Blizzard limit the number of slots is a way of aligning the incentives of the in game version of the player—to play well—with the objectives of the out of game version of the player—to enjoy the game.

I took World of Warcraft as my example because it is the game I am currently involved in. The same point would be even clearer in the context of a single player game. It is, I think, obvious that such a game is sometimes improved by imposing restrictions on the player that the player could, in game, impose on himself—because imposing those restrictions on himself would make no sense from the in game perspective of the player, even though it would make sense from the out of game perspective of one choosing to play. And in the case of a single player game the question of effects on other players does not arise.

It may occur to some readers that the situation I have described in the context of a game is in some ways similar to the issue of dual maximization in ordinary life. Some people argue, persuasively, that it makes sense to think of each individual as two people, a short run pleasure maximizer and a long run self interest (for economists, present value of the future utility stream) maximizer. The second person tries in various ways to control the first in order to get him to give up short term gains for greater long term gains. Thus the long-term me makes a resolution not to have ice cream for dessert until he has lost three pounds, in the hope of imposing a cost in shame on the short-term me that will persuade him to keep the resolution, and so give him a reasonably short term incentive to eat less.

One might argue—some do—that paternalistic government policies can be justified as ways in which the state supports the long-term me against the short-term me, and are thus analogous to the "paternalism" exhibited by Blizzard towards its customers. While this makes some theoretical sense, there is one very obvious and important difference. Blizzard's incentives are in most ways aligned with those of its customers, so it usually pays the company to impose restrictions only if they make the game more fun. The government's incentives are not aligned with the interests of its citizens, long run or short run, so arguments that justify government intervention in our lives as some form of paternalism can be used—and repeatedly have been used—to mask policies that benefit most of us in neither the short run nor the long run.

## Friday, February 02, 2007

### Capitalization, Grammatical Fine Points, and Trusting Readers

In writing fiction, I have several times had to think about the question of how much I can trust my readers.

In Harald, for example, "commander" was the title used in the Imperial army for an officer commanding a legion, an army, or a fortification; like "captain" or "commodore" in the British Navy two hundred years ago, it was something between a rank and a job description. But "the Commander" was also how one particular officer, widely regarded as the best, was routinely referred to, whether or not the speaker happened to be in an army that that officer was currently commanding. A modern equivalent would be "general" used as a rank, and "the General" used to refer to one particularly prominent general.

Strictly speaking, as I understand the conventional rules of capitalization, "commander" should be capitalized when it is the nickname of a particular officer but not in the more general context. Both I and my editor were concerned that readers who did not pick up on that distinction would assume we had simply been careless and inconsistent about when to capitalize. We ended up going with "commander," uncapitalized, in both cases.

I have encountered the same problem in Salamander, the (very different) fantasy novel I am currently working on. A central plot element is a sort of magical chain reaction that lets one mage get control over the magical power of a large number of others; I (and the mage who invented it) refer to it as "the Cascade." Since it is a proper name it ought to be capitalized. But the same word appears repeatedly in closely related contexts, describing the effect that the spell produces: "As the cascade started" would mean, not "as the spell was cast," but "as the chain reaction began to happen," and one can imagine a number of other such uses. I am again in two minds as to whether to capitalize selectively, which I think is grammatically correct, or uniformly, one way or the other.

In part this is a question of how much I trust my readers to understand what I am doing. But it also involves a different question: how much am I responsible for nudging my readers towards, rather than away from, correct grammatical usage?