Tuesday, February 17, 2009

Role Playing Games Considered as a Creative Writing Course

My casual impression is that successful authors of fiction do not have a very high opinion of college creative writing courses. Quite a lot of them, however, seem to have developed their skills in the context of Dungeons and Dragons or similar games, as dungeon masters and/or players. A role playing game is, after all, an exercise in collaborative storytelling. The things that make it work or not work are, to a considerable extent, the same as the things that make a novel work or not work. And, unlike a creative writing course, the participants are doing it because they want to, for the fun of it, not because someone else has told them that this is what they have to do to learn to write.

I was reminded of this observation recently by conversation in World of Warcraft in which a player was discussing with others his problems in writing. It was clear, in context, that the writing was neither a school assignment nor an attempt at a publishable novel or short story, but posts to one of the World of Warcraft forums. Such posts often take the form of fictionalized accounts of things that actually happened in the game. I will not be surprised to discover, ten or twenty years in the future, that some of the new generation of authors, especially fantasy authors, got their start there.

My own non-fiction writing largely developed during the years when I was producing a monthly column for The New Guard, a conservative student magazine on which I was the token libertarian columnist. There too, I was writing not as a school exercise but because I actually had things I wanted to say.

All of which fits into my general views on the advantages of unschooling, education that takes the form of doing things one wants to do, learning things one wants to learn, over the approach to education embedded in the conventional K-12 curriculum and many college courses.

Saturday, February 14, 2009

Teaching Statistics with World of Warcraft

In an earlier post I proposed an economics course built around World of Warcraft. I have much less experience teaching statistics than teaching economics and I suspect the game is less suited for the former than the latter purpose. But it does occur to me that it provides quite a lot of opportunities for observing data and trying to infer patterns from it and so could be used to both explain and apply statistical inference. And I suspect that, as in the case of economics, application to a world with which the student was familiar and involved and to problems of actual interest to him would have a significant positive effect on attention and understanding.

Consider the question of whether a process is actually random. Human beings have very sensitive pattern recognition software—so sensitive that it often sees patterns that are not there. There is a tradeoff, as any statistician knows, between type 1 and type 2 errors, between seeing something that isn't there and failing to see something that is. In the environment humans evolved in, there were good reasons to prefer the first sort of error to the second. Mistaking a tree branch for a lurking predator is a less costly mistake than misidentifying a lurking predator as a tree branch. One result is that gamblers routinely see patterns in random events—"hot dice," a "loose" slot machine, or the like.

Players in World of Warcraft see such patterns too. But in that case, the situation is made more complicated and more interesting by the fact that the "random" events might not be random, might be the deliberate result of programming. In the real world it is usually safe to assume that the dice which you have used in the past will continue to produce the same results, about a 1/6 chance of each of the numbers 1-6, in the future. But in the game it is always possible that the odds have changed, that the latest update increased the drop rate for the items you are questing for from one in four to one in two, even one in one. It is even possible, although not I think likely, that some mischievous programmer has introduced serial correlation into otherwise random events, that the dice really are sometimes hot and sometimes cold.

A few days ago I was on a quest which required me to acquire five copies of an item. The item was dropped by a particular sort of creature. Past experience suggested a drop rate of about one in four. I killed four creatures, got four drops, and began to wonder if something had changed.

It occurred to me that the question was one to which statistics, specifically Bayesian statistics, was applicable. Many students, indeed many people who use statistics, have a very imperfect idea of what statistical results mean, a point that recently came up in the comment thread to a post here when someone quoted the report of the IPCC explaining the meaning of its confidence results and getting it wrong. My recent experience in World of Warcraft provided a nice example of how one should go about getting the information that people mistakenly believe a confidence result provides.

The null hypothesis is that the drop rate has not changed—each creature I kill has one chance in four of dropping what I want. The alternative hypothesis is that the latest update has raised the rate to one in one. A confidence result tells us how likely it is that, if the null hypothesis is true, the evidence for the alternative hypothesis will be at least as good as it is. Elementary probability theory tells us that, if the null hypothesis is correct, the chance of getting four drops out of four is only one in 256. Hence my experiment confirms the alternative hypothesis at (better than) the .01 level.
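The arithmetic in that paragraph is easy to check directly. A minimal sketch in Python, using the drop rate and kill count from the example:

```python
# Under the null hypothesis the drop rate is still 1 in 4.
drop_rate = 1 / 4
kills = 4

# Chance of getting a drop from every one of the four kills
# if the null hypothesis is true.
p_value = drop_rate ** kills

print(p_value)         # 0.00390625, i.e. 1/256
print(p_value < 0.01)  # True: significant at the .01 level
```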

Does that mean that the odds that the drop rate has been raised to one in one are better than 99 to 1? That is how, in my experience, people commonly interpret such results—as when the IPCC report explained that "very high confidence represents at least a 9 out of 10 chance of being correct; high confidence represents about an 8 out of 10 chance of being correct."

It does not. 1/256 is not the probability that the drop rate has changed, it is the probability that I would get four drops out of four if it had not changed. To get from there to the probability that it had—the probability that would be relevant if, for example, I wanted to bet someone that the fifth kill would give me my final drop—I need some additional information. I need to know how likely it is, prior to my doing the experiment, that the drop rate has been changed. That prior probability, plus the result of my experiment, plus Bayes Theorem, gives me the posterior probability that I want.

Suppose we determine, by reading the patch notes of past patches or by getting a Blizzard programmer drunk and interrogating him, that any particular drop rate has a one in ten thousand chance of being changed in any particular patch. The probability of getting my result via a change in the drop rate is then .0001 (the probability of the change) times 1 (the probability of the result if the change occurred—for simplicity I am assuming that if there was a change it raised the drop rate to 1). The probability of getting it without a change, by random chance, is .9999 (the probability that there was no change) x 1/256 (the probability of the result if there was no change). The second number is about forty times as large as the first, so the odds that the drop rate is still the same are about forty to one.
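The same calculation can be written out in Python. The one in ten thousand prior is the invented figure from the paragraph above, not anything we actually know about Blizzard's patches:

```python
# Assumed prior: chance that this drop rate was changed in the patch.
p_change = 0.0001

# Likelihood of 4 drops in 4 kills under each hypothesis.
likelihood_changed = 1.0              # assumed new rate of 1 in 1
likelihood_unchanged = (1 / 4) ** 4   # old rate of 1 in 4, i.e. 1/256

# Joint probabilities of (hypothesis and observed result).
joint_changed = p_change * likelihood_changed
joint_unchanged = (1 - p_change) * likelihood_unchanged

# Posterior odds that nothing changed, by Bayes' theorem.
odds_unchanged = joint_unchanged / joint_changed
print(round(odds_unchanged))  # 39: roughly forty to one

# Posterior probability that the drop rate really was raised.
p_raised = joint_changed / (joint_changed + joint_unchanged)
print(round(p_raised, 3))  # 0.025
```

Note how the strikingly small p-value of 1/256 coexists with a posterior probability of only about 2.5% that anything changed; the prior does most of the work.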

And I suspect, although I may be mistaken, that the odds that a student who spent his spare time playing World of Warcraft would find the explanation interesting and manage to follow it are higher than if I were making the same argument in the context of an imaginary series of coin tosses, as I usually do.

Wednesday, February 04, 2009

Incentives for Realtors

When you are looking for a house, one of the first questions the real estate agent is likely to ask is how much you can afford to spend. The prudent customer will think twice before giving an honest answer. The realtor, after all, is paid a commission based on the price of the house. If you tell her you can afford a four hundred thousand dollar house she may, if she is competent, find you the best four hundred thousand dollar house in the city. But she has little reason to look for a three hundred thousand dollar house that is almost as good.

I have a solution to this problem, although an imperfect one. What you want is not the best house you can afford but the house that gives you the maximum surplus, the largest possible difference between what it is worth and what it costs. If that is the house you want, perhaps you ought to reward the realtor on the basis of how good a job she does of finding it.

Under my system, each time the customer looks at a house he tells the realtor the highest price he would pay for it. In order to keep the benchmark from changing during the process, that price should be defined on the assumption that the alternative is renting, since otherwise it will in part depend on the other houses he has so far seen. When the customer finally buys, the realtor's commission is a percentage of the difference between the highest price the customer said he would pay and the actual price paid.
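The payment rule is simple enough to sketch in a few lines. The ten percent rate and the dollar figures below are invented for illustration, not part of the proposal:

```python
def surplus_commission(stated_max: float, price_paid: float,
                       rate: float = 0.10) -> float:
    """Commission as a fraction of the buyer's surplus: the gap between
    the highest price the buyer said he would pay and the price paid."""
    surplus = stated_max - price_paid
    return rate * max(surplus, 0.0)

# The buyer said he would pay up to $420,000 for this house
# and ended up buying it for $390,000.
print(surplus_commission(420_000, 390_000))  # 3000.0
```

The rule rewards the realtor both for bargaining the price down and for finding a house whose value to the buyer most exceeds its price, which is the point of the proposal.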

This system has two advantages. First, it gives the realtor acting for the buyer a direct financial incentive to do as good a job as possible of helping the buyer bargain the seller down. Second, it gives the realtor an incentive to find the right house, the house that maximizes the net benefit from buying it.

It is an imperfect solution for at least two reasons—putting aside the difficulty of getting the profession to adopt it. The first is that the customer has some incentive to lie, to understate the value of each house to himself in order to reduce the commission he will have to pay if he buys that house. The second is that it ignores the cost of search to the realtor. As long as a dollar's worth of extra search produces more than a dollar's increase in net benefit, it is in the joint interest of realtor and customer to continue searching. But the realtor gets only a fraction of the net benefit as commission, so it is in her interest to stop searching well before that point.

One possible explanation for the present system is that people regard a house as a good investment—the interest is tax deductible, and everyone knows, or used to, that house prices always go up. If it is a good investment, it makes some sense to buy the most expensive house you can finance, even if the actual benefits of living in the house don't justify doing so.

That argument looks less convincing now than it did a year or two back.

Global Warming, British Snow, and Filtered Evidence

In a recent online exchange, one poster suggested that the unseasonably cold weather in Britain was evidence against global warming. The response from another was that it was actually evidence in favor, that global warming meant more energy in the weather system, which led to greater variability, hence extremes in both directions.

Neither I nor the poster who made that argument knows enough about the physics of climate to judge whether that claim is or is not true; I am not sure anyone does. Suppose, however, that it is. We then have a serious problem for the ordinary citizen who is trying to figure out from the information that reaches him how seriously to take worries about global warming, who to believe. To see why, consider a simple back of the envelope calculation.

There are at least two features of the weather likely to show up in news stories: temperature and rainfall. Under our assumption, either unusually cold or unusually hot weather, either unusually dry or unusually wet seasons, count as evidence for global warming. Casual evidence for global warming need not be, usually is not, global, nor need it concern an entire year. An unusually hot summer in Australia or North America makes the news, just as an unusually cold winter in Britain does.

So how many different chances are there, each year, to generate evidence in favor of global warming? We have two weather variables, each of which can go in two directions. We have winter, summer, spring, and fall in which it can happen—although stories about temperature in spring and fall will be less striking than winter or summer—and a single month might also generate its own story. For simplicity, let's say there are five relevant time periods in a year. For further simplicity, let's say there are twenty geographical regions sufficiently salient so that unusual weather in one of them will be noticed. Multiply it out and we have four hundred different opportunities each year for the weather to turn out—somewhere, sometime—in a way that will generate a news story seen as evidence in favor of global warming. Whether or not global warming is real and significant, weather is notoriously variable, so we can be pretty sure that some of those four hundred will happen, giving the casual observer reason to believe that global warming is affecting the weather.
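The back-of-the-envelope count, and a rough sense of how certain some such story is, can be written out explicitly. The five percent chance that any one opportunity produces newsworthy weather is an invented figure, used only to show that with four hundred opportunities almost any plausible chance makes a story nearly inevitable:

```python
weather_variables = 2   # temperature, rainfall
directions = 2          # unusually high, unusually low
time_periods = 5        # winter, summer, spring, fall, plus a single month
regions = 20            # salient geographical regions

opportunities = weather_variables * directions * time_periods * regions
print(opportunities)  # 400

# Assume, purely for illustration, that any one opportunity has a
# 5% chance of producing weather unusual enough to make the news.
p_unusual = 0.05
p_at_least_one_story = 1 - (1 - p_unusual) ** opportunities
print(p_at_least_one_story > 0.999)  # True: a story is essentially certain
```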

What about evidence in the other direction? Under our assumption, unusual weather in any direction counts as evidence in favor, so for evidence against we need usual weather. There are lots of opportunities for that too. But usual weather is not newsworthy—I don't remember any stories last winter, or the winter before, reporting that Britain was having about the usual amount of snow. The news media, for obvious reasons, filter in favor of the unusual, of man bites dog not dog bites man. If all unusual weather counts as evidence on one side of the argument, that side is going to look much stronger than it is.

The problem is not limited to this particular controversy. For an older and arguably more important example, consider religion. If a mother prays for the recovery of her dying child and the child recovers, everyone she knows and many people she doesn't know hear about it. If she prays and the dying child dies, that is likely to get much less attention. Death, after all, is what usually happens to dying people. In this case too, the evidence is filtered in favor of the unusual—and the unusual, in almost any direction, is going to look like evidence in favor of religion, evidence that something beyond us is intervening.