Saturday, May 30, 2009

Carbon Taxes, Cap and Trade, and All That

Suppose something some people do has substantial, widely dispersed, negative effects on other people; the standard example is air pollution. Since I am not bearing the cost of my actions, I may do things, such as light a fire in my fireplace, even when the net cost, to me and others, is larger than the benefit, and fail to do things, such as install a scrubber in the smokestack of my power plant or switch to low sulfur coal, even when the benefit of doing them would be larger than the cost. The obvious solution is direct regulation. Have some government agency tell power plants to install scrubbers, use low sulfur coal, and in various other ways modify their acts to take account of external effects.

There are at least two things wrong with this solution. The first is that the agency is very unlikely to know enough to do it right. How is it to figure out what the lowest cost way is of reducing pollution—scrubbers, low sulfur coal, both, or one approach for some power plants and another for the rest? How, indeed, is it to figure out when pollution ought to be reduced and when the cost of the reduction is more than the benefit? Carbon dioxide, after all, is said to be a pollutant via global warming; it does not follow that we should all stop breathing.

The second problem is that even if the agency had the information, it might not be in its interest to use it correctly, to require reductions of pollution if and only if they are worth their cost and by the least costly means. Such an agency is run, after all, not by philosopher kings but by bureaucrats appointed by elected politicians. If states that produce high sulfur coal are politically influential, it might be wise to impose regulations designed to discourage utilities from switching to low sulfur coal even when that is the lowest cost way of reducing the amount of air pollution they produce (a real world example, as it happens; see Regulation Magazine Vol. 14 no.4). If the steel industry has made generous campaign contributions to the majority party, perhaps it should be spared, in the national interest, the rigors of regulation of its emissions.

An alternative which many economists prefer is a "Pigouvian Tax," named after A. C. Pigou, the economist who came up with the idea. Let the agency determine the external cost imposed by each ton of sulfur dioxide released into the atmosphere and charge emitters that amount as a tax per ton. Firms are free to pollute as much as they want, provided they pay for their pollution. It is then in the private interest of each firm to find the lowest cost ways of reducing its pollution and implement them, provided that the cost of each reduction is less than the tax it saves.

This does not eliminate all of the problems, since the agency still has to measure cost and still may be biased in its estimates of the differing costs of different effluents in different places by political considerations. But it looks like a considerable improvement on direct regulation.

A very similar alternative is currently being discussed under the label of "cap and trade." The government sets the total amount of effluent that may be emitted—say 80% of the pre-regulation level. Each firm is issued permits for its share of that amount, 80% of the sulfur dioxide it produced last year. A firm that reduces its emissions below that level can sell the excess permits to other firms; a firm that pollutes above the level it has permits for must buy additional permits. In effect, the price of the permit is the equivalent of the effluent fee under Pigouvian taxation. A firm that pollutes more than its share must pay that price for each additional ton of effluent. A firm polluting less than its share gives up, for each ton it emits, the opportunity to sell a permit for that price. Once the level of emissions is set, the market determines the cost of reducing emissions to that level and sets the required tax.
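The equivalence between the permit price and the effluent fee is easy to see in a toy model. The sketch below is illustrative only: the three firms and their linear marginal abatement costs are invented, and real abatement costs are of course not this tidy.

```python
# Toy model: three hypothetical firms with linear marginal abatement costs
# mc_i(q) = a_i * q, where q is tons of emissions cut. Facing a price per ton
# (a tax, or the going permit price), each firm cuts until mc_i = price.

firms = {"A": 1.0, "B": 2.0, "C": 4.0}   # invented cost slopes a_i

def abatement_at_price(price):
    """Each firm abates q_i = price / a_i, equating marginal cost to price."""
    return {name: price / a for name, a in firms.items()}

def clearing_price(total):
    """Permit price at which total abatement just meets the cap:
    sum(price / a_i) = total  =>  price = total / sum(1 / a_i)."""
    return total / sum(1 / a for a in firms.values())

def total_cost(alloc):
    """Abating q tons at mc(q) = a*q costs the triangle area a * q^2 / 2."""
    return sum(firms[name] * q ** 2 / 2 for name, q in alloc.items())

cap_abatement = 7.0                      # tons the cap forces the industry to cut
price = clearing_price(cap_abatement)    # market-set permit price = implied tax
market = abatement_at_price(price)       # least-cost division of the cut
uniform = {name: cap_abatement / 3 for name in firms}  # equal cuts by decree

print(f"permit price (implied Pigouvian tax): {price:.2f}")
print(f"cost of cut, permit market: {total_cost(market):.2f}")
print(f"cost of same cut, uniform decree: {total_cost(uniform):.2f}")
```

With these numbers the market concentrates the cut on the firm that can abate cheaply and reaches the cap at lower total cost than equal cuts imposed by decree, which is the point of both schemes.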

In a system run by philosopher kings, there is one important difference between the two alternative versions. A straight Pigouvian tax requires the agency to estimate the damage done by an additional ton of the pollutant. A cap and trade system requires it to estimate what the optimal amount of the pollutant is, how far it is worth reducing it. One can imagine some situations where the former information is more readily available, some where the latter is.

In a system run by elected politicians, there is another large difference—who gets the money. With an effluent fee, the tax goes to the government. With cap and trade, the payments are by one firm to another. Firms that can easily reduce their output of effluent end up richer than they would be without the scheme, since they can sell their excess permits to other firms for whom reduction is more difficult. Indeed, the regulated industry as whole may end up richer as a result of the regulation. Depending on the details of the situation, the increased income from the higher price for (say) electric power due to the higher cost due to the regulation might be more than the cost to the firms of reducing their emissions to meet the limit.

Which brings us to current proposals for cap and trade of carbon dioxide as a solution to problems of global warming. In a system run by philosopher kings, the only important difference between that and a carbon tax is the information required to set the level of emissions or amount of tax. In the real world, cap and trade has the (political) advantage of getting large parts of the regulated industry in favor of the regulation and thus eliminating a lot of potential opposition. The cost, possibly more than a hundred percent of it, is shifted to the customers, which is to say the general public—a dispersed and politically impotent interest group. And if the government retains considerable flexibility in just how emission permits get allocated, it can use some of them to buy political support from other groups or to reward them for past support.

Cap and trade has a political disadvantage as well, of course; a carbon tax would bring in lots of money. But that money would show up in the budget, get labelled taxation, and so make it harder for the administration to deny that it is raising taxes to pay for its programs. And much of it might end up spent to get the carbon tax passed by buying off organized interest groups that were potential opponents.

Think of the cap and trade version as eliminating the middle man.

Tuesday, May 26, 2009

The Rising Marginal Cost of Originality or What is Wrong with Modern____

Suppose you are the first city planner in the history of the world. If you are very clever you come up with Cartesian coordinates, making it easy to find any address without a map, let alone a GPS—useful since neither GPS devices nor maps have been invented yet.

Suppose you are the second city planner. Cartesian coordinates have already been done, so you can't make your reputation by doing them again. With luck, you come up with some alternative, perhaps polar coordinates, that works almost as well.

Suppose you are the two hundred and ninetieth city planner in the history of the world. All the good ideas have been used, all the so-so ideas have been used, and you need something new to make your reputation. You design Canberra. That done, you design the Combs building at ANU, the most ingeniously misdesigned building in my personal experience, where after walking around for a few minutes you not only don't know where you are, you don't even know what floor you are on.

I call it the theory of the rising marginal cost of originality—formed long ago when I spent a summer visiting at ANU.

It explains why, to a first approximation, modern art isn't worth looking at, modern music isn't worth listening to, and modern literature and verse not worth reading. Writing a novel like one of Jane Austen's, or a poem like one by Donne or Kipling, only better, is hard. Easier to deliberately adopt a form that nobody else has used, and so guarantee that nobody else has done it better.

Of course, there might be a reason nobody else has used it.

Sunday, May 24, 2009

Two Great Depressions

Current economic problems have brought back beliefs and arguments concerning the Great Depression. One account I have encountered goes roughly as follows: "After the stock market crash, Hoover cut taxes and government expenditure in order to revive the economy. Instead of reviving, things got worse and worse until Roosevelt came in, introduced the New Deal, and so eventually ended the Depression." The obvious conclusion is that we should deal with present problems, as the administration is dealing with them, by vastly expanding government expenditure.

The first half of this is a simple factual claim, easily checked. The current Statistical Abstract of the United States is conveniently webbed, but does not go back far enough to provide figures for expenditure under Hoover; Historical Statistics of the United States, so far as I can tell, is webbed only in a for-pay version. But earlier volumes of the Statistical Abstract are also webbed. The 1935 volume contains figures both for federal expenditure and for prices in the relevant years. Federal expenditure, in thousands of dollars, was:

1929 3,298,859
1930 3,440,269
1931 3,779,868
1932 4,861,696

Far from being cut, it increased every year.

These figures greatly understate the increase, because during this period prices and national income were falling. Measured in purchasing power rather than in dollars, federal expenditure roughly doubled over the final three years of Hoover's administration. Relative to GDP, it nearly tripled.
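The deflation arithmetic can be sketched in a few lines. The expenditure figures are the ones above; the price index values are approximate BLS consumer price index annual averages (1982-84 = 100), supplied here for illustration, so treat the exact ratios as rough.

```python
# Sketch of the deflation arithmetic. Expenditure is from the 1935 Statistical
# Abstract (thousands of dollars); the CPI values are approximate BLS annual
# averages (1982-84 = 100), used here only to illustrate the calculation.

expenditure = {1929: 3_298_859, 1930: 3_440_269, 1931: 3_779_868, 1932: 4_861_696}
cpi = {1929: 17.1, 1930: 16.7, 1931: 15.2, 1932: 13.7}

real = {y: expenditure[y] / cpi[y] for y in expenditure}   # constant-dollar terms
growth = real[1932] / real[1929]

for y in expenditure:
    print(y, f"nominal {expenditure[y]:>9,}  real index {real[y] / real[1929]:.2f}")
print(f"real expenditure, 1932 vs 1929: x{growth:.2f}")    # roughly a doubling
```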

The second half of the story is trickier, because it depends on assumptions about what would have happened without the New Deal—that the Depression would have continued forever, or at least for much longer. That seems implausible, judging by other depressions and recessions; the Great Depression was unusually long, not unusually short. Arguing that since the economy eventually recovered the New Deal was a success is like arguing that if the doctor bleeds the patient and the patient survives and eventually recovers, the treatment was a success.

We can learn a little more by looking at a different Great Depression—the one that didn't happen. From 1920 to 1921, the consumer price index fell by 10.8%, more than in any year of the Great Depression; it fell another 2.3% in the next year. Unemployment rose to about its 1931 level. Looking just at that data, it's obviously the start of a depression.

Harding did what Hoover is supposed to have done, reducing taxes and government expenditure. By 1923 the recession was over. It was the Great Depression that didn't happen.

Thursday, May 21, 2009

Fairness in Sports: A Modest Proposal

To increase agricultural output on a fixed amount of land, you increase labor and fertilizer. If a nuclear limitation treaty limits the number of missiles, a nation that wants to maintain its arsenal does it by increasing the size of warheads, the number of warheads per missile, or both.

Consider the same principle in the context of sports. A football team faces an artificial constraint on the resources used to produce victory—it is only allowed eleven men on the field at a time. It responds, naturally enough, by making them big men. Very big.

The obvious solution to the manifest unfairness of this result—blatant bigism—is a small change in the rules. Instead of limiting the number of players on the field, limit their total weight. A team is allowed to have up to 2400 pounds of players on the field at one time. That can be a ten man team averaging 240 lbs, a fifteen man team averaging 160 lbs, or any other mix that produces the same total. For basketball, perhaps the constraint should be on total height instead of total weight.

It should be interesting.
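A referee's check under the proposed rule is a single sum. The rosters below are invented for illustration:

```python
# The proposed rule: legality depends on total weight on the field, not on
# head count. Player weights are in pounds; the rosters are made up.

CAP = 2400

def legal(roster):
    """A roster of player weights fits if it sums to no more than the cap."""
    return sum(roster) <= CAP

print(legal([240] * 10))   # ten players at 240 lbs: 2400, legal
print(legal([160] * 15))   # fifteen players at 160 lbs: 2400, legal
print(legal([300] * 9))    # nine players at 300 lbs: 2700, over the cap
```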

Tuesday, May 19, 2009

Does Free Love Promote or Impede Successful Marital Search?

Listening to satellite radio while driving, I occasionally come across the Cosmo channel. The assumed target audience seems to consist of women engaged simultaneously in a search for a long-term partner—a "keeper"—and a dating pattern involving a good deal of relatively casual sex. This raises an interesting question: does the latter make success in the former more or less likely? It is an old question, but the sexual revolution may provide new evidence.

On theoretical grounds, the answer is unclear. There are two obvious arguments against the modern pattern. The first is that, in a society where sex is readily available without marriage, the incentive to engage in a protracted search for a long term partner, costly in time, effort, and emotion, is much lower than in a society with more traditional mating patterns. The second is that sex, at least in humans, has emotional consequences—you feel differently about someone after sleeping with her (or him) than before. Arguably, in humans, one consequence is to reinforce emotional bonds that promote long term pairing—plausible both on subjective evidence and on obvious evolutionary grounds, given that human infants require extended care. That makes it at least plausible that the bonds forged with your fiftieth sex partner will be weaker than the ones forged with your first or second, making marital stability, when the keeper is finally found, less likely.

There are two obvious arguments in the other direction. The first is that if individuals very much desire sex and cannot get much of it outside of marriage, there is an incentive to marry in haste and repent at leisure, wed the first moderately acceptable partner willing to say yes. The second is that a successful sexual relationship makes a successful marriage more likely, so the parties are less likely to regret their choice if they try before they buy.

My question is whether anyone has produced good empirical data to tell us which side of the argument turns out to be more important. It is not enough to observe that divorce rates have gone up along with rates of non-marital sex. That might be due to other causes, most obviously easier divorce. Better evidence would be some sort of large scale longitudinal study, aimed at distinguishing the success in establishing marital (or long term non-marital) partnerships of people as a function of their prior willingness to engage in relatively casual sex. Even that, of course, has problems, since we don't have any way of creating a true controlled experiment—but an ingenious researcher might find a natural experiment that came close.

Monday, May 18, 2009

A Healthcare Idea: Insurance That Doesn't Pay

One obvious, arguably ideal, solution to the problem of paying for healthcare is for the consumer to pay for ordinary costs out of pocket, just as we pay for food and housing, while using insurance to cover extraordinary costs, just as we use it in other contexts. That is not the usual pattern of medical insurance in the U.S., however—typically it covers a large part of the cost of annual doctor's visits, ordinary dental care, and the like. That arrangement has some obvious disadvantages. The customer has no incentive to hold down costs, making it necessary for the insurance company to play a major role in the decision of what services the customer gets from what providers.

One reason for this arrangement is that health care you pay for directly is paid with after-tax dollars, health insurance via your employer with before-tax dollars, giving a substantial advantage to the latter. But there may be another reason as well.

From time to time one sees accounts of a patient who was billed for something and discovered that the cost was much more than what the hospital charged an insurance company for the same treatment. Such anecdotes are not, by themselves, very strong evidence, and I do not know how common the pattern is—I have also seen stories in which the discrepancy was in the other direction, after some bargaining by the patient. But it does make a certain amount of sense. The insurance company is a well informed purchaser bargaining in advance on behalf of lots of customers, so it wouldn't be surprising if it were able to get better terms than the hospital offers to an individual who requires treatment.

This suggests an obvious possibility—"insurance" companies whose main function is not insurance but negotiating prices. Within the current framework, that would mean a policy with a very large deductible, large enough so that the customer ended up paying for all ordinary medical care, giving him an incentive to hold down its cost as best he could. In the limiting case it would mean a company that provided no insurance at all, simply the service of negotiating for its customers, in advance, prices for medical service, and perhaps offering customers advice as to what they ought to purchase from what provider.

To what extent do such arrangements already exist? Is there a problem with them that I am missing? They don't deal with the implicit tax subsidy issue, but they would seem to solve the other problem.

Wednesday, May 13, 2009

Sea Ice II: Reading Graphs

My previous post contained a graph of northern hemisphere sea ice from which it seemed clear that the past trend of declining area, rather than continuing as the JPL claimed, had reversed. The first comment on that post claimed that I had cherry picked my data, offering as evidence another graph presenting the information in a different form which, the commenter said, supported the JPL claim.

As I pointed out in my response, his graph did not support the claim either, although it did not contradict it as strikingly as mine did. That raises the question of why two graphs from the same web page presenting the same information looked so different. While it might be possible to answer it by searching out additional information on the web site, I thought it would be more interesting to see what one can figure out simply by looking at the graphs. I will refer to his graph as graph A and mine as graph B. Both are shown below.

[Graph A and Graph B images]
Graph A shows swings within each year over a range of about ten million square kilometers, presumably reflecting expansion and contraction of the area of sea ice with the seasons. The variation in Graph B within each year is much smaller and less regular. Pretty obviously that means that the data in Graph B is seasonally adjusted. The figure for, say, January of 2009 shows how much higher or lower the area of sea ice was than the average for Januaries over the reference period, eliminating the seasonal variation. That fits the title. The graph is showing, not the area, but the anomaly, the difference between the area at that time and its previous average.

That way of drawing the graph makes it easier to see trends, since changes over the years aren't masked by the much larger changes within each year. A more subtle difference is that, looking at graph A, one loses almost all of the relevant data. The month to month difference is so large that one cannot tell by eye whether sea ice area in January of 2008 was more or less than in 2009. All you can see is that sea ice area in the month when it was largest was about the same in 2008 as in 2009, and sea ice area in the month when it was smallest was a little larger in the second year. You are left trying to interpret trends based only on the extremes of the cycle—data from two observations a year instead of about twelve.

Graph B, by eliminating the seasonal effect, gives a much clearer picture. If sea ice area was larger in ten months of the second year than in the corresponding months of the first year and lower in two, that will show as a rising trend with a couple of downticks—even if the low months happened to also be the extremes. What the commenter viewed as noise on Graph B—the shifts up and down month by month—is information, information obscured by the large seasonal variation on Graph A.
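The transformation Graph B applies can itself be sketched in a few lines. The monthly series below is invented, with a large seasonal swing and a small year-to-year trend, to show how subtracting each month's long-run average strips out the former and exposes the latter.

```python
# Sketch of the "anomaly" transformation: subtract each calendar month's
# long-run average from that month's observation, removing the seasonal
# cycle. The numbers are invented, not actual sea ice data.

def monthly_climatology(series):
    """Average value for each calendar month over the reference period.
    `series` maps (year, month) -> area."""
    totals, counts = {}, {}
    for (year, month), area in series.items():
        totals[month] = totals.get(month, 0.0) + area
        counts[month] = counts.get(month, 0) + 1
    return {m: totals[m] / counts[m] for m in totals}

def anomalies(series, climatology):
    """Observation minus the average for that calendar month."""
    return {(y, m): area - climatology[m] for (y, m), area in series.items()}

# Two invented years: a swing of 10 within each year, a trend of 0.5 between them.
series = {(2007, m): 10 + 5 * (1 if m <= 6 else -1) for m in range(1, 13)}
series.update({(2008, m): 10.5 + 5 * (1 if m <= 6 else -1) for m in range(1, 13)})

clim = monthly_climatology(series)
anom = anomalies(series, clim)
# Raw values swing by 10 within each year; the anomalies expose only the trend.
print(sorted(set(round(a, 2) for a in anom.values())))   # [-0.25, 0.25]
```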

[To see the webbed graphs, which are larger and clearer than the images here, click on the Graph A and Graph B links]
As a result of a comment to this post, I found a graph showing April sea ice cover from the National Snow and Ice Data Center, the source that the JPL article cites. Here is the graph. It shows ice extent rising for the past two years.

(Later addition)

The data from the same source for May are now in. May sea ice extent has risen for the past three years, bringing it back to about its level in the late 1980s and early 1990s.

On the other hand, poking around the same source, I found the graph for March, which provides at least a little support for the JPL comment I have been attacking, which was published in April. It shows March sea ice rising a lot for two years, but falling a little in the most recent year. To describe that as "continues to shrink" strikes me as clearly misleading, but it's an exaggeration to describe it as a flat lie.

Tuesday, May 12, 2009

Global Sea-ice, Deceptive Reporting, and Truthful Lies

"The latest Arctic sea ice data from NASA and the National Snow and Ice Data Center show that the decade-long trend of shrinking sea ice cover is continuing."

That statement, from the JPL, is dated April 2009. The actual data for northern hemisphere sea ice, measured as the deviation from its 1978-2000 mean, are shown below. The source is "The Cryosphere Today," a web site of the Polar Research Group, Department of Atmospheric Sciences, University of Illinois at Urbana-Champaign, not a site devoted to critics of global warming. If your browser doesn't show the complete graph, click on the image to get to the webbed original.

Looking at the graph, the pattern is pretty clear. For about ten years, from 1997 to late 2007, the area of sea ice was decreasing. That trend then reversed, and the area has now been increasing for more than a year. The claim quoted from the JPL is a flat lie.

Reading further in the article, one finds that "this winter had the fifth lowest maximum ice extent on record. The six lowest maximum events since satellite monitoring began in 1979 have all occurred in the past six years (2004-2009)."

That's a nice example of how to lie while telling the truth. The trend has reversed—whether temporarily or permanently we don't know. But since the area had been trending down for a decade and has only recently started to trend up, the current figure is still low relative to the past.

Except for the recent past, which is all that is relevant in judging whether the current trend is continuing.




Monday, May 04, 2009

Other Worlds and Wasted Talents

Over the course of my life, I've spent a good deal of time in what I think of as "other worlds"—The Society for Creative Anachronism, World of Warcraft, interacting online in Usenet news groups, and the like. One thing that strikes me about such worlds is the presence of competent, energetic, talented people whose life in the ordinary world reflects little or none of that.

I was reminded of this recently when someone I know in WoW as an unusually competent and charismatic leader, organizer, and player mentioned the problem of "parental aggro." He is apparently a college student, possibly a graduate student, living with his parents. Older examples are friends in the SCA of whose abilities and energy I think highly, who made their living as school teachers or secretaries or the like—respectable jobs, but not particularly high status or high paying ones.

The pattern is not entirely surprising. It makes sense that an energetic individual who doesn't find much outlet for his energies in his current career will direct them towards his hobbies. Adam Smith long ago observed that, in the British universities of the time, a professor got no benefit by doing a good job of teaching, since the professors were on salary rather than, as in at least some of the Scottish universities, paid by the students. He concluded that if the professor were naturally energetic, he would spend his energies doing something that might be of some benefit to him rather than doing his job, which would not. Nowadays we call it "consulting."

At the same time, it seems a terrible waste. Starting a business, running a restaurant, doing scientific research, and a myriad of other "real world" activities have the same potential for employing human talents as organizing a guild in WoW or an event in the SCA. They also produce other benefits, most obviously the opportunity to combine fun and profit in a way rarely possible with one's hobbies. I am glad that these people spend the time and talents they do in worlds we share, since I benefit from their doing so. But I wonder what keeps them from employing the same talents more successfully in the part of the world where they spend forty hours a week making a living.

Is it that they prefer for that part of their life to make fewer demands on them? Or is it rather a case of wasted talents, a failure of current institutions to do as good a job as they might of letting potentially productive individuals find suitable employment for their abilities?

Sunday, May 03, 2009

More Harald Podcasts

For anyone interested in my novel Harald, I have now webbed some more recordings of chapters. If you enjoy them let me know and I'll record more.

Current fiction projects are Salamander, which is complete but can use more revision, and its sequel Eirick, which is a bit more than half written. I also have written the beginning of a sequel to Harald but am not currently working on it. If you want to see and comment on any of them, email me.


The VP and the Power of Tie Breaking

In a recent online exchange, a poster trying to defend Biden's mistaken claim that Article I of the Constitution put the VP in the executive branch argued that the VP's role in the Senate, which is what Article I actually describes, is a trivial one, since all he gets to do is to chair the Senate and break ties. Giving him so trivial a role in the legislative branch, the argument went, implied that his real role was in the executive. That started me thinking about tie breaking.

On the face of it, the power to break a tie isn't worth much, since votes are rarely ties. But that way of looking at it is misleading. An ordinary vote is only decisive, after all, if it either breaks a tie or creates one, which is also, in most contexts, a rare situation. The question is how much less power a vote that can be used only to break a tie provides.

Consider a motion in an assembly. For simplicity assume that, absent my vote, the probabilities that the motion will fail by one vote, tie, or pass by one vote—the situations in which my vote might affect the outcome—are equal. Call the common probability p. Assume there is a chairman with the power to break ties, and assume, again for simplicity, that he is equally likely to vote either way on the motion and never abstains. Finally, assume that I want the motion to pass.

If without my vote the motion would fail by one, then I can vote for it and make it a tie. That affects the outcome only if the tie breaker also votes for it. So the probability that I will be able to make it pass by making it into a tie is p/2. If without my vote it would be a tie, then I can vote for it making it pass, but that only affects the outcome if the tie breaker would not have himself voted for it. That gives us another probability of p/2. If without my vote it would pass by one, then my support is not needed and has no effect on the outcome. So the total probability that my support for the bill will affect the outcome is p.

What about the situation from the point of view of the tie breaker? Assuming that a motion fails on a tied vote, he can only affect the outcome if the vote is a tie and he supports the motion. Probability p/2.

What this simple analysis suggests is that the voting power of the VP, if he chooses to exercise it, is half that of an ordinary senator. Whether that plus chairing the Senate, as I gather VPs routinely did before LBJ, adds up to more or less than the equivalent of one senator is a question I will leave to others who know more about the subject.
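The analysis is easy to check by simulation. The sketch below adopts the same simplifying assumptions: each of the three pivotal margins occurs with probability p, the chair votes either way with equal probability, and a tied floor vote fails unless he breaks it. The value of p is illustrative.

```python
# Monte Carlo check of the tie-breaking analysis, under the post's assumptions.
import random

random.seed(0)
p = 0.2            # probability of each pivotal margin; any p <= 1/3 works
TRIALS = 200_000

def outcome(margin, chair_yes):
    """Motion passes on a positive margin; the chair decides exact ties."""
    if margin != 0:
        return margin > 0
    return chair_yes

my_pivotal = chair_pivotal = 0
for _ in range(TRIALS):
    r = random.random()
    if r < p:
        margin = -1                       # fails by one without my vote
    elif r < 2 * p:
        margin = 0                        # ties without my vote
    elif r < 3 * p:
        margin = 1                        # passes by one without my vote
    else:
        margin = random.choice([-3, 3])   # my vote cannot matter
    chair_yes = random.random() < 0.5

    # My yes vote is decisive if it changes the outcome.
    if outcome(margin + 1, chair_yes) != outcome(margin, chair_yes):
        my_pivotal += 1
    # The chair is decisive only when the floor vote ties and he votes yes
    # (on a tie the motion fails, so his "no" is the same as abstaining);
    # we treat `margin` as the floor vote, tying with the same probability p.
    if margin == 0 and chair_yes:
        chair_pivotal += 1

print(f"my vote decisive:      {my_pivotal / TRIALS:.3f}  (theory: p = {p})")
print(f"chair's vote decisive: {chair_pivotal / TRIALS:.3f}  (theory: p/2 = {p / 2})")
```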


Saturday, May 02, 2009

How to Lie With Statistics—Sometimes Without Even Trying

Some time back, there were news stories reporting on studies of several communities that showed smoking bans to be followed by reductions in heart attacks. There are now reports of a much larger study done at the NBER which finds no such effect. How can one explain the discrepancy?

The simple answer is that in some communities heart attack deaths went up after smoking bans, in some they went down, in some they remained more or less unchanged. Hence a study of a single community could find a substantial reduction even if it was not true on average over all communities.

How did the particular communities reported in the early stories get selected? There are two obvious possibilities.

The first is that the studies were done by people trying to produce evidence of the dangers of second hand smoke. To do so, they studied one community after another until they found one where the evidence fit their theory, then reported only on that one. If that is what happened the people responsible were deliberately dishonest; no research results they publish in the future should be taken seriously.

There is, however, an alternative explanation that gives exactly the same result with no villainy required. Every year lots of studies of different things get done. Only some of them make it to publication, and only a tiny minority of those make it into the newspapers. A study finding no effect from smoking bans is much less likely to be publishable than one that finds an effect. A study finding the opposite of the expected result is more likely to be dismissed as an anomaly due to statistical variation or experimental error than one confirming the expected result. And, among published studies, one that provides evidence for something that lots of people want to believe is more likely to make it into the newspapers than one that doesn't.
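The second explanation is easy to demonstrate by simulation. In the sketch below the ban has, by construction, no effect at all; yet the subset of communities that happen to show a large drop, the only ones likely to make the news, shows a dramatic one. Every parameter is invented for illustration.

```python
# Selection effect: simulate many communities where a smoking ban has NO true
# effect on heart attacks, then "publish" only those showing a big drop.
import random

random.seed(1)
COMMUNITIES = 200
BASELINE = 100        # expected heart attacks per year in each community

def yearly_count():
    """Crude stand-in for sampling noise: baseline plus random variation."""
    return BASELINE + random.gauss(0, 10)

changes = []
for _ in range(COMMUNITIES):
    before = yearly_count()
    after = yearly_count()             # the ban changes nothing, by construction
    changes.append((after - before) / before)

published = [c for c in changes if c < -0.10]   # only >10% drops make the news
mean_all = sum(changes) / len(changes)
mean_pub = sum(published) / len(published)

print(f"average change, all communities: {mean_all:+.1%}")   # near zero
print(f"average change, 'published':     {mean_pub:+.1%}")   # a large 'effect'
```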