Tuesday, November 27, 2012

In Defense of Utilitarianism

"Utilitarianism certainly seems as though it gives us a firm decision procedure for deciding between actions and/or institutions. But I think this is largely an illusion, produced by assigning precise numbers to things that we aren't really in any position to quantify."

(Matt Zwolinski, in a comment on my blog post on left-libertarianisms)
I am not a utilitarian, for reasons I have discussed elsewhere, but I think utilitarianism comes a great deal closer to being a moral theory with real world content and implications than what I have so far seen of Bleeding Heart Libertarianism. Hence this post.

There are multiple versions of utilitarianism—rule vs case, average vs total. For the purposes of this post I will consider the version that advocates taking those acts that maximize total utility.

The first objection that can be raised is that we cannot observe utility. That is not true. We can observe both the ordinal and (Von Neumann) cardinal utility functions of a single individual by observing his choices. It is possible that the observed parts of the function are radically different from the unobserved parts, or that the function changes radically from day to day, making yesterday's observation irrelevant today, but we have introspection of our own preferences and observation of the behavior of others to tell us that that is unlikely and to suggest likely guesses about preferences that we do not directly observe. We cannot create a precise description of someone else's utility function, but we can and do know a good deal about it with a high degree of probability.

That leaves the problem of interpersonal comparison: How do I decide whether a gain for me does or does not outweigh a loss for you? We do not have as good a way of solving that problem. Yet we routinely do solve it, at least approximately, when deciding how to divide our limited resources among other people we care about. If I were truly agnostic about interpersonal utility comparisons I would have no opinion as to whether giving ten cents to one person was or was not a larger benefit than giving a hundred dollars to another and similar person. We are human beings, we have a good deal of experience with other human beings, and that is enough to make reasonable, approximate, guesses about interpersonal utility comparisons.

Further, as Alfred Marshall pointed out long ago, in many cases we do not need detailed information about individual interpersonal comparisons in order to form a reasonable opinion about which option leads to greater total utility—because differences average out. Consider the question of tariffs. Economic theory tells us that if we do interpersonal comparison on the (surely false) assumption that everyone affected has the same marginal utility of income, a tariff, under almost all circumstances, results in a net loss of utility.

There are two ways in which one could accept the standard economic argument and yet claim that a tariff produces a net gain in utility. One is to reject the assumption implicit in the economic analysis that what matters is the actual effect of the tariff on the economic opportunities of those affected, reflected in the prices they must pay for what they buy and can receive for what they sell. One could, for instance, argue that many people's utility function includes a large positive value for the existence of a tariff, independent of its effect. Such a claim is, however, implausible given what we know, from introspection and observation, about human tastes. And if true, it suggests a testable implication: that many individuals will support a tariff even though they are fully informed about its economic consequences, and even though the economic consequences for them are negative. I do not think that implication is consistent with casual observation of the politics around tariffs.

The other possibility, and the one Marshall considers, is to argue that the gainers from a tariff have a substantially larger marginal utility than the losers, hence that the net effect is positive measured in utiles even if negative measured in dollar value. To support that claim one would need evidence. Gainers and losers represent a large and diverse group of people, so we would expect individual differences to average out. That is not true for all arguments about dollar value vs utile value; the obvious exception would be a policy where gainers were much poorer than losers. But there seems no reason to expect that for the tariff case. 

Hence we have good reason to conclude that a tariff lowers total utility. It is good reason short of certainty, but that is true of virtually all of our conclusions. Similar arguments could be made to show that many, although not all, of the standard arguments implying that one choice is superior to another on conventional economic grounds (that it leads to greater economic efficiency) also give good reason to think that it results in greater total utility and so should be preferred by a utilitarian.
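The tariff argument can be checked with a toy numerical model. The following is a minimal sketch, with all numbers purely illustrative assumptions: a small open economy with linear demand and supply facing a fixed world price, where a tariff raises the domestic price. Summing the changes in consumer surplus, producer surplus, and tariff revenue gives the standard result.

```python
# Toy small-open-economy tariff model (all numbers are illustrative assumptions).
# Demand: D(p) = 100 - 2p; Supply: S(p) = 20 + 3p; world price 10; tariff 2.

def demand(p):
    return 100 - 2 * p

def supply(p):
    return 20 + 3 * p

def area_between(curve, p0, p1, steps=100000):
    """Integrate a quantity curve between two prices (a surplus area)."""
    dp = (p1 - p0) / steps
    return sum(curve(p0 + (i + 0.5) * dp) for i in range(steps)) * dp

world_price, tariff = 10.0, 2.0
domestic_price = world_price + tariff

consumer_loss = area_between(demand, world_price, domestic_price)  # area under demand
producer_gain = area_between(supply, world_price, domestic_price)  # area under supply
revenue = tariff * (demand(domestic_price) - supply(domestic_price))  # tariff x imports

net = -consumer_loss + producer_gain + revenue
print(net)  # negative: consumers lose more than producers and the treasury gain
```

Under these assumed curves the net change is -10, the two familiar deadweight-loss triangles. The sign, not the magnitude, is the point of Marshall's argument.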

I think these arguments are sufficient to demonstrate, not that utilitarianism is true, but that it is not empty—that it has real world content and real world implications.

23 comments:

Gordon said...

I think that Matt overstated his case with his "produced by assigning precise numbers to things that we aren't really in any position to quantify", but, if he had said, "produced by assigning precise numbers when we only have somewhat fuzzy approximations", he would have been closer to the view you express. And, since he said "*largely* an illusion", my rewording is not, I think, excessively charitable. You have a good point with the Marshall/tariff example, but one could concoct, e.g., many cost-benefit analysis problems in which jumping from the lower to the upper bound on the dollar value of a human life would change the project decision from "go" to "no go" (or vice versa). But, let's suppose that even the dreaded utilitarianism provides more guidance about what is just than BHL. Matt's more serious objection to utilitarianism is the "separateness of persons" objection raised by none other than Rawls himself - an objection that you seem to agree with.

Matt has been clear (at least now, if not in the Cato discussion), that the special concern BHL gives to the least advantaged can take shape in a variety of ways: maximin, sufficiency, prioritarian, etc. So BHL is really a family of theories. And, of course we can disagree over what is, say, "sufficient", just as we can disagree over the dollar value of a human life.

David Friedman said...

I wasn't arguing that utilitarianism was correct, I was arguing that it had content.

I agree that BHL is a family of theories. The best defined one is Rawls, but I regard both its derivation and its implications as indefensible, for reasons I sketched in the Cato exchange. Matt has not, so far, given any description of his version that has as much content as utilitarianism, or provided anything that seems to me a plausible argument for it.

Hence my complaint.

William H. Stoddard said...

My utility changes a lot more often than day to day. A few minutes ago, a quick visit to the smallest room in my apartment had very high utility for me; now its utility is far lower. A few hours ago, a bowl of homemade chicken soup had high utility, but its utility would not be so high now, if there were any left. The utility of watching a DVD varies according to whether my girlfriend is free to watch with me; the utility of reading Lord of the Rings or "As Easy as A.B.C." depends on when I last read one of them, and my desire to listen to particular pieces of music varies from day to day or even hour to hour. The idea that there is any real internal data structure that is constant long enough to be measured by the accumulation of statistical data makes little sense to me.

Patrick said...

I think David has explained well enough why maximin is indefensible, and I agree with his conclusion. Gordon also mentions the theories of sufficiency, with which I am not familiar, and prioritarianism, with which I have a different problem: from my point of view, prioritarianism is indistinguishable from utilitarianism.
As I understand it, prioritarianism states that we want to maximize the sum of individual utilities, weighted by some function which varies with the utility of the individual. An individual's weighted utility should increase with increasing utility, but that rate of increase should fall off as utility rises (thus, we weigh increases to the poor man's utility more heavily than increases to the rich man's). I assert that the product of the weighting function and the utility it weights, what one might call a "weighted utility function", behaves just as a utility function does in economics.

Mathematically, a function U(x) qualifies as an economic utility function if it satisfies two properties: 1) for any good x, dU/dx > 0; 2) d^2U/dx^2 < 0, the principle of diminishing marginal utility. Our weighted utility function, F(U), takes a utility function as an argument, and follows rules 3) dF/dU > 0 and 4) d^2F/dU^2 < 0. So how does F vary with x? Well, dF/dx = (dF/dU)*(dU/dx), which must be greater than 0 by axioms 1) and 3). And d^2F/dx^2 = (d^2F/dU^2)*(dU/dx)^2 + (dF/dU)*(d^2U/dx^2), where both terms, and thus the sum, are less than 0 by axioms 1), 2), 3), and 4) together. F thus maps utility functions to utility functions. Since F(U(x)) follows the same rules as a utility function, why not just call *that* the utility function? After all, utilitarianism with utility function F(U(x)) gives all the same answers as prioritarianism.
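The chain-rule argument above can be spot-checked numerically. A minimal sketch, using U(x) = sqrt(x) and F(u) = sqrt(u) as stand-in functions satisfying axioms 1)-4), with finite differences approximating the derivatives:

```python
import math

def U(x):   # an example utility function: dU/dx > 0, d2U/dx2 < 0
    return math.sqrt(x)

def F(u):   # an example weighting function: dF/dU > 0, d2F/dU2 < 0
    return math.sqrt(u)

def G(x):   # the composed "weighted utility function" F(U(x))
    return F(U(x))

h = 1e-4
for x in [0.5, 2.0, 10.0]:
    first = (G(x + h) - G(x - h)) / (2 * h)          # dG/dx
    second = (G(x + h) - 2 * G(x) + G(x - h)) / h**2  # d2G/dx2
    assert first > 0    # G increases in the good x        (axioms 1 and 3)
    assert second < 0   # G has diminishing marginal utility (axioms 1-4)
print("F(U(x)) satisfies both utility-function properties")
```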

Prioritarianism thus devolves into little more than a bald assertion that utilitarianism is right, but people's real utility functions are shaped slightly differently than they intuitively think. Have I misunderstood prioritarianism? Is there some deeper insight that the theory offers that I have missed?

Gordon said...

"I wasn't arguing that utilitarianism was correct, I was arguing that it had content."

I agree. But to say that Matt's view - call it "BHL as sufficiency" - does not have as much content as utilitarianism is not especially damning if (1) utilitarianism has quite a bit of content (which seems to be the point of your tariff example), and (2) utilitarianism is incorrect. That we might disagree over what exactly is "sufficient" does not mean the view has *no* content.

So, does it have "enough" content? And, with that content, is it "sufficiently" libertarian? The answer to that will depend in part on how satisfied you are with alternative justifications for libertarianism, such as natural rights and self-ownership. I think the BHLers see much more promise in a neo-Rawlsian justificatory framework (which is much more than Rawls's use of maximin.)

"I agree that BHL is a family of theories. The best defined one is Rawls."

Is Rawls's theory in the BHL family? That is an interesting question. I think that Tomasi, for example, would deny it that status because Rawls's first principle excludes core economic liberties.

Gordon said...

Patrick,

I am not sure your statement of prioritarianism is correct. Suppose we have two distributions, A=(100,0) and B=(60,20), where the numbers are some measure of utility for two persons. Utilitarianism ranks A over B. As I understand it, prioritarianism ranks B over A. This ranking is done by the *theories*, not by the utility function used to generate the utilities in A and B. In other words, prioritarianism is not saying that we "really" get more utility from B than from A. It is in fact sacrificing some total utility to make the worst-off person better off.

Patrick said...

Gordon,
My point about utility functions is that we can only directly observe them for individuals. They consist of an ordering of preferences, but not a direct measurement of how fast utility increases. So if you say that I have a utility function of x, where x is in units of some good or goods, I could equally say my utility function is sqrt(x). These two utility functions will produce the same predictions about my preferences. They are indistinguishable by measurement, and are thus in a sense the same utility function.

We cannot directly measure interpersonal utility, so we cannot know if the correct group utility function should be x+y or sqrt(x)+sqrt(y). In your example, under the original utility functions, 100 > 60+20, so total utility is maximized in case 1. But under my alternate utility functions (which are equally valid in every measurable parameter), we see that sqrt(100)+0 < sqrt(60)+sqrt(20), so the total utility is best in case 2. Of course, it doesn't have to be square root. It could be log, or any of a countless number of functions that have the right shape. (Annoyingly, prioritarianism fails to tell us even which weighting function to use, so (100,0) may still be better than (60,20).)
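The reversal described above is easy to verify with Gordon's numbers; the square root is just one arbitrary choice of concave remapping:

```python
import math

A = (100, 0)   # distribution A: one person gets everything
B = (60, 20)   # distribution B: a more equal split

def linear_sum(dist):
    """Aggregate utilities directly."""
    return sum(dist)

def concave_sum(dist):
    """Aggregate the square roots of utilities instead."""
    return sum(math.sqrt(u) for u in dist)

print(linear_sum(A), linear_sum(B))    # 100 vs 80: linear sum prefers A
print(concave_sum(A), concave_sum(B))  # 10 vs ~12.22: concave sum prefers B
assert linear_sum(A) > linear_sum(B)
assert concave_sum(A) < concave_sum(B)
```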

The point is that both functions are utility functions, and each provides the same predictions for individuals' preferences. The difference comes when you sum--since we are free to remap our individual utility functions in this way, we should choose utility functions such that an increase in A's utility is equally good as an increase in B's.

As David has pointed out, we can and do make judgments about the comparative utilities of others. Economics tells us something about allocational utility but not about distributional utility. For example, free trade allows us to determine whether, for goods A and B, (A,B) is better than (B,A), but not about the relative values of (A+B,0) and (0,A+B). So social utility functions must ultimately involve intuitive guesses. At best, it seems that prioritarianism just says that our intuitions are somehow categorically wrong, without even telling us how to correct them.

David Friedman said...

I don't know if Patrick has misunderstood prioritarianism, but I think he has misunderstood utility, at least cardinal utility as economists use it.

Von Neumann demonstrated that, if choices under uncertainty met some simple consistency requirements, it was possible to cardinalize the utility function by requiring that the utility of a lottery, a probability mix of outcomes, be its expected utility. That leaves the utility function arbitrary up to a linear transformation—zero point and scale are not fixed--but determined other than that.

Prioritarianism as Patrick describes it bases social judgement not on expected utility but on a different function, based on utility but with its marginal value diminishing more rapidly as utility increases. So it is distinguishable from utilitarianism--sounds, indeed, rather like a hybrid of utilitarianism and egalitarianism.

David Friedman said...

Patrick writes:

"So if you say that I have a utility function of x, where x is in units of some good or goods, I could equally say my utility function is sqrt(x)."

Thus confirming my earlier guess about what he, from my standpoint, gets wrong about ordinal utility. The two utility functions he describes produce different predictions about what choices people will make among gambles.
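The point about gambles can be made concrete with a single lottery. A minimal sketch: an agent choosing between 50 for certain and a 50/50 gamble over 0 and 100. Under U(x) = x the agent is indifferent; under U(x) = sqrt(x) the agent strictly prefers the certainty. The two functions therefore predict different choices among gambles, so they are not the same Von Neumann utility function:

```python
import math

def expected_utility(u, lottery):
    """lottery is a list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

gamble = [(0.5, 0), (0.5, 100)]   # 50/50 chance of 0 or 100
certain = [(1.0, 50)]             # 50 for sure

linear = lambda x: x
sqrt_u = lambda x: math.sqrt(x)

# Linear utility: indifferent (expected utility 50 either way).
assert expected_utility(linear, gamble) == expected_utility(linear, certain)

# Square-root utility: certainty strictly preferred (5 vs ~7.07).
assert expected_utility(sqrt_u, gamble) < expected_utility(sqrt_u, certain)
print("sqrt(U) ranks the gamble differently from U, so it is a different VN utility function")
```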

David Friedman said...

And for the link between Von Neumann utility and utilitarianism, note that Harsanyi came up with Rawls' veil of ignorance a couple of decades before Rawls, and, unlike Rawls, got it right. The conclusion was that, behind the veil of ignorance with an equal chance of being anybody, the individual would choose that society that maximized the average of (Von Neumann) utility.

Patrick said...

David,

I don't think my statement results in misidentifying people's choices among gambles. Utility is the value I assign to the state of the universe as I perceive it, including probabilities. And since the mapping I described is one of utility functions, the gambles/tradeoffs are already taken into account inside of the original utility function.

For a simple concrete case, consider a system with two states, which I value differently. It is in state 1 with probability p and state 2 with probability 1-p. That variable p describes all the possible information about this simple universe. I can then write my utility function as U(p). But the utility function sqrt(U(p)) will give the same ordering of preferences, and is thus the same utility function. It will, for example, be maximized at the same value of p as my original utility function is maximized.

I think what I am describing is an isomorphism of Von Neumann utility functions, though I may be misusing that word slightly. Since Von Neumann utility is only something defined and measurable for each rational actor separately, I see no reason why it is illogical to call the total utility function of society the sum of these isomorphisms. In my example, let's say p represents the chance that something of Von Neumann utility 1 is owned by person A, and 1-p is the chance it's owned by identical person B. A's utility is maximized by p=1, B's by p=0. Both social utility functions agree on that. But the ordinary utility function says that total utility is U = p+1-p = 1, and so is independent of p, while my function says that U = sqrt(p)+sqrt(1-p), giving a maximum at p = 0.5.
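A quick numerical check of the two social maximands in this example, scanning p over a grid:

```python
import math

# p = probability that person A owns the good; each person's VN utility of owning it is 1.
linear_social = lambda p: p + (1 - p)                       # always 1
concave_social = lambda p: math.sqrt(p) + math.sqrt(1 - p)  # peaks in the middle

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=concave_social)

assert all(linear_social(p) == 1.0 for p in grid)  # linear sum: indifferent to p
assert abs(best - 0.5) < 1e-9                      # concave sum: maximized at p = 0.5
print("linear sum is flat in p; the square-root sum peaks at p =", best)
```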

I personally agree with you (and Harsanyi) that the correct way to add up utilities is directly adding Von Neumann utilities, and that our intuitive understanding of each other's utilities is sufficiently accurate to make them more or less linear. However, since society is not itself a rational agent, we can't exactly rule out alternative formulations. If I were a person on the outside of society and my utility were equal to the sum of the square roots of each person's utility, in what sense is my utility function not a valid societal utility function?

David Friedman said...

Patrick:

The problem with your example is that the utility of p is not pU(state 1) + (1-p)U(state 2). Or, if it is, that relationship will no longer hold when you replace U with U' = sqrt(U).

The point of Von Neumann utility is that it is defined in such a way that that relation is always true. The utility of a lottery is the expected value of the utility of the outcomes.

That constraint gives you a cardinal utility function arbitrary only as to zero and scale.

Patrick said...

David,

I think maybe I can explain what I was saying better. You can see the individual utilities of the people as goods in a societal utility function. If our hypothetical society values the utility of each member proportional to the square root of his own perceived Von Neumann utility, then our society will have a utility function U = sum(sqrt(u_i)) where u_i is the Von Neumann utility of the ith individual. The terms that our society views as interchangeable in Von Neumann terms are the square roots of people's utilities. So (u_1,u_2)=(100,25) is interchangeable with (121,16). Likewise, p*(100,25) + (1-p)*(121,16) is interchangeable with both.
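The interchangeability asserted above can be checked directly; a minimal sketch of the hypothetical society whose utility is the sum of square roots:

```python
import math

def social_u(profile):
    """Society's utility: sum of square roots of members' VN utilities."""
    return sum(math.sqrt(u) for u in profile)

a = (100, 25)
b = (121, 16)

# Both profiles give society a utility of 15...
assert social_u(a) == social_u(b) == 15.0

# ...and so does any lottery between them, evaluated as expected social utility.
for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    lottery_value = p * social_u(a) + (1 - p) * social_u(b)
    assert lottery_value == 15.0
print("(100,25), (121,16), and every mix of them are interchangeable to this society")
```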

Basically, we have a society that sees its members' utilities as goods, with diminishing marginal utility, rather than as bare linear terms in its own utility function. The society ends up being more risk averse than its members, so it is in this sense a strange and arbitrary societal utility function. For this reason, I find Harsanyi's solution more elegant and more probably correct. But I do not see prioritarianism as giving an inconsistent utility function.

David Friedman said...

Patrick:

I thought your original claim was that prioritarianism was a version of utilitarianism. It was that claim I was disputing, not the claim that prioritarianism of some form could define an objective.

Patrick said...

David,

Is your claim that utilitarianism consists only of maximizing the sum of people's Von Neumann utilities? That is, is a person's Von Neumann utility necessarily equal to the utility function that utilitarianism maximizes? I thought Von Neumann utilities merely identified a preference ordering.

Why can't I say that the utility of a society is equal to the sum of the square roots of its members' Von Neumann utilities? For any particular outcome distribution, both theories agree that every Pareto improvement increases societal utility. They only disagree about things that are not Pareto improvements, which is precisely the kind of change whose utility we cannot measure.

However, I will gladly concede that if we define utilitarianism to be strictly maximizing the unweighted sums of people's Von Neumann utilities, then you are correct by definition. (Although there is still the issue of what constant multipliers to use for each person.) In this case, my problem was that I was seeing prioritarianism as just another version of utilitarianism, whereas in fact it is utilitarianism which is just a special case of prioritarianism (in which the weighting functions for each person are some arbitrary constant).

David Friedman said...

"Why can't I say that the utility of a society is equal to the sum of the square roots of its members' Von Neumann utilities?"

You can. But I think it's a stretch to call that utilitarianism. Utilitarianism holds that the proper maximand is the sum (or average in another version) of individual utilities. If one is going to talk about cardinal utilities, which that definition requires you to do, I don't think it makes sense to use the term for two different things.

astrolabio said...

Speaking of utilitarianism, I would really like to read something about the Sen paradox by prof. Friedman. It's often referred to as the demonstration that the free market doesn't work in a Pareto/utilitarian perspective.

David Friedman said...

Re the Sen paradox.

Assuming it's the argument of Sen's I remember, it assumes that freedom requires that there be some pair of choices with regard to the entire world, for each person, with regard to which that person has complete control.

But that's wrong. Suppose I have complete control over my life, but you watch me, and let your actions in part depend on mine. There is then no pair of complete descriptions of the world over which I have complete control, since the complete description includes what you do. But I'm still free.

Sen is confusing my having complete control over some features of the world with my having complete control over the entire world with regard to some choice. If you weren't watching me I could satisfy Sen's requirement, since I could choose between "what everyone else is doing and my doing X" and "what everyone else is doing and my doing Y." But if you are watching me, you can decide a change in what everyone else (which includes you) is doing dependent on my change, so there is no pair with regard to which I have absolute authority.

That's by memory from a long time ago, when I could not figure out why people took the argument seriously.

astrolabio said...

Thank you

"why people took the argument seriously."

I guess everything that in some way disproves the invisible hand is taken very seriously by non-libertarian intellectuals; if the argument has some "mathematical formalism aura", even better.

Eric Rasmusen said...

The Sen Lewd-Prude example doesn't imply that free markets are bad. In it, the independent action equilibrium is that Lewd reads a dirty book and Prude is offended. They both feel worse than if Prude were to read the book and Lewd were not. Prude doesn't want Lewd to be corrupted, and Lewd gets a kick out of Prude reading the book.

One conclusion from this is that utilitarianism implies that the government should force Lewd not to read the book and force Prude to read it.

But another conclusion is that Lewd and Prude will not act independently; they will make a deal that helps both of them. The courts or some private mechanism must then enforce the contract, so if you don't believe in force even to protect property rights, it's a non-libertarian conclusion. But I would think most libertarians would favor the deal.


Eric Rasmusen said...

In reading the comments, I conclude that a big topic to address is whether quantification is legitimate at all. It's the same issue as whether it is meaningful for me to grade an essay as 86/100 instead of just "Pass". Many non-economists argue that you can't quantify hard-to-measure things. Economists argue that you can--- the only problem is accuracy, which arises even when using a ruler to measure distance--- and that everybody does it---we all make decisions that depend on fine distinctions, e.g., how much pleasure I would get from one more raspberry.

Steven Bollinger said...

Against utilitarianism: http://thewrongmonkey.blogspot.com/2013/02/against-utilitarianism.html