## Sunday, April 13, 2014

### AGW, Considered as a Black Swan

In my previous post I took issue with Lovejoy's claim to show, by statistical analysis of global temperature data since 1500, that the probability that natural processes would produce the amount of warming observed in the period 1880 to 2008 was less, probably much less, than one in a hundred. My complaint was not with his conclusion, which might well be true, but with his argument.

In order to calculate the probability that what happened would happen as a result of natural causes of temperature change, Lovejoy needed a probability distribution showing what the probability was of a natural cause producing any given temperature change. He could estimate that distribution by looking at changes over the period from 1500 to 1880 on the (plausible) assumption that humans had little effect on global temperature over that period. But that data could not tell him the probability distribution for events rare enough to be unlikely to show up in his data, for instance some cause of warming that occurred with an annual probability of only .001.

His solution to that problem was to assume a probability distribution, more precisely a range of possible distributions, fit it to the data he had, and deduce from it the probability of the rare large events that might have provided a natural cause for 20th century warming. That makes sense if those events are a result of the same processes as the more frequent events, just less likely versions of them, in the way that flipping a coin and getting eight heads in a row is a result of the same process that gives you four, five, or six heads in a row. But it makes no sense if there are rare large events produced by some entirely different process, one whose probability the observed events tell us nothing about: if, for instance, you got four heads in a row by sheer luck, but forty heads in a row because someone had slipped you a two-headed coin. The forty heads, or the hypothetical rare cause of large warming, would be a black swan, an event sufficiently rare that it had not been observed and so was left out of the calculation.
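
To make the logic concrete, here is a deliberately artificial numerical sketch. The numbers are invented for illustration, not Lovejoy's: suppose small annual changes come from a common process, while a rare, entirely different process occasionally produces a large change.

```python
# Toy illustration of the black-swan objection (hypothetical numbers,
# not Lovejoy's): the record is mostly generated by a common process,
# but a rare, different process occasionally produces large warming.
from scipy.stats import norm

# Common process: small fluctuations, N(0, 1).
# Rare process: large changes, N(10, 1), with annual probability 0.001.
p_rare = 0.001

# An analyst fitting a distribution to a few hundred years of data will
# quite possibly never observe the rare process, so the fitted model is
# effectively just the common process, N(0, 1).
fitted_tail = norm.sf(8)                 # P(change >= 8) under the fit

# The true probability of an extreme change includes the rare process.
true_tail = (1 - p_rare) * norm.sf(8) + p_rare * norm.sf(8, loc=10)

print(f"fitted model: P(X >= 8) ~ {fitted_tail:.1e}")
print(f"true process: P(X >= 8) ~ {true_tail:.1e}")
# The fit understates the tail by roughly twelve orders of magnitude.
```

However reasonable the fitted family looks against the observed data, it says nothing about a process that never showed up in the sample.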

It occurred to me, after considering a response by Lovejoy and a comment on the Google+ version of my post, that not only was such a black swan event possible in the context of climate, one had occurred. AGW itself is a black swan, a cause of rapid warming whose probability cannot be deduced by looking at the distribution of climate change from the period 1500 to 1880.

If the point is not clear, imagine that Lovejoy wrote his article in 1880. Since warming due to human activity had not yet occurred, there would be no reason for him to distinguish between causes of warming and natural causes of warming. He would interpret the results of his calculations as showing that the probability of warming by a degree C over the next 128 years was less, probably much less, than .01. He would be assuming away the possibility of some cause of substantial warming independent of the causes of past warming, one whose probability could not be predicted from their probability distribution.

That cause being, of course, greenhouse gases produced by human action.

At 5:12 AM, April 14, 2014,  Tibor said...

Since you've mentioned problems with the tail events (what Taleb calls the "black swans"):

I've always wondered what robust statistics was about. My impression is that it deals with exactly these kinds of problems (although, curiously, Taleb does not mention it at all, at least not in the one article that is meant to summarize his Black Swan book; I have yet to read the book itself). In the second-to-last semester of my MS degree there was a course on robust statistics. I wanted to take it, but it collided with something else, and since I am much more focused on probability theory than on statistics, I never got back to it (instead I am currently mostly improving my knowledge of functional analysis, which is not closely related to most statistics, even mathematical statistics). But if those are indeed methods that deal with black swans, I might find the time to read about them sometime, since it is a very interesting topic.

I know that some robust methods deal with ill-posed problems, i.e. situations where a slight deviation from the assumptions produces very large differences in the conclusions (the classical central limit theorem is an example of such an ill-posed problem), but if I remember correctly, the field also (or mostly) deals with problems arising from tail observations.
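
The CLT fragility mentioned here is easy to see in a toy simulation of my own (not from any textbook): Cauchy data violates the finite-variance assumption, and the sample mean never settles down, no matter how large the sample.

```python
# The classical CLT requires finite variance. For Cauchy data (which
# has none), averaging does not help: the sample mean of n i.i.d.
# standard Cauchy variables is itself standard Cauchy.
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 1_000, 10_000):
    # 500 independent experiments, each averaging n standard Cauchy draws
    means = rng.standard_cauchy((500, n)).mean(axis=1)
    # If the CLT applied, this spread would shrink like 1/sqrt(n).
    # Instead the interquartile range stays near 2 (the IQR of a
    # standard Cauchy) for every n.
    iqr = np.percentile(means, 75) - np.percentile(means, 25)
    print(f"n = {n:>6}: IQR of sample means = {iqr:.2f}")
```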

If I am wrong about robust statistics, are there any good statistical methods to deal with "black swans"?

At 7:00 AM, April 14, 2014,  John said...

@Tibor Mach There are two main ways you can do robust statistics: using robust estimators, or using fat-tailed error distributions.

Robust estimators might give you better estimates of the location or scale of a distribution in the presence of outliers (median instead of mean, for instance).
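
A minimal illustration of that point, with made-up numbers: one wild observation drags the mean far off, while the median barely moves.

```python
# Mean vs. median in the presence of a single outlier (toy data).
from statistics import mean, median

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [1000.0]      # one wild observation

print(mean(clean), median(clean))    # both near 10
print(mean(contaminated))            # dragged up to 175
print(median(contaminated))          # still about 10.05
```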

The second might be more applicable to Taleb's black swan thesis. It could mean using a t or generalized error distribution instead of a normal distribution for regression errors. However, Taleb has often favored distributions whose parameters are hard to estimate (like the stable Paretian, many cases of which have undefined moments). He might say that you can't know what the black swans' distribution is like, especially when thinking about the future. Others disagree; some people think extreme value theory can be quite effective at capturing empirical distributions with outliers.
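
To give a sense of how much difference the error distribution makes, here is a quick comparison (my own choice of threshold and degrees of freedom): the probability of an observation more than 5 standard units out under a normal versus a Student-t distribution.

```python
# How much heavier a Student-t tail is than a normal one: two-sided
# probability of an observation beyond 5 standard units.
from scipy.stats import norm, t

p_norm = 2 * norm.sf(5)          # N(0, 1)
p_t3 = 2 * t.sf(5, df=3)         # t with 3 degrees of freedom

print(f"normal: {p_norm:.1e}")   # ~ 5.7e-07
print(f"t(3):   {p_t3:.1e}")     # ~ 1.5e-02
```

Under the normal model a 5-sigma event is essentially impossible; under t(3) it happens more than once per hundred observations.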

At 10:03 AM, April 14, 2014,  Tibor said...

John: Yeah, I thought of using, say, a Cauchy distribution where one would ordinarily use a normal, to get something with heavier tails. But that hardly struck me as a "robust method", as it is fairly obvious, and it only mitigates the problem rather than getting rid of it entirely. I also now remember the median/mean example, a simple illustration given to me by a classmate who actually attended the lecture. My master's thesis was called Robust Filtering, but it was a particular application of Bayesian filtering to stochastic processes with a measurable parameter (and most of it really consisted of constructing a stochastic integral with a parameter, so the actual application was a rather minor part of the work); the word robust was used in the sense that the method is applicable to a relatively rich class of models.

I guess these things (such as ill-posed problems or black swans) ought to be taught even in basic statistics courses for non-mathematicians (who are the ones who actually use statistics most in practice anyway). Take the robust vs. nonrobust CLT: people apply the CLT mechanically and can arrive at complete nonsense just because some of the assumptions are not met but are "almost met", which makes the failure hard to detect (in reality, what they have would converge to some kind of stable distribution, or to something else entirely). There was an interesting lecture on limit theorems during my MS studies, but sadly the professor was Russian and could not really speak Czech. He tried, by mixing the few Czech words he knew with a kind of mutated Russian (his best guesses at Czech words); it was nearly incomprehensible at first and only got a little better after a few lectures, so I quit that class (we convinced him to speak English in one other class, which was a lot better, but he insisted on trying to speak Czech in this one). It at least gave me a notion of how careful one should be when using these probabilistic and statistical tools on real data (as long as one is not entirely sure of the nature of the data).

Anyway, thanks for the reply. I guess the best thing for me to do is to read Taleb's Black Swan (and perhaps some of his more rigorous work on the subject) and have a look at extreme value theory :) I always found statistics less interesting than probability, but these things are actually quite interesting.

At 10:41 AM, April 14, 2014,  John said...

@Tibor Mach: I completely sympathize with the difficulty of understanding advanced math and statistics profs!

Taleb's popular books are usually a quick read, but they usually don't contain much in the way of practical advice. If you're interested in applying robust techniques, you might instead pick up a book on the subject.

Sometimes it's just more convenient to transform a problem into something more manageable. I do most of my statistical analysis on financial time series, where the extreme events tend to be clustered. Often the fat tails turn out to be less fat once you adjust for something like stochastic volatility.
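
A sketch of that effect with a simulated series (the GARCH coefficients below are illustrative, not estimated from any real data): returns drawn with slowly varying volatility look fat-tailed, but dividing each return by its conditional volatility removes most of the excess kurtosis.

```python
# Volatility clustering makes tails look fat: simulate a GARCH(1,1)
# return series, then compare the excess kurtosis of the raw returns
# with that of the volatility-adjusted returns.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
n = 50_000
omega, alpha, beta = 0.05, 0.15, 0.80   # illustrative GARCH(1,1) coefficients

var = np.empty(n)
r = np.empty(n)
var[0] = omega / (1 - alpha - beta)     # unconditional variance
z = rng.standard_normal(n)              # i.i.d. normal shocks
for t in range(n):
    if t > 0:
        var[t] = omega + alpha * r[t - 1] ** 2 + beta * var[t - 1]
    r[t] = np.sqrt(var[t]) * z[t]

# Raw returns show clear excess kurtosis; standardized returns are just
# the normal shocks again, so their excess kurtosis is near zero.
print("excess kurtosis, raw:     ", round(kurtosis(r), 2))
print("excess kurtosis, adjusted:", round(kurtosis(r / np.sqrt(var)), 2))
```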

At 1:20 PM, April 14, 2014,  Tibor said...

John: Well, usually it is fine, but professor Klebanov (that was his name) not only couldn't speak Czech, he also had the horrible habit of working out equations not by writing them down in a series of lines on the blackboard, but by using one line which he would keep partially erasing and changing, making the lecture almost incomprehensible, since one could rely neither on what he said (I understood one word in three after I had already got used to his strange language) nor on what he wrote on the blackboard (unless you managed to copy it into your notebook as fast as he was writing and erasing it) :) Actually, when I took the exam for the second course he taught, he asked me a couple of things and then said something I did not understand. I was suddenly worried he wanted me to talk about something I had skipped by mistake and really had no idea what it was supposed to be, so I sat there numbly for a while, trying to remember that part of the lecture... it turned out he was saying "excellent" (i.e. the grade A), but his accent was so terrible I could not understand the word in his "Czech"; I thought it was the name of some Russian guy and a theorem named after him or something :)

I imagine the Black Swan might be an interesting read for the train (I regularly spend about 6 hours on the train 2-4 times a month, which makes it an ideal time to read books), and then perhaps I can get into some real maths on the subject (I suppose that Taleb, being a statistician, has written some rigorous stuff on the topic... or of course others will have).

If I understand it correctly (I have never encountered the term before), accounting for stochastic volatility essentially means having a random variance. Is that correct? In the application part of my thesis I had a stock-market price process which was an Itô process whose drift and diffusion were themselves Itô processes. Would that be considered a model adjusted for stochastic volatility?
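
For what it's worth, a typical stochastic-volatility setup can be sketched like this (a toy Euler-Maruyama discretization of a Heston-style model; all coefficients are made up for illustration): the variance itself follows a mean-reverting random process, and the price diffuses with that random variance.

```python
# Euler-Maruyama sketch of a toy stochastic-volatility (Heston-style)
# model: the variance v follows its own mean-reverting random process,
# and the price s diffuses with volatility sqrt(v). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, dt = 10_000, 1e-3
kappa, theta, xi = 2.0, 0.04, 0.3   # mean reversion, long-run var, vol-of-vol

s, v = 100.0, 0.04                  # initial price and variance
prices = [s]
for _ in range(n):
    dw_s, dw_v = rng.standard_normal(2) * np.sqrt(dt)
    # variance process, floored to keep the square root well-defined
    v = max(v + kappa * (theta - v) * dt + xi * np.sqrt(v) * dw_v, 1e-8)
    s += s * np.sqrt(v) * dw_s      # price process (zero drift for simplicity)
    prices.append(s)

print(f"final price after {n} steps: {prices[-1]:.2f}")
```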

At 1:44 PM, April 20, 2014,  Will McLean said...

Forgive the off-topic comment, but I thought you might enjoy this if you hadn't seen it:

http://www.smbc-comics.com/index.php?id=3333#comic

At 8:50 AM, April 22, 2014,  TGGP said...

I suppose then that Lovejoy has demonstrated you need a Black Swan to avoid the conclusion that we simply lucked into one of those improbably long strings of coin flips coming up heads. And that's something.

At 10:27 PM, April 25, 2014,  David Friedman said...

TGGP: I agree with your reading of Lovejoy's result. The problem is the assumption that the only black swan out there is AGW, when there might be one or more others.