
Eschenbach’s Poisson Pill: Take Two, Call Me In The Morning

2012 August 2


Eschenbach: I have tried to fit it myself with a number of other distributions, without success … but you guys are better theoreticians than I am, I await your answers.

I’m no statistical theoretician, but I did sleep in a Holiday Inn Express once. So before I forget it all, I want to look at this one more time.

To review, the question on the table is: what is the chance that thirteen hot months in a row will occur, if a "hot" month is defined as one in the upper third for all such months across all years (e.g., one of the 33% hottest Junes on record)?

A simple binomial distribution is wrong because hot months are more likely to follow hot months: the chance of one month being hot is not independent of the state of the previous month, and binomial distributions assume independent events.

Willis Eschenbach fitted a Poisson distribution to a data set derived from the NCDC CONUS record: a running count of the number of hot months in an overlapping series of thirteen-month windows across the full range of the data. The Poisson distribution, however, extends across all the non-negative integers from 0 to infinity, which in this case means there is a finite chance of getting 14 hot months in a series of 13. Obviously that makes no sense – so what is going on?
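For concreteness, that running count could be sketched as follows. The `is_hot` series here is a made-up 0/1 stand-in for the actual NCDC-derived flags, not the real data:

```python
# Sketch of the running count described above: for each overlapping
# 13-month window, count how many of the months were "hot".
# `is_hot` is a hypothetical 0/1 list (1 = month in the hottest third).

def window_counts(is_hot, width=13):
    """Count hot months in each overlapping window of `width` months."""
    return [sum(is_hot[i:i + width]) for i in range(len(is_hot) - width + 1)]

# Toy example: strictly alternating hot/not months.
example = [1, 0] * 20                  # 40 months, every other one hot
counts = window_counts(example)
print(counts[:3])                      # -> [7, 6, 7]
print(len(counts))                     # -> 28 overlapping windows
```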

Eschenbach used all the available data to develop his “Poisson model” for the likelihood of a thirteen-month run of hot months, but it has been pointed out that it is preferable to exclude the actual event to be “predicted” from the data used to build the model. By including the last two “hot years” in the modeling data, Eschenbach fattened the tail of the data distribution – one reason he overestimated the Poisson lambda.

In my previous post, I suggested that we could simply look at the data, find the correlated probability of a hot month following a hot month, and from that explicitly calculate the chance of 13 hot months. Excluding 2011 and 2012 from consideration, that correlated probability is .4368. From it we can calculate the chance of a run of 13 hot months: (.3333)(.4368)^12, or about 1 in 62,000.
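The arithmetic is easy to check with the probabilities quoted above:

```python
# One run of 13 hot months: P(first month hot) * P(hot | hot)^12,
# using the values quoted in the text.
p_hot = 0.3333            # unconditional chance a month is in the hottest third
p_hot_given_hot = 0.4368  # chance a hot month follows a hot month (pre-2011)

p_run_of_13 = p_hot * p_hot_given_hot ** 12
print(p_run_of_13)        # ~1.6e-05
print(1 / p_run_of_13)    # ~62,000, i.e. about 1 in 62,000
```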

But what about other runs? What is the chance of 12 in 13 … or 6 in 13 … or 0 in 13? To find those answers, we can build a matrix of all possible permutations (order matters) of 13 months, each either hot or not, and then use a set of conditional probabilities to calculate the probability of each permutation. We need four such conditionals, but only one additional one must be estimated from the data: the probability that a not-hot month follows a not-hot month, calculated the same way as the hot-follows-hot probability above. This gives the following four probabilities:

P(H|H) = 0.4368, the probability that a hot month follows a hot month
P(H|N) = 1 – P(N|N) = 0.2854, the probability that a hot month follows a not-hot month
P(N|H) = 1 – P(H|H) = 0.5632, the probability that a not-hot month follows a hot month
P(N|N) = 0.7146, the probability that a not-hot month follows a not-hot month

The probability of each permutation of hot and not-hot months can then be calculated. By definition these probabilities sum to 1, and the code verifies this to confirm that the permutation matrix was created properly. We can then bin the resulting sample space by the number of hot months and sum the probabilities in each bin. This gives the probability distribution displayed above as “Modified Binomial.”
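A minimal sketch of that calculation – my own reconstruction, not the linked code – assuming the first month of a window is hot with probability 1/3 and chaining the conditional probabilities above:

```python
from itertools import product

# "Modified binomial": enumerate all 2**13 hot/not sequences, score
# each with the chained conditional probabilities, and bin by the
# number of hot months. Probabilities are those quoted in the text.
P_H = 1 / 3      # assumed unconditional chance the first month is hot
P_HH = 0.4368    # P(hot | previous hot)
P_NN = 0.7146    # P(not | previous not)

def seq_prob(seq):
    """Probability of one 13-month hot(1)/not(0) sequence."""
    p = P_H if seq[0] else 1 - P_H
    for prev, cur in zip(seq, seq[1:]):
        if prev:
            p *= P_HH if cur else 1 - P_HH
        else:
            p *= (1 - P_NN) if cur else P_NN
    return p

dist = [0.0] * 14                        # index = number of hot months
for seq in product((0, 1), repeat=13):
    dist[sum(seq)] += seq_prob(seq)

print(abs(sum(dist) - 1) < 1e-9)         # -> True: sanity check
print(dist[13])                          # all 13 hot: ~1 in 62,000
```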

This is a better fit to the sample data (pre-2011) than Eschenbach’s truncated Poisson with lambda 5.2, by either the unweighted variance (which fits the visual PDF curve) or the probability-weighted variance (which fits the actual data occurrence).

One thing to note about this “modified binomial” (dependent events) is that it is wider and flatter than the “unmodified binomial” (independent events). The autocorrelation makes the tails, the unlikely events, more common and, therefore, reduces the probability of the more common events.

However, the “modified binomial” is not fat enough in the tails and is too tall in the middle. This is likely because the data are correlated over more than one month. Accounting for a two- or three-month lag would likely make the modified binomial even flatter and fatter.

But we can also fit the data to a truncated Poisson using the mean number of hot months (4.35) found in the actual sample (pre-2011), and we get an even better fit than either my modified binomial with one-month correlation or Eschenbach’s Poisson. I’m not surprised that Eschenbach’s Poisson is too fat in the upper tail: that is partly due to including the recent, rare event in his data set, and I suspect his “fit” process sought the least variance in this overly fat tail. I am surprised that a different Poisson (using the data mean 4.35) beats the modified binomial built with a one-month correlation. As noted in the previous paragraph, I believe this is because the correlation extends deeper than one month.

So what does the Poisson from the data mean tell us? For one, my objection regarding the range of the Poisson is negligible – we can pretty much ignore the possibility that the count of hot months in a series of 13 will exceed 13. In the Poisson from the data mean, this occurs only once every 5500 times; in Eschenbach’s fatter Poisson, once every 1000 times. In the Poisson from the mean, the chance of 13 in a row is once in every 2400 events – far more frequent than in the dependent modified binomial. Perhaps coincidentally, this comes close to matching Lucia’s HK estimate of 1 in 2000.
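Those odds can be checked directly from the Poisson PMF with lambda = 4.35. I use the untruncated PMF here; the truncation correction is a fraction of a percent at this lambda, so it does not change the round numbers:

```python
import math

# Checking the quoted odds against a Poisson with the data mean 4.35.
lam = 4.35

def pois_pmf(k, lam=lam):
    """Poisson probability of exactly k events."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

p13 = pois_pmf(13)                                   # exactly 13 hot months
tail = 1 - sum(pois_pmf(k) for k in range(14))       # P(X >= 14), "impossible"

print(1 / p13)     # ~2400: one run of 13 per ~2400 windows
print(1 / tail)    # ~5500: nonsense counts above 13 per ~5500 windows
```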

But why does the Poisson fit the data so well? Fundamentally, it has to do with the autocorrelation of hot/not months flattening the peak and fattening the tails of what would be a binomial distribution if we could assume independent events. Canonically, the Poisson can be used as an approximation to the binomial when n is large and p is small, such that lambda is less than 5 or so (larger np can be approximated with a normal distribution). In this case we have a mean of 4.35 derived from the data, although n is not large and p is not small. Dividing the mean by n = 13 gives p = .335 – which would be very close to correct if the hot months were independent. So the Poisson approximation, flatter and fatter than the binomial, mimics the behavior of an explicitly calculated PDF using the probabilities for (autocorrelated) dependent events. Pretty nifty, all in all.
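The flatter-and-fatter claim is easy to see side by side. A quick sketch comparing a binomial(13, .335) with a Poisson(4.35) – the binomial has variance np(1−p) ≈ 2.9 while the Poisson has variance 4.35, so the Poisson is shorter at the mode and heavier in both tails:

```python
import math

# Binomial(13, lam/13) vs Poisson(lam): same mean, different spread.
n, lam = 13, 4.35
p = lam / n                       # ~0.335, as in the text

def binom_pmf(k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def pois_pmf(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

for k in range(n + 1):
    print(k, round(binom_pmf(k), 4), round(pois_pmf(k), 4))
```

Around the mode (k = 4) the binomial is taller; out at k = 10 or 13 the Poisson is many times larger, which is the direction the autocorrelated data want.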

Code here