Richard L Smith: The role of statisticians in public policy debates over climate change
Dr Smith, author of "Elementary Reconstruction of the Hockey Stick Curve", is not a newcomer to the field. John Mashey, in a comment at Our Changing Climate, points out this article in the Spring 2007 newsletter of the American Statistical Association's Section on Statistics & the Environment.
… This controversy was the central feature of a late-breaking session entitled “What is the Role of Statistics in Public Policy Debates about Climate Change?” that was organized jointly by Edward Wegman (George Mason University) and myself at the 2006 Joint Statistical Meetings. The session took place in front of a standing-room-only audience and was chaired by Doug Nychka (National Center for Atmospheric Research).
The three speakers were Ed Wegman, J. Michael Wallace of the Department of Atmospheric Sciences, University of Washington, and myself. Ed and Mike both talked about the hockey stick reconstruction. Ed focused on statistical flaws that, in his view, render much of the current literature on this subject of doubtful validity. Mike presented the broader findings of a recent NRC panel that, while acknowledging the statistical issues raised in Wegman's report, defended the hockey stick curve based on a broader scientific context. My own talk, the final one of the session, was on a different subject: how another important climate controversy had recently been resolved. I also offered some personal perspectives on the role of statisticians in this kind of review.
Wegman is of course all the buzz on the climate blogs this week. Smith is the author of a recent pair of papers on the "hockey stick." Both Nychka and Wallace were members of the National Research Council's Committee on Surface Temperature Reconstructions for the Last 2,000 Years (a.k.a. the North report).
Smith summarizes Wegman’s description of the MBH98/99 use of PCA as follows…
At the core of the controversy is an incorrect use by Mann et al. of principal components (PCs). Ed gave a brief overview of PC analysis, which uses the eigenvalues and eigenvectors of the sample covariances of a data matrix X. However, as most commonly applied to large data sets, the actual calculation begins with a singular value decomposition of X itself, after subtracting the sample mean vector. A typical analysis by Mann et al. used a complete data record from 1902-1980 as a training data set to reconstruct temperatures from proxies for 1400-1995. However, the sample means they subtracted were based only on the data from 1902-1980, instead of the full series 1400-1995. This induced a bias in the first PC, and also biased the variances in a direction which gave greater weight to the first PC than a correct analysis would have done. To illustrate the point, a number of simulations were performed in which the true temperature series was represented by a stationary time series with no trend but red-noise autocorrelations, and the Mann et al. technique was applied to estimate the trend in these series. There was a strong tendency for the simulations also to show the hockey-stick shape, mimicking the actual curves produced by Mann et al.
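The short-centering effect Smith describes can be sketched in a few lines of NumPy. This is an illustrative simulation, not the actual MBH98/99 or Wegman code: the proxy count, AR(1) coefficient, and the simple "hockey-stick index" used to score the curves are all assumptions chosen for the demonstration. Each pseudo-proxy is trendless red noise; PC1 is computed once with the mean taken over the full record (conventional centering) and once with the mean taken over only a short late "calibration" sub-period, mimicking the 1902-1980 window inside a 1400-1995 record.

```python
import numpy as np

# Illustrative parameters (assumed, not from MBH98/99):
# 596 "years" ~ 1400-1995, with the last 79 ~ 1902-1980.
N_YEARS, N_PROXIES, PHI, CALIB_LEN = 596, 50, 0.9, 79

def simulate(seed):
    """Trendless AR(1) ('red noise') pseudo-proxies, one per column."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((N_YEARS, N_PROXIES))
    x = np.zeros((N_YEARS, N_PROXIES))
    for t in range(1, N_YEARS):
        x[t] = PHI * x[t - 1] + eps[t]
    return x

def pc1(data, rows):
    """First PC scores via SVD after subtracting the mean of only
    `rows`: short-centering when `rows` is a sub-period, conventional
    centering when it is the whole record."""
    xc = data - data[rows].mean(axis=0)
    u, s, _ = np.linalg.svd(xc, full_matrices=False)
    return u[:, 0] * s[0]

def hs_index(pc):
    """How far the calibration-period mean of a PC sits from the rest
    of the series, in units of the pre-calibration standard deviation
    (an ad hoc 'hockey-stick' score for this sketch)."""
    pre, cal = pc[:-CALIB_LEN], pc[-CALIB_LEN:]
    return abs(cal.mean() - pre.mean()) / pre.std()

if __name__ == "__main__":
    X = simulate(0)
    calib = slice(N_YEARS - CALIB_LEN, N_YEARS)
    # Short-centering tends to yield a much larger hockey-stick score,
    # because PC1 is pulled toward proxies whose calibration-period
    # mean happens to differ from their long-run mean.
    print("short-centred HS index:", hs_index(pc1(X, calib)))
    print("fully centred HS index:", hs_index(pc1(X, slice(None))))
```

The mechanism matches the quoted description: subtracting only the calibration-period mean leaves each column with a nonzero offset outside that window, inflating its apparent variance, so the SVD's leading direction aligns with those offsets and PC1 inherits a flat "shaft" plus a deviating "blade" even though no series contains a trend.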
I leave the rest of the session report for the interested reader:
The role of statisticians in public policy debates over climate change