
McShane-Wyner: Death of a Hockey Stick?

2010 August 18

We have the WUWT crowd declaring the death of the hockey stick. Lambert declares it more of a hockey stick than the original. How do the hockey sticks stack up … literally? We take a quick peek below the fold.

[Figure: McShane-Wyner Fig 16]

First up, Mann 99 (Northern Hemisphere) …

[Figure: McShane-Wyner Fig 16 with Mann 99 (NH) superimposed]

Next, Mann 08 (Northern Hemisphere)

[Figure: McShane-Wyner Fig 16 with Mann 08 (NH) superimposed]

Next, Mann 08 (global)

NOTE: Messed up the vertical scaling on this and don’t have time to fix it until tonight. (Fixed and updated.)

[Figure: McShane-Wyner Fig 16 with Mann 08 (global) superimposed]

As far as the range of the reconstructions goes, there is an obvious divergence of trends from about 1450-1500 or so. On the other hand, Mann’s reconstructions and McShane-Wyner’s don’t seem to diverge to the extent that they are incompatible. There is a large overlap in the uncertainties.
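
For anyone who wants to repeat the overlay exercise, a minimal sketch follows. It assumes the two reconstructions are already on a common annual grid; the array names, the synthetic placeholder series, and the fixed 95% half-widths are assumptions for illustration, not values taken from MW10 or Mann 08.

```python
# A minimal sketch (not the post's actual plotting code) of overlaying two
# reconstructions and checking where their 95% envelopes overlap.
# All array names and the synthetic data are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1000, 2001)
# Placeholder series standing in for, e.g., Mann 08 NH and McShane-Wyner.
recon_a = 0.0005 * (years - 1500) + 0.1 * np.sin(years / 50.0)
recon_b = recon_a - 0.3 * (years < 1500)          # crude offset before ~1500
half_width_a, half_width_b = 0.25, 0.40           # assumed 95% half-widths (deg C)

lo_a, hi_a = recon_a - half_width_a, recon_a + half_width_a
lo_b, hi_b = recon_b - half_width_b, recon_b + half_width_b

# Two intervals overlap wherever the lower of the highs exceeds the higher of the lows.
overlap = np.minimum(hi_a, hi_b) >= np.maximum(lo_a, lo_b)
print(f"Envelopes overlap in {overlap.mean():.0%} of years")

plt.fill_between(years, lo_a, hi_a, alpha=0.3, label="recon A 95% band")
plt.fill_between(years, lo_b, hi_b, alpha=0.3, label="recon B 95% band")
plt.plot(years, recon_a, lw=1)
plt.plot(years, recon_b, lw=1)
plt.xlabel("Year")
plt.ylabel("Temperature anomaly (deg C)")
plt.legend()
plt.show()
```

With real data you would substitute the digitized series and their published uncertainty bands; the overlap test is just the standard interval-intersection check.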

McShane and Wyner are well aware of this and present Fig 17, their own superimposition of their reconstruction on previous paleo-reconstructions. There is a difference of about 0.6C over a period of 500 years between MW and the other reconstructions. That is about the same amount of warming as we have seen over just the last 100 years.

[Figure: McShane-Wyner Fig 17]

A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable?

  1. PolyisTCOandbanned
    2010 August 18 at 8:24 am

    I think where you really end up is a view that the recent warming is anomalous, combined with some sort of semi-static view of the past. Because of various issues with Mike’s recons (matching two long trends rather than matching elaborate wiggles, which has been pointed out previously by Zorita and Burger and is not a new finding by McShane and Wyner, plus some amplification/selection issues), the uncertainty bands are larger than Mike describes.

    Basically we kinda have crap for good proxies. Mike uses more and more high-powered techniques to peer into the noise. But he is peering into noise.

    At the end of the day, the reasonable Bayesian hunch is recent warming, driven by CO2. Previous centuries more placid and similar to the beginning of the AGW regime. But floor-to-ceiling uncertainty bands, because the data just blows…stupid crappy proxies. Only ones I really like are ice cores and Kim Cobb’s corals. 😦

  2. PolyisTCOandbanned
    2010 August 18 at 9:30 am

    I find myself more and more down on PNAS and Science and Nature. The space limitations mean that the paper is more about the headline, the recon itself, than the methodology. But this is a really tricky problem and people have been hacking at it for a while. Advances are still possible, but they are non-trivial. Papers need to be about methods.

  3. 2010 August 18 at 9:54 am

    A commentator at Deltoid points to Cherry Blossoms in Kyoto. I am intrigued and find it charming.

    http://onlinelibrary.wiley.com/doi/10.1002/joc.1594/abstract

    https://www.cfa.harvard.edu/~wsoon/MiyaharaHiroko08-d/AonoKazui07-Aug23-KyotoSpring.pdf

    I take it that the proxies number in the thousands. An emphasis on proper handling of them would not be misplaced.

    But I think I would like to take a stroll through the available proxies to get a sense of the variety, quality, and spatial-temporal distribution of them. Where is the link for the guy with the link-enabled html image map?
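
    A hypothetical first pass at that stroll, assuming the proxy metadata were available as a flat table. The table below is pure invention (names, types, coordinates, and years are made up), just to show the kind of summary I have in mind:

```python
# A made-up proxy metadata table standing in for whatever listing accompanies
# the archived data; the column names and rows are invented for illustration.
import pandas as pd

proxies = pd.DataFrame({
    "name":       ["treering_01", "treering_02", "icecore_01", "coral_01", "speleo_01"],
    "type":       ["tree ring", "tree ring", "ice core", "coral", "speleothem"],
    "lat":        [44.0, 61.5, -75.1, 5.9, 30.2],
    "lon":        [-110.0, 25.3, 123.4, -162.1, 110.5],
    "start_year": [1400, 980, 1000, 1780, 1050],
    "end_year":   [1990, 1985, 1995, 1998, 1980],
})

# Variety and reach: counts and earliest coverage per proxy type.
summary = proxies.groupby("type").agg(
    count=("name", "size"),
    earliest=("start_year", "min"),
    median_start=("start_year", "median"),
)
print(summary.sort_values("count", ascending=False))

# Rough spatial coverage: how many proxies fall in each 30-degree latitude band.
bands = pd.cut(proxies["lat"], bins=range(-90, 91, 30))
print(proxies.groupby(bands, observed=True).size())
```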

  4. RickA
    2010 August 18 at 3:24 pm

    When I look at figure 17, I see that the thick yellow line was mostly descending from 1000 AD to around 1850 and then mostly ascending from 1850 to the present.

    I also see that the thick yellow line was higher in 1000 AD than at present.

    I also see large gray error bands.

    So while it probably has been getting warmer since around 1850 – maybe it is not warmer now than at any time in the last 1000 years or so (according to this paper).

    Does that seem reasonable?

  5. 2010 August 18 at 5:35 pm

    That seems reasonable. But we don’t need to eyeball. The authors give us their own interpretation:

    Another advantage of our method is that it allows us to calculate posterior probabilities of various scenarios of interest by simulation of alternative sample paths. For example, 1998 is generally considered to be the warmest year on record in the Northern Hemisphere. Using our model, we calculate that there is a 36% posterior probability that 1998 was the warmest year over the past thousand. If we consider rolling decades, 1997-2006 is the warmest on record; our model gives an 80% chance that it was the warmest in the past thousand years. Finally, if we look at rolling thirty-year blocks, the posterior probability that the last thirty years (again, the warmest on record) were the warmest over the past thousand is 38%.

    – McShane and Wyner, 2010

    80% chance that 1997-2006 is the warmest decade in a thousand years.
    38% chance that 1980-2009 is the warmest 30-year period.
    36% chance that 1998 was the warmest year.
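
    For what it’s worth, a sketch of how numbers like these could be tallied from posterior sample paths. This is not MW10’s code; the array of draws, the toy random walks, and every parameter below are invented for illustration:

```python
# Tally posterior probabilities that a trailing block (year / decade / 30 yrs)
# is the warmest in the record, given an array of posterior sample paths.
# "paths" here is a made-up (n_draws x n_years) array, not MW10's output.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1000, 2007)                      # 1000..2006
n_draws = 5000
paths = rng.normal(0.0, 0.3, size=(n_draws, years.size)).cumsum(axis=1) * 0.01
paths[:, -50:] += np.linspace(0.0, 0.6, 50)        # toy recent warming

def prob_block_is_warmest(paths, years, block_end, block_len):
    """Posterior probability that the block of length block_len ending in
    block_end has the highest block-mean temperature in the whole record."""
    kernel = np.ones(block_len) / block_len
    # Rolling block means for every draw: shape (n_draws, n_blocks).
    block_means = np.apply_along_axis(
        lambda p: np.convolve(p, kernel, mode="valid"), 1, paths
    )
    target = np.searchsorted(years, block_end) - (block_len - 1)
    return np.mean(block_means.argmax(axis=1) == target)

print("P(1998 warmest year)        ~", prob_block_is_warmest(paths, years, 1998, 1))
print("P(1997-2006 warmest decade) ~", prob_block_is_warmest(paths, years, 2006, 10))
print("P(1977-2006 warmest 30 yrs) ~", prob_block_is_warmest(paths, years, 2006, 30))
```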

    Eyeballing, I would have to say that nowhere is the rate of change as great as it is today. Maybe that is an issue with the temporal resolution of the proxies. Or maybe that is just the way it is.

    As to the “large gray error bands”, that is the true point of the paper. Despite the trash-talk (Briggs’s phrase), MW10 doesn’t slay the hockey stick. It envelops it like a Gelatinous Cube and spreads the uncertainty out (mostly upwards).

  6. PolyisTCOandbanned
    2010 August 18 at 11:28 pm

    For rate of change there is a good paper by Von Storch and Rybeck (sp?). Basically the current rise is anomalously high.
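
    Not that paper’s analysis, but the eyeball comparison of rates can be made concrete along these lines; the toy series and the 100-year window are assumptions:

```python
# Fit a linear trend in every running 100-year window of a reconstruction and
# see where the most recent window's trend sits in the distribution of past
# trends. "recon" is a stand-in annual series, not real data.
import numpy as np

years = np.arange(1000, 2011)
rng = np.random.default_rng(1)
recon = np.cumsum(rng.normal(0, 0.02, years.size))    # toy series (deg C)
recon[-100:] += np.linspace(0, 0.7, 100)               # toy 20th-century rise

window = 100
trends = np.array([
    np.polyfit(years[i:i + window], recon[i:i + window], 1)[0] * 100.0  # deg C / century
    for i in range(years.size - window + 1)
])

recent = trends[-1]
print(f"Most recent {window}-yr trend: {recent:+.2f} deg C/century")
print(f"Fraction of earlier windows with a larger trend: "
      f"{np.mean(trends[:-1] > recent):.1%}")
```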

  7. toto
    2010 August 19 at 9:11 am

    But they spend a whole section showing that the model used in this figure is rubbish, right? At least that’s how I understand Figure 15 and Section 5.3.

    More generally, a lot of the paper seems to say, “we fit some models using proxy data or random processes as an input, we show that all our models are equally bunk at predicting hidden temps, and somehow we conclude that the problem lies with the proxies rather than with our model-building methods.”

    They use an “interpolation task” for validation which may or may not be problematic (guesstimating the middle part of a given function is not exactly what the proxies are used for).
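
    For concreteness, here is a toy contrast between the two holdout schemes: hiding a middle block of the calibration period (“interpolation”) versus hiding one end of it (“extrapolation”). The simple least-squares proxy regression, the data, and the variable names are all made up for illustration, not MW10’s procedure:

```python
# Compare RMSE from an interpolation holdout (middle block hidden) against an
# extrapolation holdout (one end hidden) for a toy proxy->temperature regression.
import numpy as np

rng = np.random.default_rng(2)
n_years = 150                                     # a stand-in instrumental period
temps = np.linspace(-0.3, 0.5, n_years) + rng.normal(0, 0.1, n_years)
proxies = temps[:, None] * rng.uniform(0.5, 1.5, 5) + rng.normal(0, 0.3, (n_years, 5))

def holdout_rmse(mask_test):
    """Fit a least-squares proxy->temp regression on the training years,
    score RMSE on the held-out years."""
    X = np.column_stack([np.ones(n_years), proxies])
    train = ~mask_test
    beta, *_ = np.linalg.lstsq(X[train], temps[train], rcond=None)
    pred = X[mask_test] @ beta
    return np.sqrt(np.mean((pred - temps[mask_test]) ** 2))

interp = np.zeros(n_years, bool); interp[60:90] = True     # hide a middle block
extrap = np.zeros(n_years, bool); extrap[-30:] = True      # hide the recent end

print("interpolation-holdout RMSE:", round(holdout_rmse(interp), 3))
print("extrapolation-holdout RMSE:", round(holdout_rmse(extrap), 3))
```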

    I guess there will be some interesting responses by the time the thing is finally published.

  8. PolyisTCOandbanned
    2010 August 19 at 12:01 pm

    If the proxies can’t guess those shorter periods, then they are very weak proxies. All you end up with is a degree or two of freedom if you’re only matching two long trends versus being able to wigglematch. A better test would be wigglematching of local temps. Of course, Mike’s methods don’t really do that either.
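
    A toy illustration of the trend-versus-wiggle point: a proxy that shares only a long trend with temperature can show a high raw correlation while its detrended (“wiggle”) correlation is near zero. The data and names below are invented:

```python
# Raw correlation vs. correlation after removing the linear trend, for a proxy
# that shares only the trend with temperature. Purely synthetic illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 150
trend = np.linspace(0.0, 1.0, n)
temp = trend + 0.1 * rng.normal(size=n)
proxy = trend + 0.3 * rng.normal(size=n)          # shares only the trend

def detrend(x):
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

raw_r = np.corrcoef(temp, proxy)[0, 1]
wiggle_r = np.corrcoef(detrend(temp), detrend(proxy))[0, 1]
print(f"raw correlation:       {raw_r:.2f}")
print(f"detrended correlation: {wiggle_r:.2f}")
```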

    BTW, this concern about the proxies is not new nor trivial. Good guys like Burger and Zorita have noted the issue.
