
Steve Goddard's Snowjob

2010 February 27


In the last several weeks, Steve Goddard has posted a series of threads on WUWT on Northern Hemisphere snow cover extent. (see here and here and here and here.) In one entitled North American snow models miss the mark – observed trend opposite of the predictions, Goddard uses data from Rutgers Global Snow Lab to claim that the latest 22-year trend for Winter (Dec, Jan, Feb) in the Northern Hemisphere invalidates the CMIP3 modeling of snow extent as presented by Frei and Gong in 2005 in their paper Decadal to Century Scale Trends in North American Snow Extent in Coupled Atmosphere-Ocean General Circulation Models. This paper is summarized at a Columbia University web page Will Climate Change Affect Snow Cover Over North America?

One immediately obvious problem with Goddard’s comparison is that he shows snow extent for the Winter (Dec, Jan, Feb) in the Northern Hemisphere, while the charts shown for the paper (and linked by Goddard) are January snow extent in North America. There is no reason for this apples-to-oranges comparison, since the Rutgers data is available for January in North America, though it is worth noting that Frei and Gong report a similar response across all months.

This is the January snow extent for North America for the full range available in the Rutgers GSL data set.

Goddard selected the last 22 years (Winter, NH) to demonstrate an increasing trend. No reason is given for that particular data range, although to the casual observer it appears to have been chosen to maximize the observed trend. Here we select the same 22 years for Jan NA.

Goddard provides no discussion of the statistical significance of the trends, claiming that such treatment is irrelevant when analyzing observations of historical, physical data. Here I have included a two-standard-deviation box around the 22-year trend for Jan NA.

Here I have included a trend and 2-sigma envelope for the whole 44-year data set.
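The trend-plus-envelope calculation described above can be sketched as follows. This is a minimal illustration with a synthetic series standing in for the Rutgers data, and it assumes independent residuals; serial correlation in real snow-cover data would widen the envelope.

```python
import numpy as np

def trend_with_2sigma(years, extent):
    """OLS linear trend and a 2-sigma half-width on the slope.

    Assumes independent, homoscedastic residuals.
    """
    years = np.asarray(years, dtype=float)
    extent = np.asarray(extent, dtype=float)
    n = len(years)
    x = years - years.mean()                      # center for stability
    sxx = (x ** 2).sum()
    slope = (x * (extent - extent.mean())).sum() / sxx
    intercept = extent.mean() - slope * years.mean()
    resid = extent - (intercept + slope * years)
    se_slope = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    return slope, 2.0 * se_slope

# Illustrative synthetic series (NOT the Rutgers GSL data):
rng = np.random.default_rng(0)
yrs = np.arange(1967, 2011)                       # 44 "years"
sce = 15.5 + rng.normal(0.0, 0.8, yrs.size)       # trendless noise
slope, half_width = trend_with_2sigma(yrs, sce)
```

A trend is then judged distinguishable from zero (at roughly 95%) only if `abs(slope) > half_width` — the question the envelope in the chart is meant to answer visually.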

The data set that Goddard seeks to compare the historical data with is derived from nine models included in CMIP3 as run for IPCC AR4. The nine models display an obviously wide-ranging set of values for the actual snow cover extent, but Frei and Gong show that all nine predict decreasing SCE trends in the 21st Century. Here I use an image processor (GIMP) to overlay the historical data, shown in gray with a 9-year two-sided running average (per Frei and Gong) in black.
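The two-sided (centered) running average used for the black curve can be sketched like this; the 9-year window follows Frei and Gong, and the NaN treatment of the endpoints is one common convention, not necessarily theirs.

```python
import numpy as np

def centered_running_mean(values, window=9):
    """Two-sided running mean with an odd window.

    Points where the full window does not fit (the first and last
    window//2 entries) are left as NaN rather than padded.
    """
    values = np.asarray(values, dtype=float)
    half = window // 2
    out = np.full(values.shape, np.nan)
    for i in range(half, len(values) - half):
        out[i] = values[i - half : i + half + 1].mean()
    return out

smoothed = centered_running_mean(np.arange(20.0))
```

Because the window is centered, a purely linear series passes through the smoother unchanged at interior points, which makes this a convenient sanity check.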


Here I zoom into the data range from 1960-2020 to better display the current data.


If it stood alone, the 2.15 t-value for the 22-year trend would indicate statistical significance. But it does not stand alone: Tamino at Open Mind demonstrates that when you cherry-pick the trend with the greatest t-value out of a 44-point data set, you need a t-value of 3.75 to achieve statistical significance.
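Tamino's selection-effect point can be illustrated with a small Monte Carlo: generate trendless noise, take the trailing trend with the largest |t| over all window lengths, and ask what threshold that maximum exceeds only 5% of the time. The minimum window length and simulation count here are illustrative choices of mine, not Tamino's actual procedure.

```python
import numpy as np

def trend_t(y):
    """t-statistic of the OLS slope of y against its index."""
    n = len(y)
    x = np.arange(n) - (n - 1) / 2.0
    sxx = (x ** 2).sum()
    slope = (x * (y - y.mean())).sum() / sxx
    resid = y - y.mean() - slope * x
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    return slope / se

def max_trailing_t(y, min_len=10):
    """Largest |t| over all trailing windows of length >= min_len."""
    return max(abs(trend_t(y[-k:])) for k in range(min_len, len(y) + 1))

rng = np.random.default_rng(42)
n_sims, n_pts = 2000, 44
maxima = [max_trailing_t(rng.standard_normal(n_pts)) for _ in range(n_sims)]
crit = float(np.quantile(maxima, 0.95))
```

With pure noise and no selection, a single 22-point trend is "significant" at |t| ≈ 2.02; `crit` comes out well above that, showing why the cherry-picked 2.15 falls short.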

Simply eyeballing the historical trend in the overlays is insufficient to invalidate the predicted trends. Nevertheless, with the actual modeling data, some progress might be made in measuring the skill of the models in the opening decade of the 21st Century. I have requested access to the original CMIP3 data for this purpose.

Update: The following chart is from Zeke Hausfather, who calls the Yale Forum on Climate Change and the Media home. He mentions in the first comment that he has done a trend analysis based on the data from Frei. This is his chart.

Well-crafted analysis on NA SCE by Chad @ Trees for the Forest
North American Snow Cover

  1. 2010 February 28 at 1:35 am

    Nice timing, I was just working on this issue last week before I got distracted by shiny temperature graphs.

    I emailed Frei and he sent me the data in those charts for A1B; you can find it here: http://drop.io/0yhqyon/asset/modelsnow-csv

    I also did a first pass at an observation/model trend comparison here: http://drop.io/0yhqyon/asset/snow-cover-1967-2010-band-png

  2. 2010 February 28 at 2:16 am

    You’re all over it Z. A one man mathematical, code cranking, chart churning, statistical machine!

    I thought of emailing Frei as well, but I guess I’m a day late and a dollar short. 😉

    Nice job as usual.

  3. Doug proctor
    2012 February 13 at 5:42 pm

    Doesn’t this post just demonstrate that the models may or may not reflect historical data, and that they may or may not (therefore) be predictive?

    In connection with this, Goddard really says nothing about the models, except that the data to support the models is very limited and not at all conclusive. If your comfort or decision-making is in the modelling, fine. But if your comfort and decision-making is said to be in the data – “settled” and “certain” require data, not models – then you have just demonstrated that a priori reasoning is what motivates you.

    Me, with the tax dollars and concepts of individualism and freedom of choice hanging in the balance, I want a posteriori reasoning.

  4. 2012 February 16 at 5:43 am

    The problem, Doug, is Goddard’s selective use of a subset of data to advance a position not supported by the full set of data.

    Your arguments of ‘settled science’ and political rights have no bearing on the above analysis. Take it elsewhere.

  5. 2012 February 26 at 10:36 am

    I have reposted this on Really Sciencey as I thought it appropriate to events there. I hope you don’t mind but I felt I needed to alter the opening paragraph to indicate that the information was from 2010.
