
NCDC DS-9640: CONUS Temperature Anomalies and a couple of ax-grinders

2012 May 12

For some reason, two posts at The Blackboard, It’s “Fancy,” Sort of … (Shollenberger) and “To get what he wanted”: Upturned end points. (Lucia), seem to be having difficulty understanding the mechanics of another post at “Open Mind”, In the Classroom (Tamino). But there is nothing unusual or difficult about the methods Tamino used to create the charts that have generated so much smoke and apparent frustration at The Blackboard, resulting in an outbreak of mcintyretude: scorn, derision, insults, and the questioning of motives. Since this is mostly a quick walk-through of some code to clear the smoke, I will leave the generated charts for the end of the post.

Most will recognize this as R code. First, load some common libraries. "Reshape" is used for restacking the monthly columns as individual rows. "Cairo" provides the CairoPNG graphics device I use on Windows and on the headless CentOS webhost.

library("reshape")
library("Cairo")

Next, read the data. Tamino included a pointer to the data portal which held the FTP link to the data. Lucia found it without issue, as did I. While it is not absolutely necessary to use the same data set for verifying general findings, it is necessary when exploring the finer details of another's methods, such as analyzing end-point behavior.

# read data
fd964="ftp://ftp.ncdc.noaa.gov/pub/data/cirs/drd964x.tmpst.txt"
fw=c(3,1,2,4,7,7,7,7,7,7,7,7,7,7,7,7)
d964col=c("code","div","element","year","Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
d964=read.fwf(fd964,fw)
colnames(d964)=d964col

The state.README file in the FTP directory provides the information to find the TOB-adjusted, area-weighted national data series, coded as '110'. Since I missed defining the NA values in the read command, I define them now.

# get lower 48 / CONUS
# station id = 110 (see state.README)
# unique(d964[d964[,1]==110,c(1:3)]) 
conus=d964[d964[,1]==110,c(4:16)]
conus[conus==-99.90] <- NA

The column format makes it easy to find monthly anomalies. The anomaly is just the difference from the 1895-2011 mean of each month; 2012 is excluded from the baseline since it has only partial data.

# anomalize
anoms = t(conus[,2:13]) - colMeans(conus[1:117,2:13],na.rm=T)
conus[,2:13] = t(anoms)

Reshape the column matrix into a row series, one row per year-month. Convert the month labels into a numeric value (yyyy.monthfrac). Sort into time order. Convert to Celsius. Nothing fancy here.

# transpose months from columns to rows
conus_ym=melt(conus, id=c("year"))

# convert to numeric values
# merge year and months into y.m value
conus_ym[,2] = conus_ym[,1]+match(conus_ym[,2],month.abb)/12 - 1/24

# resort into time order
conus_ym = conus_ym[order(conus_ym[,2]),]

# convert to celsius
conus_ym[,3] = conus_ym[,3]*5/9

The monthly chart is very noisy. To better see the behavior of the data, calculate and plot the 12 month averages. But we run into a common data issue: the last year, 2012, only has data for the first four months of the year. How inconvenient. What should we do? There are three rather obvious methods of dealing with that.
1. Drop the data (and lose four months of information)
2. Use the four-month mean of the data (Jan12-Apr12) as the annual mean ('dangerous' with high variability and so few data points)
3. Use the previous 12 months (May11-Apr12) as a proxy for 2012 (uses some data twice)

I usually choose option 1, mostly because I am lazy and don't know better. I sometimes use option 2, based on my sense of percent-complete and variability. I have heard of option 3, but don't recall a time when I have used it. I don't know how common it is, either, but using data more than once is generally a no-no. This is rule-of-thumb stuff, and I don't know the technical merits of when each is most appropriate. I am sure that there are more sophisticated treatments for incomplete data at end points (prediction from trend models, for instance), but these are the easy ones. A sketch of that idea follows.
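
For illustration only, here is a minimal sketch of the 'prediction from trend' approach, assuming the conus_ym frame built above: fit a straight line to the monthly anomalies, predict the missing May-Dec 2012 months from it, and blend the predictions with the observed Jan-Apr values. The variable names are mine; this is not something Tamino (or I) actually used.

# sketch: fill May-Dec 2012 from a simple linear trend (illustration only)
fit = lm(conus_ym[,3] ~ conus_ym[,2])          # anomaly (deg C) vs decimal year
missing = 2012 + (5:12)/12 - 1/24              # decimal-year values for May-Dec 2012
pred = coef(fit)[1] + coef(fit)[2]*missing     # trend-predicted anomalies
obs2012 = conus_ym[conus_ym[,1]==2012 & !is.na(conus_ym[,3]), 3]  # observed Jan-Apr 2012
ann2012_trendfill = mean(c(obs2012, pred))     # blended 'annual' value for 2012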

Did Tamino choose a 'sophisticated' method for dealing with the incomplete data for 2012? Nope. He did the same as I would, discarding the data. The lazy man's way.

# busy monthly chart
plot(conus_ym[,3]~conus_ym[,2], type="l")

# 12 month averages
ave12 = aggregate(conus_ym[,3],list(conus_ym[,1]), FUN=mean, na.rm=T)

# But 2012 only has 4 months of data
# We could drop the year (and only lose 4 months of data)
# or we could display the mean of the 4 months (blue)
# or we could use the mean of May2011-Apr2012 (red)
idx=1408 # Apr 2012
jan2012apr2012 = ave12[118,2]
may2011apr2012 = mean(conus_ym[(idx-11):idx,3])  # 12 months: May 2011 - Apr 2012
# drop the last year
ave12 = ave12[1:117,]

I am going to produce two charts here. The first is a near replica of Tamino's yearly average as found here (option 1 from above). The second shows what happens if you choose option 2 (blue) or option 3 (red) from above.

CairoPNG("conus-loess-ave12-a.png", width=600,height=400)
plot(ave12[,2]~ave12[,1],type="l",
	xlab="",ylab="Temperature Anomaly (deg C)")
points(ave12[,2]~ave12[,1],pch=20)
title(main="USA48",sub="http://rhinohide.wordpress.com     20120512",
	font.main=1,cex.main=2,cex.sub=.9)
dev.off()

CairoPNG("conus-loess-ave12-cd.png", width=600,height=400)
plot(ave12[,2]~ave12[,1],type="l",ylim=c(-1.25,3.2),xlim=c(1895,2012),
	xlab="",ylab="Temperature Anomaly (deg C)")
points(ave12[,2]~ave12[,1],pch=20)
points(2012,jan2012apr2012,pch=20,col="blue")
points(2012,may2011apr2012,pch=20,col="red")
title(main="USA48",sub="http://rhinohide.wordpress.com     20120512",
	font.main=1,cex.main=2,cex.sub=.9)
dev.off()

Tamino then moved to a chart of 5-year averages of the monthly data, further reducing variability. He also introduced a lowess smooth of the (monthly) data, presumably to demonstrate that these two common smoothing methods produce essentially the same result.

So we are once again faced with the incomplete-data issue. The five-year bins begin in 1895; the first bin ends in 1899, and so on until we reach the second-to-last bin, 2005-2009. But the last five-year bin is incomplete and only has 28 months of data in it: Jan 2010 to Apr 2012. We have the same three options as above.
1. Discard the data (and lose 28 months of information)
2. Use the mean of Jan10-Apr12 as the last 'five year' average
3. Use May07-Apr12 as the last 'five year' average

Did Tamino use an unusual method of dealing with the last 5-year bin? Nope. He did exactly what I did in the previous post regarding Canadian oil sands production. In that post, I only had 9 months of data but wanted an annual average. I took the nine-month average and ran with it, but did put a note in the post that I had approximated that data point. That Tamino did not do so in his post is the only valid criticism of his post that I have read.

But that leaves one other problem. What the heck is a lowess smooth? I have poked at them some when trying (and failing) to reconstruct a chart of David Archibald's. Tamino apparently uses a hand-coded lowess function, but kindly pointed out that you could use the default R lowess function to generate a similar fit. Unfortunately, the default lowess also uses a default window of 67% of the data points, a huge window that overly smooths this data. To get a better fit, you need to decrease the window. Tamino suggests a window of about 20%; he also notes that a window of 15% will cause the lowess to dip in the final few years, the result that Brandon is searching for to match his intuition. Obviously, the smaller the window, the better the 'fit'. But which is the better 'smoothed fit'? At this level (off-hand blog posts), it seems pretty subjective. If you want a 'close fit', you want smaller values (how small is too small?) but you lose the smoothing. If you want broader, more long-term behavior, you want larger windows and greater smoothing. Since the range of the data is over a century, I think we should be looking at the longer windows, not the shorter ones.
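
To put those window fractions in concrete terms, here is a quick back-of-the-envelope conversion of the lowess span f into an approximate window width in years, assuming the 1408 monthly points from Jan 1895 through Apr 2012:

# rough translation of the lowess span f into a window width in years
n = 1408                    # monthly points, Jan 1895 - Apr 2012
f = c(0.15, 0.20, 2/3)      # candidate spans (2/3 is the R default)
round(f * n / 12, 1)        # approx. widths in years: 17.6  23.5  78.2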

I’ve plotted both on the chart below. For comparison, here is Tamino’s original chart.

# 60 month bins
idx60 = as.integer(1+(conus_ym[,2]-conus_ym[1,2])/5)
conus_ym = cbind(conus_ym,idx60)
ave60 = aggregate(conus_ym[,3],list(conus_ym[,4]), FUN=mean, na.rm=T)
ym60 = (1:24)*60 - 30   # row index of each 60-month bin's midpoint

# error info
sem = function(x) sqrt(var(x,na.rm=T)/sum(!is.na(x)))  # divide by the count of non-NA values
sem60 = aggregate(conus_ym[,3],list(conus_ym[,4]), FUN=sem)

# we have the same issue as above
#   do we drop 2010-2012 (which loses 28 months of data)
#   use the mean of those 28 months
#   or use Apr 2012 + 59 previous months (which uses some data twice)
alt_last_60 = mean(conus_ym[(idx-59):idx,3])  # Apr 2012 plus the previous 59 months

CairoPNG("conus-loess-ave60.png", width=600,height=400)
plot(ave60[,2]~conus_ym[ym60,1],type="l",ylim=c(-.9,1.4),
	xlab="",ylab="Temperature Anomaly (deg C)")
points(ave60[,2]~conus_ym[ym60,1],pch=20)
# R default f = .67, the smaller the f, the smaller the window, the less smoothing
lines(lowess(conus_ym[1:1408,2],conus_ym[1:1408,3],f=0.15),col="blue")
lines(lowess(conus_ym[1:1408,2],conus_ym[1:1408,3],f=0.20),col="red")
lines(c(1895,2012),c(0,0),col="gray")
arrows(conus_ym[ym60,1],ave60[,2],
	conus_ym[ym60,1],ave60[,2]+sem60[,2]*2,
	angle=90,length=.03)
arrows(conus_ym[ym60,1],ave60[,2],
	conus_ym[ym60,1],ave60[,2]-sem60[,2]*2,
	angle=90,length=.03)
title(main="USA48",sub="http://rhinohide.wordpress.com     20120512",
	font.main=1,cex.main=2,cex.sub=.9)
dev.off()

Approached as a learning activity, this is a neat little exercise. I got a quick look at the NCDC DS-9640 dataset. I also got to read the caveat in the README that the last few years of data are subject to change. While I attempted to use a lowess once before to replicate a chart of Archibald's, this is the first post in which I have used one, and I got to play with sliding windows. I don't recall binning data into 5-year chunks before. I know I have never charted the whisker-like error bars before. So there is a lot of opportunity to use Tamino's post as a touchstone to start an exploration of the data space and analysis methods.

You can find the R script here.

  1. 2012 May 12 at 8:55 am

    Ron,

    You may have the word of the day:

    http://neverendingaudit.tumblr.com/post/22906659745

    Congratulations!

    PS: Beware labeling. It can be used against you.

  2. 2012 May 12 at 3:32 pm

    Ron–
    I stopped by before, wrote a really long comment and was foiled by the moderation. (I always forget I should log into twitter, wordpress or facebook *first*. I rarely log into any of those, and for some reason, hitting "back" after trying to comment results in a blank comment window. Basically, the whole thing was lost.)

    I ended up posting a brief response at my blog. (Family things made it difficult to write much after noon, and will continue to do so. Mother’s day and all…. I will admit I am not drinking wine so don’t want to compose anything long. ) I’ll be back though possibly not until Monday. (Sorry. )

  3. 2012 May 12 at 6:37 pm

    Willard: Two front page quotes. Guess long quiet periods are fertile quote generation times. :D

    Lucia: Caught your comment at The Blackboard. Thanks for the runmean tip.

    Been looking at some ACF/PACF of the lowess residuals.

    I’ve been bit by the login “bug” at other sites. Not happy with it. Pretty sure it rises from our need to trace blog comments back to badguys.

  4. 2012 May 12 at 7:02 pm

    Out of curiosity, is there a “best” way to pick values of f for LOWESS? As far as I can tell, discussions suggest it’s an art as much as anything else. Mind you– I think ‘good’ values of f could be easy to agree on if there was a large separation between the time scales of the ‘noise’ and the ‘signal’. But I think there isn’t in the cases that interest us.

    So, for example, suppose we were arguing about the 'best' f, and both of us were just expressing POV absent this snit. Let's consider the volcanic eruptions. Is the dip after Pinatubo "real" or "noise"? I tend to think it's "real" in the sense that we would see it in replicate samples of model runs. If it's "real", at least in some instances we would want to pick a value of "f" that shows the dip after the eruption of Pinatubo.

    On the other hand, this dip isn’t really “important” to the determination of AGW. So, maybe in some other instance, someone would want to take it out. I wouldn’t; I’d just say show it and explain it. But I could imagine the argument made in good faith. This would argue for a larger value of ‘f’.

    But is one "right"? (FWIW, the higher range of values of 'f' that show the dip for Pinatubo show a dip at the end of the smoothed temperature plot. Of course, very small values of 'f' show a rise because they don't smooth and so follow the most recent temperatures. The very high values of 'f' also don't show the dip because it is warmer than 20 years ago. Since you've got LOWESS programmed you can eyeball that and see.)

    Pretty sure it rises from our need to trace blog comments back to badguys.

    Oh. Absolutely. Because I self host, I can see server logs. What I can see you might not imagine. I have at least 10 IPs a day try to log in. I see numerous hacking probes. And then there are the attempts at spam. I now don’t let people comment if their browsers don’t accept cookies. This is not so much because I want to set cookies, but I discovered an oddity. Quite a few bots present cookies, but don’t let the blog set them. There is a reason why a spam bot might do this– it makes the bot cheap to program and often manages to get spam posted. But real browsers can accept cookies if the users will permit them to do so. I set a very, very stupid cookie that records the time and vanishes when you log off.

    As for WP’s spam filtering: The annoying thing is if I click “post” and I haven’t logged in, for some reason the comment vanishes and I can’t get it back by clicking “back”. The same thing happens at TAV, and lots of WP blogs.

    At least for me, on my blog, if I click “back” I get to review my comment and can cut and paste into a text editor while I do whatever I need to do to be able to comment. (I will now copy out my whole comment just in case. :) )

  5. 2012 May 12 at 7:06 pm

    It occurred to me: on the comment 'bug', a similar thing happened when I was going to post on the death threat emails at Stokes's. It was probably just as well.

  6. Jim
    2012 May 13 at 11:19 am

    Ron, thanks for a good explanation of the sort-of dispute. It is very helpful to outsiders when there is illumination instead of argument.

    jim

  7. Ned
    2012 May 14 at 5:50 am

    For those who like to use Excel, there is a nifty (and free) add-in to do LOESS in Excel, available here:

    http://peltiertech.com/WordPress/loess-utility-awesome-update/

    As it happens, the data set they use as an example is … GISTEMP.

    I use LOESS fairly often. I wish I had some great insight to offer, about how to decide on a smoothing scale, but I don’t. Most people using LOESS just seem to qualitatively decide on the shortest time scale over which variance occurs that they’re interested in preserving, and go with settings that reproduce that.

  8. 2012 May 14 at 8:46 am

    the result that Brandon is searching for to match his intuition

    First, it’s worth noting that to some extent, Brandon’s intuition was based on

    a) comparing the LOWESS smooth to the annual average data Tamino chose to show along with his LOWESS smooth. In that graph, Tamino doesn't show his final annual average data point, nor the final 4 months of data. If the author of a post or creator of the graph is silent on the matter, it is not unnatural for a person to think the LOWESS smooth is a smooth of the discrete data shown. But in Tamino's case, it appears he retained the 4 months of data for his LOWESS but did not include it in his annual average. Nothing is said.

    b) thinking about the LOWESS smooth in terms of 5 year averaging windows. This is not unnatural given that Tamino prodded the reader to compare his LOWESS smooth to the 5 year running average. Tamino did not state his value of 'f' either in the blog post or later in his response to Brandon's question.

    Looking at the graphs Tamino provided, and given the prod of 5 years, 'intuition' would suggest that the smooth should dip at the end. One's intuition could easily be prodded to imagine the shape of Tamino's LOWESS graph if Tamino had mentioned the time scale for smoothing in his LOWESS: that binning window is 20 years, or 4 times the width seeded in the reader's mind by the overt discussion of a 5 year binning window. Or intuition could be prodded by mentioning the LOWESS is based on data through April, or generally by providing a sufficient number of details to permit the reader to have a clue what Tamino actually did.

    So, I think with regard to the communication problem the important questions aren't really whether f=0.15 is better or worse than f=0.20 but rather: Given the structure of the article, what would a reader be likely to imagine as the window size based on the contents of the article? And given the information in the graphs, is the data used in the 1 year average supposed to be the same as that used in the 5 year average? (So, for example, if the author is silent, is it unreasonable for the reader to imagine the same "drop/keep" rule was used for computing 1 year and 5 year averages? That is: drop final data in both cases? Or keep final data in both cases? Tamino dropped for the first and kept for the second and did not say. On a second example: if an author tells you you should have confidence in the LOWESS because it agrees with a simple 5 year average, is it reasonable for a reader to assume the window width for LOWESS is somewhat near 5 years, or are they supposed to intuit they are being asked to compare a 5 year running average to a LOWESS with a 20 year wide window?)

    With respect to the question Tamino seeded in the reader's mind (that someone 'out there' might suspect his smoothing choices were influenced by his desire to create a graph he "liked"): given that the choice of f=0.15 vs. f=0.20 does switch the end points from turning "down" to "up", and given that Tamino has spent quite a bit of time explaining how the notion that warming has slowed or reversed is wrong, why wouldn't someone think he might have picked f=0.20 because he "likes" that better than f=0.15? After all: how does one pick the "right" value of f using LOWESS? (How does one pick the "right" window length for running means? I think the answer to both is that there is rarely a "right" value.)

    Now, on to end point treatments — which is more interesting!

    1) There are actually scads of end point treatments, many of which have been used in climate. In fact, there are potentially so many that any analyst could select among them to make the end points conform most closely to what he "wants". (There is a limit if the data won't cooperate, but really, there are quite a few choices.) We discussed a whole bunch that have been used in climate in this post: http://rankexploits.com/musings/2009/source-of-fishy-odor-confirmed-rahmstorf-did-change-smoothing/ .

    You mentioned my favorite: avoid computing the 'smooth' outside the region where a normally symmetric filter becomes asymmetric. Failing that, recognize these are provisional in the sense that the 'smooth' values will change when fresh data are added, and admit those end points don't mean much. So, for this reason, I would prefer not to show a 5 year average computed using less than 5 years' worth of data. (This doesn't mean I never ever show smoothed values in regions where data run out. But I prefer not to, and I try very hard to remember to make it very clear that something about the smoothing changed at a certain point on the graph. Certainly if the point of the graph, or of the discussion it supports, has nothing to do with the end points, I would avoid showing 'smoothed' data outside the region where a symmetric window becomes asymmetric.)

    My second favorite: use the mean over a smaller and smaller window toward the end. With 13 month averaging, this means that the 7th point from the end is computed using the final 13 months, the 6th from the end is computed using the final 12 months… the last is computed using the final 7 data points. This is a very common method and pre-programmed in runmean in R, and in many packages. I'd say this is sufficiently popular that provided one says they used it and uses it consistently, people aren't going to suspect you of using it because you "like" the answer. (Note: your use of 9 out of 12 months makes your end point less noisy than using this method. In contrast, someone using fewer than 1/2 the data points is showing a noisier point, and I would avoid ever doing that. If there was some reason I felt I "needed" to show that smooth, I would use the 'mean' treatment in runmean and be sure to mention that I was using it. Also note: use of this method has some 'history' in climate communication; see the discussion of the Rahmstorf paper.)
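
    A minimal sketch of that shrinking-window treatment, assuming the caTools package and the conus_ym frame from the post above:

    # endrule="mean" shrinks the averaging window near the ends;
    # endrule="NA" refuses to compute where the full 13-month window doesn't fit
    library(caTools)
    x = conus_ym[1:1408, 3]                    # monthly anomalies, Jan 1895 - Apr 2012
    sm_shrink = runmean(x, 13, endrule="mean")
    sm_strict = runmean(x, 13, endrule="NA")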

    Many other methods involve extrapolating. This can be done a number of ways:
    a) assuming the temperature is frozen at its final value and computing the 13 month mean. This gives the end point lots of weight. (CRU did something like this, then changed their mind about the validity of the method in 2008 when the year started out cold.)

    b) assuming the data will revert to some pre-existing trend. If computing 5 year averages, the analyst might assume the trend will revert to the trend computed over the final 5 years, or they could assume it reverts to the trend over the full period in question. For example: it could revert to the 1895-April 2012 trend. Something similar to the former has been done. In the post I linked, Aslac Grinstead discusses a version of this method.

    c) Mannomatic smoothing, where you assume the future data will be a mirror image of previous data, flipping both horizontally and vertically. (Order doesn't matter.) This takes an element of assuming the past trend will persist into the future, but also specifically forces the curve through the final point. The latter feature creates graphs that may subliminally communicate the idea that the final point somehow magically managed to contain no noise this year.
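
    A rough sketch of that 'reflect both ways' padding (the function name and padding length are mine, for illustration):

    # pad the series by reflecting the last npad points through the final point,
    # flipping in both time and value, so a smooth of the padded series is pulled through x[n]
    reflect_pad = function(x, npad) {
      n = length(x)
      c(x, 2*x[n] - x[(n-1):(n-npad)])
    }
    # e.g. smooth the padded series, then keep only the original span:
    # lowess(reflect_pad(x, 60))$y[1:length(x)]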

    So really, there are many things one can do with endpoints even with "simple" box-car averaging. Any analyst should try to remember to say what he did, and certainly if asked about endpoints should say what he did, in some detail. Being parsimonious with information and then complaining that others don't accept (or even know) what you did is… well… Taminoesque.

    Interestingly enough, decisions about smoothing at end points aren't limited to simple running averages. For example, ordinarily LOWESS begins to use an asymmetric window near the edges. That's discussed here: http://www.mathworks.com/help/toolbox/curvefit/bsz6ber-1.html . This means the value shown at the end points is provisional and will change as more data arrive.

    Now… that means I think ideally one should try to indicate the point where the window becomes asymmetric. But at least if one uses the standard smoothing and mentions the choice of 'f' (or the number of data points associated with that choice), people familiar with the method have some hope of knowing how to interpret the meaning of the smooth at the end points. The interpretation includes: even if the analyst retains the same size binning window, near the ends the appearance of the smooth curve will change when more data are available.

    If instead one uses a 'modified' end treatment in LOWESS without explaining the modification, and in addition does not mention the value of 'f' used and is a bit parsimonious with other information, then truly no reader can possibly gauge what the end points might mean, whether the behavior is provisional, how the appearance of the smooth should relate to the underlying data, etc. In this case, if the reader's "intuition" about the relationship between the discrete data and the smooth is poor, I think that's the fault of the author/analyst who failed to provide sufficient information to permit the reader to understand what the graph even communicates. The author/analyst might, of course, remedy this by giving full and clear answers to questions asked. Or he might make things worse by giving overly parsimonious answers and delivering them in a rather snippy tone. That is, of course, the choice of the author/analyst.

  9. 2012 May 14 at 8:56 pm

    Good lord, Lucia! I think that’s longer than any post I’ve ever made. :D

    Thank you for the thoughtful comment.

  10. 2012 May 15 at 4:19 am

    Ron–
    I often post long things. :)
