
D’Aleo at Heartland: An Apple a Day

2010 May 24

Joseph D’Aleo made an appearance at the Heartland Institute’s recent conference. This post is my turn at the cracker barrel, a response mostly to D’Aleo’s Slide #50.

Claim: 75% of the stations disappear, many from colder higher latitudes and elevations and in stable areas of lower latitudes

Response: If it is D’Aleo’s intent to imply that losing ‘colder stations’ necessarily imparts a warming trend, this is a mathematical fallacy. D’Aleo provides no estimate of the change in trend due to the station drop. There have been several independent attempts to quantify a change in trend due to the factors that D’Aleo outlines; a toy demonstration of the underlying point follows the list:

Zeke Hausfather, January 21, 2010, posts a graph comparing the simple average temperature anomaly from 1,017 thermometers with data available through 2000 to that from 402 thermometers that stopped providing data sometime between 1970 and 2000. He finds no significant difference between the two traces, suggesting that station dropout is not an important source of bias.

Ron Broberg, Feb 1, 2010, posts an analysis of the effect of excluding high-latitude, high-altitude, and rural (low population density) stations from a globally gridded average anomaly using GHCN v2.mean (raw) data.

Roy Spencer, Feb 20, 2010, computes trends using data drawn from the NOAA-merged International Surface Hourly (ISH) dataset, a ground-based thermometer record. Using area weighting, he compares land-based temperature anomalies for the Northern Hemisphere, computed from thermometers in operation from 1986-2010, to trends published by CRU (which may also be affected by the GHCN station drop). He finds no difference in trend, although the monthly data from the ISH dataset appear noisier.

Tamino, Feb 23, 2010, presents a preliminary GHCN temperature analysis comparing area-weighted temperature anomalies for the Northern Hemisphere based on “cut-off” thermometer series against data from thermometers that remained in the record to the present time. He finds no significant difference between the two traces.

Clear Climate Code, Feb. 26, 2010 compares GISTEMP type calculations of global surface temperature anomalies based on the “full” and “cut-off” thermometer set. They find no major differences between the two traces.

Ron Broberg, March 3, 2010, repeats the high-altitude, high-latitude, and rural analysis with GISTEMP.

Lucia Liljegren, March 5, 2010, starts a short series using a small model (a spherical cow) to demonstrate the effects of station loss. There is no loss of global trends, although she notes that there can be a loss of information when trends are associated with features that are related to the stations themselves.
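
Here is the toy demonstration promised above. It is my own illustration, not any of the analyses listed, and the data are entirely synthetic: two stations share the same warming trend but have very different baseline temperatures. Dropping the cold station produces a spurious jump in the average of absolute temperatures, but leaves the average anomaly trend untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2001)
trend = 0.02 * (years - years[0])              # identical 0.02 C/yr warming

# A warm station (+15 C baseline) and a cold one (-10 C baseline).
warm = 15.0 + trend + rng.normal(0, 0.3, years.size)
cold = -10.0 + trend + rng.normal(0, 0.3, years.size)

def anomaly(series, yrs, base=(1951, 1980)):
    """Anomaly relative to the station's own base-period mean."""
    mask = (yrs >= base[0]) & (yrs <= base[1])
    return series - series[mask].mean()

# Mean of absolutes: dropping the cold station creates a +12.5 C jump.
print("abs mean, both:      %.1f C" % ((warm[-1] + cold[-1]) / 2))
print("abs mean, warm only: %.1f C" % warm[-1])

# Mean of anomalies: the fitted trend is unchanged by the dropout.
both = (anomaly(warm, years) + anomaly(cold, years)) / 2
print("anomaly trend, both:      %.4f C/yr" % np.polyfit(years, both, 1)[0])
print("anomaly trend, warm only: %.4f C/yr"
      % np.polyfit(years, anomaly(warm, years), 1)[0])
```

This is why losing cold stations shifts the mean absolute temperature but, by itself, does nothing to an anomaly-based trend.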

Claim: Missing months increase tenfold, most rural and in winter

Response: D’Aleo again seems to imply that losing ‘colder stations’ necessarily imparts a warming trend. This is still a mathematical fallacy. However, I am unaware of any attempt by D’Aleo or others to quantify the effects of this observation.

Claim: Urban adjustment removed or non-existent even as population grows 1.5 to 6.7B and most peer review finds significant contamination

Response: Urban adjustments are explicitly made in GISTEMP by comparisons with neighboring rural stations. In 2010, the use of nightlights was extended to include the ‘rest of the world’ as well as the US.

The urban adjustment in the current GISS analysis is a similar two-legged adjustment, but the date of the hinge point is no longer fixed at 1950, the maximum distance used for rural neighbors is 500 km provided that sufficient stations are available, and “small-town” (population 10,000 to 50,000) stations are also adjusted. The hinge date is now also chosen to minimize the difference between the adjusted urban record and the mean of its neighbors. In the United States (and nearby Canada and Mexico regions) the rural stations are now those that are “unlit” in satellite data, but in the rest of the world, rural stations are still defined to be places with a population less than 10,000. The added flexibility in the hinge point allows more realistic local adjustments, as the initiation of significant urban growth occurred at different times in different parts of the world.

The urban adjustment, based on the long-term trends at neighboring stations, introduces a regional smoothing of the analyzed temperature field. To limit the degree of this smoothing, the present GISS analysis first attempts to define the adjustment based on rural stations located within 500 km of the station. Only if these stations are insufficient to define a long-term trend are stations at greater distances employed. As in the previous GISS analysis, the maximum distance of the rural stations employed is 1000 km.

This homogeneity adjustment should serve to minimize the effect of nonclimatic warming at urban stations on the analyzed global temperature change. However, as discussed by Hansen et al. [1999], it should not be assumed that the adjustment always yields less warming at the urban station or that it necessarily makes the result for an individual urban station more representative of what the temperature change would have been in the absence of humans. Indeed, in the global analysis we find that the homogeneity adjustment changes the urban record to a cooler trend in only 58% of the cases, while it yields a warmer trend in the other 42% of the urban stations. This implies that even though a few stations, such as Tokyo and Phoenix, have large urban warming, in the typical case, the urban effect is less than the combination of regional variability of temperature trends, measurement errors, and inhomogeneity of station records.
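
To illustrate the flavor of that two-legged adjustment, here is a rough sketch. This is my own toy code, not the actual GISTEMP implementation: it fits a broken line with a movable hinge to the urban-minus-rural difference series, chooses the hinge that minimizes the residual, and subtracts the fit.

```python
import numpy as np

def two_leg_fit(t, d, hinge):
    """Least-squares broken line in t with a knee at `hinge`."""
    X = np.column_stack([np.ones_like(t),
                         np.minimum(t - hinge, 0.0),   # early-leg slope term
                         np.maximum(t - hinge, 0.0)])  # late-leg slope term
    coef, *_ = np.linalg.lstsq(X, d, rcond=None)
    return X @ coef

def urban_adjust(t, urban, rural_mean):
    """Choose the hinge that best fits urban-minus-rural; remove the fit."""
    d = urban - rural_mean
    sse, hinge = min((np.sum((d - two_leg_fit(t, d, h)) ** 2), h)
                     for h in t[5:-5])                 # keep points on each leg
    return urban - two_leg_fit(t, d, hinge), hinge

# Synthetic example: urban warming beginning in 1970.
t = np.arange(1900.0, 2001.0)
rural_mean = 0.005 * (t - 1900)                        # regional trend
urban = rural_mean + np.where(t > 1970, 0.03 * (t - 1970), 0.0)
adjusted, hinge = urban_adjust(t, urban, rural_mean)
print("chosen hinge:", hinge)                          # ~1970
print("max residual vs rural:", np.abs(adjusted - rural_mean).max())
```

The movable hinge is the point of the Hansen et al. passage above: urban growth starts at different dates in different places, so the knee is fitted rather than fixed at 1950.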

For the record, I believe that there may be better measures and methods for detecting and dealing with UHI. But D’Aleo’s claim that there is no method for dealing with it in the surface records ignores GISTEMP.

Claim: ‘Modernization’ instruments had warm bias or increased uncertainty

Response: Menne 2009 documents that modernized instruments have raised daily minimum temperature readings but have lowered the daily maximums, with an overall cooling effect on daily averages.

The pairwise results indicate that only about 40% of the maximum and minimum temperature series experienced a statistically significant shift (out of ~850 total conversions to MMTS). As a result, the overall effect of the MMTS instrument change at all affected sites is substantially less than both the Quayle et al. (1991) and Hubbard and Lin (2006) estimates. However, the average effect of the statistically significant changes (−0.52°C for maximum temperatures and +0.37°C for minimum temperatures) is close to Hubbard and Lin’s (2006) results for sites with no coincident station move.

In addition, a number of sites (about 5% of the network) converted to the Automated Surface Observation System (ASOS) after 1992. Like the MMTS, ASOS maximum temperature measurements have been shown to be lower relative to values from previous instruments (e.g., Guttman and Baker 1996). Such results are in agreement with the pairwise adjustments produced in HCN version 2; that is, an average shift in maximum temperatures caused by the transition to ASOS in the HCN of about −0.44°C. The combined effect of the transition to MMTS and ASOS appears to be largely responsible for the continuing trend in differences between the fully and TOB-only adjusted maximum temperatures since 1985. On the other hand, while the effect of ASOS on minimum temperatures in the HCN is nearly identical to that on maximum temperatures (−0.45°C), the shifts associated with ASOS are opposite in sign to those caused by the transition to MMTS, which leads to a network-wide partial cancellation effect between the two instrument changes. Undocumented changes, which are skewed in favor of positive shifts, further mitigate the effect of the MMTS on minimum temperatures.

Zeke Hausfather, April 8, 2010, notes a slight cooling bias introduced by the shift from Liquid-in-Glass (LiG)/Cotton Region Shelter (CRS) measurement instruments to Maximum-Minimum Temperature System (MMTS) instruments.
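
The heart of the pairwise approach is simple enough to sketch. The toy code below is my own illustration (the real HCN version 2 algorithm also detects undocumented breaks and weighs multiple neighbors): difference the target station against a neighbor composite to remove the shared climate signal, then compare the mean difference before and after the documented change date.

```python
import numpy as np

def step_change(target, neighbor_mean, change_idx):
    """Estimate an instrument-change offset from a difference series.

    Differencing against neighbors removes the shared climate signal;
    the shift in the mean difference at the change date estimates the bias.
    """
    diff = target - neighbor_mean
    return diff[change_idx:].mean() - diff[:change_idx].mean()

rng = np.random.default_rng(1)
shared = np.cumsum(rng.normal(0, 0.1, 40))     # regional climate signal
neighbor_mean = shared + rng.normal(0, 0.05, 40)
target = shared + rng.normal(0, 0.05, 40)
target[25:] -= 0.4                             # MMTS-like drop in Tmax at year 25

print("estimated shift: %.2f C" % step_change(target, neighbor_mean, 25))
```

The estimate comes out near the planted −0.4 °C, the same order as the −0.52 °C mean Tmax shift Menne reports for the statistically significant MMTS conversions.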

Claim: ‘Modernization’ led to putting 90% stations in inappropriate locations where they have a distinct warm bias

Response: Menne 2010 looked for warming bias in USHCN due to station locations and found none.

Recent photographic documentation of poor siting conditions at stations in the U.S. Historical Climatology Network (USHCN) has led to questions regarding the reliability of surface temperature trends over the conterminous U.S. (CONUS). To evaluate the potential impact of poor siting/instrument exposure on CONUS temperatures, trends derived from poor and well-sited USHCN stations were compared. Results indicate that there is a mean bias associated with poor exposure sites relative to good exposure sites; however, this bias is consistent with previously documented changes associated with the widespread conversion to electronic sensors in the USHCN during the last 25 years. Moreover, the sign of the bias is counterintuitive to photographic documentation of poor exposure because associated instrument changes have led to an artificial negative (“cool”) bias in maximum temperatures and only a slight positive (“warm”) bias in minimum temperatures.

Claim: Homogenization and other adjustments blend the good with the bad usually cooling off early warm periods, producing a warming where none existed

Response: Either you correct for changing instrumentation or you don’t. Either you correct for location changes or you don’t. Homogenization helps correct those stations which experience induced sudden changes due to instrument changes, method of observation changes, station location changes, or station environment changes.

Homogenization algorithms dealing with ‘knees’ of discontinuity have a choice: raise one leg or lower the other. As I have read it, the choice to adjust the older leg, thus minimizing the changes to the most recent data, was deliberate, to avoid the confusion that would come from making the largest changes in the most recent data.
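
To make the ‘which leg moves’ choice concrete, a trivial sketch (again my own illustration, with made-up numbers): given an estimated break, the offset is applied to the pre-break segment, so the most recent data remain exactly what the instrument reported.

```python
import numpy as np

def adjust_older_leg(series, break_idx, shift):
    """Remove a step of size `shift` (post minus pre) by moving the
    pre-break segment, leaving the most recent data untouched."""
    out = series.copy()
    out[:break_idx] += shift        # align the old leg with the new one
    return out

series = np.array([10.0] * 5 + [9.6] * 5)      # a -0.4 C break at index 5
print(adjust_older_leg(series, 5, -0.4))       # flat 9.6: recent data unchanged
```

Raising or lowering a leg changes absolute values at one end of the record either way; it does not manufacture a trend that the break estimate did not already contain.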

Claim: Each ocean estimate (changing inputs, Wigley’s cooling ‘1940s warm blip’, and removing cool satellite data) enhance ocean warming

Response: I have no comment on this claim since I have not studied sea temperature records.

Claim: Each version of the NOAA/NASA data sets warmer than the prior

Response: In this claim and the previous one, D’Aleo seems to imply that since the adjustments have increased the calculated warming trend, those adjustments must be erroneously or intentionally biased. But NASA does not use the homogenized GHCN (NOAA) data (v2.mean_adj); rather, it uses the relatively unprocessed ‘raw’ data (v2.mean). It is true that NOAA/NCDC performs quality control checks on GHCN v2.mean data, but this falls more into the ‘toss out the outliers’ category than homogenization adjustments.
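
For flavor, a ‘toss out the outliers’ screen might look something like the following sketch. This is my own illustration, not NCDC’s code; their actual QC involves a battery of checks.

```python
import numpy as np

def qc_outliers(monthly, n_sigma=4.0):
    """Flag monthly values far from that calendar month's climatology.

    `monthly` is a (years x 12) array. This is a crude screen for
    keying/transcription errors, not a homogeneity adjustment.
    """
    clim = np.nanmean(monthly, axis=0)
    sd = np.nanstd(monthly, axis=0)
    return np.abs(monthly - clim) > n_sigma * sd

rng = np.random.default_rng(2)
data = 10.0 + rng.normal(0, 1.0, (50, 12))   # synthetic station, 50 years
data[10, 3] = 45.0                           # plant a keyed-in error
print(np.argwhere(qc_outliers(data)))        # flags row 10, month 3
```

Screens like this remove isolated bad values; they do not shift the legs of a record the way homogenization does, which is why the two should not be conflated.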

——–

D’Aleo raises some valid issues but is unable or unwilling to drive those issues to a resolution. D’Aleo quotes Dr. Judith Curry:

“In my opinion, there needs to be a new independent effort to produce a global historical surface temperature dataset that is transparent and that includes expertise in statistics and computational science. … The public has lost confidence in the data sets. … Some efforts are underway in the blogosphere to examine the historical land surface data (e.g. such as GHCN), but even the GHCN data base has numerous inadequacies.”

Dr. Curry should be encouraged by several technical bloggers who have created their own surface-record global anomaly programs to examine these issues, including Zeke Hausfather, Nick Stokes, JeffId/RomanM, and Chad. These independent methods have confirmed the general trends presented by CRUTEM and GISTEMP. Indeed, they tend to show slightly more warming than the ‘official’ surface records.

As to the issue of a single source of surface-record data (GHCN, derived from CLIMAT reports), an effort is underway to bring the Global Surface Summary of the Day (GSOD, derived from SYNOP reports) data into the fray.

  1. carrot eater
    2010 May 25 at 5:47 am

    For the timeline, so far as the ‘mathematical fallacy’ goes, Dr. N-G also gave a conceptual-level explanation of how anomalies calculated using the RSM (http://tinyurl.com/yec3ads) and CAM (http://tinyurl.com/ykfy8aa) are not anything like the means of absolutes that McKitrick, EM Smith and d’Aleo have been showing. Most everybody else didn’t bother because it’s a pretty straightforward point, until Zeke, Tamino, etc. got quantitative with it.

    I’m hoping that GHCN v3.0 fills in some of the gaps, which will settle the matter for good.

  2. carrot eater
    2010 May 25 at 6:17 am

    Specifically what ‘inadequacies’ of the GHCN does Curry mean? It’s hard to know how to respond to that, without knowing what she had in mind. Is she talking about inhomogeneities in the underlying data due to station moves, or incomplete spatial coverage, or what? Vague talk is cheap, and it lets the d’Aleos of the world draw conclusions without doing any analysis to see what matters, and by how much. Is more context given by her?

  3. steven Mosher
    2010 May 25 at 8:32 am

    WRT Dr. Curry’s statement: in most of my discussions with her about these issues and the inadequacies of GHCN, those issues came down to the lack of a traceable, transparent audit trail.

    Just broadly speaking, there are these POSSIBLE areas of dispute. These are not my areas of concern, but rather possible areas.

    1. Bias/accuracy of the temperature data
    2. Bias accuracy of the metadata
    3. Computational methods
    4. Misleading presentation of results.

    I think #4 is not an issue; when people share their results in digital form, mistakes and subterfuge are easily handled.

    #3: Computational methods. There have been a variety of questions about HOW things were done in GISS and HADCRU. The simple solution to this complaint was to release the code. It’s not a total solution but it does move folks closer to closure. Now, with multiple solutions to the same problem, I think we can argue quite effectively that
    A. there is some small uncertainty due to choices of method.
    B. those uncertainties are small (Zeke’s chart showing the spread of trends is nice).
    C. the work of the climate scientists is in line with the “independent” solutions given by the blogosphere.
    D. no skeptic has provided their own approach for averaging temperature data over time.

    So #3 is not an issue. There is no credible challenge to the methods employed by the scientists; there may, as always, be improvements, but the code they have released produces answers that are not biased. That code has been examined in detail. It has been refactored, and the results it produces have been emulated by several parties working independently.

    #2: There are a few issues in metadata, as Ron has pointed out. Some issues, like the lack of historical information, can never be remedied. Other issues can be remedied with some engineering and some diligence about collecting the data (getting the right LAT/LON) and openness in sharing the data. Following Ron this past couple of months has been very informative. There is a lot of metadata that could be brought to the problem. The issue that metadata is required for is obviously UHI studies, and questions about station histories.

    #1: The data itself. Basically, you want to see the whole audit trail: where the data came from, contact info, archives of originals, what processing is done on the data (all the code). The whole thing.

    You can probably find the thread on Lucia’s where Dr. C and I discussed the rationale for an independent BODY to do the work. The goal was basically to turn the data collection, analysis, storage, etc. over to a body that wasn’t wrapped up in doing modelling, taking a bit from McIntyre’s suggestion that this be like a board-of-government-statistics job, like the CBO.

  4. 2010 May 25 at 8:47 am

    Currently that body is the “World Data Center(WDC) for Meteorology.”

    http://www.ncdc.noaa.gov/oa/wdc/index.php

    Which operates under the guidance of the “International Council for Science”

    http://www.icsu.org/index.php

  5. 2010 May 25 at 8:53 am
  6. steven Mosher
    2010 May 26 at 2:16 am

    problem is it’s NOAA.

    Let’s see…

    http://www.sab.noaa.gov/Working_Groups/standing/data/members.html

    Also, I have a 2-year-old FOIA into NOAA. I just got a call about it. They ‘lost it.’

    Dr. Peterson, I believe. Hmm, he also lost one of McIntyre’s for a year or so. Or maybe that was Karl.

    In any case, NOAA would be at the bottom of my list.

  7. 2010 May 26 at 5:32 am

    problem is it’s NOAA.

    So you think that NOAA is too close to the wrong Tribe? You don’t like some of the people on an advisory board?

    I take it that you understand that you are proposing to restructure a voluntary international effort. That the changes you wish to make are at the WMO level?

    Grandiose and delusional. The world does not revolve around McIntyre and Mosher.

  8. steven Mosher
    2010 May 26 at 11:12 am

    I think you have it wrong. Dr. Curry asked me for my opinion on what I thought would be the best path to regain trust. That’s my advice. I would say that current public opinion about climate change is diverging from mine. I think that’s a problem. More and more people are thinking that it’s not a problem. Their mistrust is primarily a mistrust of politicians but has expanded to scientists. I think the mistrust in the latter is LARGELY unfounded, but in certain cases specifically warranted. Generally speaking, when scientists speak out about policy, they are unfairly tarred with the same brush that people tar politicians with. Complaining about this unfairness is quaint.
    Simply put, if scientists want to regain trust they need to understand that trust is not a scientific problem. It’s a perception problem.

    Given that climate change is important, given that political will is modulated by popular support, given that popular support is waning, I made recommendations. What would I do? Those recommendations are informed by some history in crisis management and public relations. Those recommendations are also informed by experience in industry with how data collection should be separated from the modeling organizations.

    So the recommendations included: not having groups that collect and provide data be in the same reporting structure as the modelers. This arrangement gives rise to the perception of cooking the data to fit the models and makes reanalysis of data suspect. To restore trust I suggested an independent (not voluntary) body, a body that had no ties to the issues under question: for example, the WMO, NOAA, NASA, CRU. So, when NOAA reads that one of its advisors on data archiving and data access admitted to not being the best record keeper and asked people to delete mails, the best course of action was to relieve him of his duties. They didn’t. That does not send the right message. It’s a perception problem, not a science problem.

    In short, Dr. Curry asked me what I thought it would take. I offered my suggestions. I fully expected that the community would take the course it took: hold shallow investigations, attack the attackers, hunker down and hope things get better. Well, things have not gotten better, as recent polls in the UK show. So, I don’t think I was delusional. I think I gave advice that was in line with actions taken by many organizations when trust in them wanes. Was it impractical advice? I don’t know. It would appear to me that if we think that global action on climate is POSSIBLE, then it’s not delusional or grandiose to suggest that an independent body be set up to collect and audit the global temperature index. After all, we manage to collect CPI data without too much trouble or scandal or hint of political motive. But, clearly, if setting up an organization to do this is too hard, then acting globally on climate change would seem impossible.

    Anyways, as the scandal unfolded, before the book was written, I had long talks with Tom and some with Steve about what we thought the path forward was going to be. We had a chapter entitled ‘what needs to change’ and that got scrapped, primarily because it was my belief that the community would not and could not take the necessary steps. Looking through the mails and the PR advice they were getting, it was clear they were on a path that they could not get off. It was clear that they wouldn’t take early decisive action to address the trust issue, and clear that the trust would erode further as a consequence.
    That was December. I don’t think that prediction was too far off.

  9. 2010 May 26 at 11:26 am

    It’s a perception problem, not a science problem.

    You just posted that YOU would not trust NOAA and pointed to Scientist X, Y or Z’s participation on a NOAA advisory board as evidence supporting your mistrust. You cannot just hand-wave and claim that there is a ‘public trust problem’ while YOU are spreading the distrust. It’s called FUD: Fear, Uncertainty and Doubt. It’s a well-known stratagem in the IT industry, and you willingly engage in it and then offer solutions to the ‘problems and issues’ that you have helped create. And it pisses me off.

  10. steven Mosher
    2010 May 26 at 11:29 am

    I’ll just add this. Trust, at its core, is a belief that words and actions go together. When you ask me to act on climate change, I have to trust your words. When I lose trust in your words, more words will not and cannot restore trust. Practically impossible. It’s quite illogical to believe that words can restore trust. One thing might. Guess.

  11. steven Mosher
    2010 May 26 at 11:45 am

    Well, actually I didn’t say that I didn’t trust NOAA. I said that they would be at the bottom of my list. That means if you want to restore trust, you have to take some action. I will give you an example.

    My friend works selling pharma. One day, two of his co-workers asked a doctor on their advisory panel to recommend a drug they sold. They stepped over FDA rules. They were fined and lost their jobs. My friend was in the building the same day, as the badge reader confirmed. He was on his computer at the time the meeting took place, as the IT logs confirmed. Nevertheless, since he said hello to the doctor that morning, the company took the action of putting him on a 6-month probation. That was the right course of action. People who work in positions of trust understand this.

    WRT spreading doubt: telling NOAA that the right course of action to restore trust is to take actions to distance themselves from the source of doubt doesn’t spread doubt. Failing to take action deepens doubt. Throughout the mails you will see certain particular scientists in great fear about the skeptics selling doubt. Other scientists were not so fearful. They expressed the position that the uncertainties must be fully discussed. They were overruled. The course of events that followed created more doubt than any skeptic ever could. Holding half-hearted investigations creates more doubt than there should be. It’s the words, half-hearted actions, and lack of actions that drive the doubt. Not my words. My words merely point out the facts: there is a decline in trust, and the cause. You want to change that? Take the right action. You want to deepen that lack of trust? Attack the messenger. That’s just an observation.

  12. Judith Curry
    2010 May 26 at 12:03 pm

    Good discussion. In my opinion, there are two broad issues in the global surface temperature data record that haven’t been addressed by the recent blogospheric reanalyses:
    1) the NOAA data set has only a small fraction of the available land surface data (30,000+ stations with time series exceeding 10 years);
    2) there are major discrepancies among the existing ocean surface temperature data sets, especially prior to 1960.

  13. 2010 May 26 at 12:20 pm

    1) the NOAA data set has only a small fraction of the available land surface data (30,000+ stations with time series exceeding 10 years)

    GHCN has a small fraction.
    GSOD includes many more.

    Per a request from Zeke, I’ve just begun looking at SST data. While I’ll never resolve such discrepancies, we can certainly place a range of uncertainty around the final global anomalies due to differences in the SST data sets.

  14. 2010 May 26 at 3:11 pm

    Judy,

    Oddly enough, I’m working on just that project at the moment. I first needed to make sure I could replicate GISTemp (sans arctic interpolation):

    [graph]

    With that out of the way, I could see what happens if I keep the same stations used by GISTemp but change the ocean records:

    [graph]

    One of the big differences here, interestingly enough, is that the ocean data used by GISTemp (HadISST1/Reynolds) fills in missing past data via interpolation, while HadSST2 does not.

  15. 2010 May 26 at 3:30 pm

    Quick caveat: the divergence pre-1910 in that graph is likely spurious, due to very sparse real SST data. Nick’s version of the same dataset (using a least-squares method more robust to fragmentary data than my CAM method) doesn’t exhibit the same issue.
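
    A note on the jargon in Zeke’s caveat: the common anomaly method (CAM) baselines each series on a fixed reference period, which is exactly what fails with fragmentary early records. A minimal sketch of CAM, my own illustration rather than Zeke’s or Nick’s code:

    ```python
    import numpy as np

    def cam_anomaly(series, years, base=(1961, 1990), min_years=15):
        """Common anomaly method: baseline a series on one fixed period.

        Returns None when there is too little base-period data -- the
        situation with fragmentary early SST records, which either get
        dropped or get a poorly estimated baseline.
        """
        mask = (years >= base[0]) & (years <= base[1]) & ~np.isnan(series)
        if mask.sum() < min_years:
            return None
        return series - series[mask].mean()

    years = np.arange(1850, 2001)
    early_only = np.where(years < 1950, 0.01 * (years - 1850), np.nan)
    print(cam_anomaly(early_only, years))  # None: no 1961-90 data to baseline on
    ```

    As I understand the least-squares alternative, it estimates station offsets and the common signal jointly, so a fragment need not overlap any one fixed window; that is why it copes better with sparse early data.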

  16. carrot eater
    2010 May 26 at 3:46 pm

    Mosher, for whatever it’s worth, there are a good many people out there who think the CPI numbers are cooked, in order to somehow hide inflation or keep down COLA for various people. So maybe that isn’t the best example.

  17. carrot eater
    2010 May 26 at 3:59 pm

    Dr. Curry, thank you for expanding it out a bit.

    On older SST: I don’t keep up on that so much, but I get the idea it’s a work in progress. I doubt we’ve heard the last word on the transition from buckets to engine intake, and so on. Which is fine – if something is honestly a work in progress, so be it. Reconstructing SSTs using such measurements was always going to be a little difficult. But I rather doubt it’s anything about old SSTs that has caused the public to lose confidence, or whatever the wording was.

    As for other stations: the question has to be asked, so what? Suppose you added 30,000 stations, but somebody told you they had discovered 50,000 more. At what point can you say that adding more will just be adding redundant information? Sure, the uncertainty bounds will narrow, especially if the new stations are in previously undersampled areas. But is that really what we’re talking about?

    Anyway, to massively increase the number of stations that get used for real-time climate information, they’ll have to get off of the CLIMAT format. But I get the idea that CLIMATs by nature are checked more thoroughly for QC than METAR or SYNOP, so you lose something there.

  18. Judith Curry
    2010 May 26 at 6:06 pm

    Zeke, it’s worse than that. Go back to ICOADS (original data), and then compare with what you get from the analysis products. We’ve done this exercise, and it’s very worrisome (major disagreements among the three datasets in locations where there are actual observations).

  19. steven Mosher
    2010 May 26 at 10:14 pm

    Yes, carrot, I’m aware of that, and aware of people who don’t believe we landed on the moon. That doesn’t relieve me of my obligation to make good-faith efforts to convince the convincible.

  20. steven Mosher
    2010 May 26 at 10:17 pm

    Yes, there is a lot more data in the archives that can be brought to bear on the good-faith and bad-faith concerns that people have. It’s boring, mind-numbing work. I just started last night on SST data. Not a laptop do-it-yourself job. Arrg.

  21. steven Mosher
    2010 May 27 at 1:51 am

    CE, WRT more stations: I find two arguments particularly irksome. On the skeptic side, the argument that there never will be enough stations: the ‘weather here is fine’ argument, or liza’s argument over at Lucia’s. If you went to 30K they would say that’s not enough, even if the answer didn’t change. 50K, same argument. I don’t think these types are convincible. But as you address their concern, the convincible folks do see these stubborn folks as the ones being less than rational. We show the case at 100 stations, 500, 1000, 5000, 10K, 20K, 30K; pretty soon people see that the answer doesn’t change. Sure, you have holdouts. You ALWAYS have people who think we didn’t land on the moon. The other argument that bothers me is the one that we should keep suspect stations, even though we know the planet is oversampled in places. Thankfully CRN and USHCN-M will address some of those concerns, but that takes time.

  22. carrot eater
    2010 May 27 at 4:00 am

    Mosh, with the number of stations in GHCN 2.0 + USHCN, you can already do subsampling and get the same results. We’ve been seeing this.

    Let’s just get more observations in that bare patch in southern Africa, and more in the Arctic. And yes, I wish other countries would consider setting up CRN-type networks. In 20 years, they’ll be glad they did, even if they do nothing but confirm the other measurements.

  23. Judith Curry
    2010 May 27 at 5:16 am

    Carrot eater, at some point (which is not clearly understood at present), adding more land stations will not appreciably change estimates of global temperature. After all, land covers less than 30% of the earth’s surface (the ocean is the bigger issue). But more stations is very important for determining accurate regional temperatures.

  24. carrot eater
    2010 May 27 at 6:08 am

    Dr. Curry:

    In spots that are undersampled, and I identified a couple regions, yes, having more stations would help with regional temperature estimates. And to some extent, it would reduce the uncertainty limits on the global estimates, and nudge the global estimate around a bit here or there.

    But in terms of the global: you say we don’t yet have a clear understanding of when we’ll have sufficient sampling. I don’t know what counts in your book as a clear understanding, but I’d say there are things we can do – take subsamples of the existing set and see what happens, use model fields to estimate the degrees of freedom, etc. There’s a bunch of papers and now blogs that go into these issues, as surely you are aware.

    What I can say is that we surely don’t need 30,000 stations, but if you leave a blog comment mentioning that number without context, people might think that we do need that many.

    Again, hopefully GHCN v3.0 will add some data. We shall see what happens.
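
    The subsampling exercise described above is easy to sketch. This is my own toy version, using cosine-latitude weights on synthetic stations rather than a real gridded analysis; the fitted global trend stabilizes long before all stations are used.

    ```python
    import numpy as np

    def global_trend(anoms, lats, years):
        """Cosine-latitude weighted global mean anomaly, then its OLS trend."""
        w = np.cos(np.radians(lats))
        gmean = (anoms * w[:, None]).sum(axis=0) / w.sum()
        return np.polyfit(years, gmean, 1)[0]

    rng = np.random.default_rng(3)
    years = np.arange(1950, 2001)
    n = 2000                                        # synthetic station count
    lats = rng.uniform(-60, 75, n)
    signal = 0.015 * (years - years[0])             # common 0.015 C/yr trend
    anoms = signal + rng.normal(0, 0.5, (n, years.size))  # plus station noise

    for k in (50, 200, 1000, 2000):
        idx = rng.choice(n, size=k, replace=False)
        print("%5d stations -> %.4f C/yr"
              % (k, global_trend(anoms[idx], lats[idx], years)))
    ```

    Real stations are spatially correlated, so the real-world sample sizes needed are even smaller than this independent-noise toy suggests; compare Nick Stokes’ “Just 60 stations” result linked above.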

  25. 2010 May 27 at 6:50 am

    We’ve done this exercise, and it’s very worrisome …

    Is there someplace we can look over your very worrisome results?

  26. 2010 May 27 at 7:28 am

    Carrot eater, at some point (which is not clearly understood at present), adding more land stations will not appreciably change estimates of global temperature.

    This statement puzzles me some. Is the problem the number of stations? Or is it more along the lines of spatial distribution? Would adding stations to the CONUS improve the anomaly graph – or is CONUS effectively saturated? At what point did it become effectively saturated? I think there are knowable answers to these questions.

  27. 2010 May 27 at 8:41 am

    And re # of stations, see Stokes “60 stations”

    http://moyhu.blogspot.com/2010/05/just-60-stations.html

  28. Judith Curry
    2010 May 27 at 12:35 pm

    CONUS is probably saturated, as is western Europe, but it is the rest of the global land areas that are at issue.

  29. Judith Curry
    2010 May 27 at 12:40 pm

    carrot eater, ideally a larger number of stations would be used to assess sampling and uncertainty issues. Again, in terms of global mean temperature, I think adding stations or whatever isn’t going to make much difference.

  30. pough
    2010 June 1 at 11:58 am

    steven Mosher :
    Well, actually I didn’t say that I didn’t trust NOAA. I said that they would be at the bottom of my list.

    Ha! Brilliant! And I’m not saying you’re ugly, just that you’d be on the bottom of my list sorted by looks.

  31. steven Mosher
    2010 June 2 at 11:51 pm

    Pough, you don’t get it. The issue is restoring TRUST. That requires me to identify those organizations that I believe have lost credibility, whether fairly or not. For example, I was asked to write a criticism of NOAA. My response was: 1. NO, I write about what I want to. 2. NO, NOAA had done nothing wrong in my estimation. Organizations that would be off the list include: CRU, GISS, NOAA, WMO (probably). It’s really not that hard to understand. You can pretend that it is.

  32. pough
    2010 June 4 at 2:20 pm

    It’s pretty obvious that trust is an issue for you. It’s also an issue for me. I can’t say that I trust you.

    You see, every single issue brought up in “Climategate” initially sounded awful, until I went and read the emails for myself and researched what they referred to. Then they sounded like people were either purposefully blowing them out of proportion or else blinded by ideology/stupidity. Either way, anyone who’s made a big deal out of “Climategate” has lost my trust. And guess who’s written a book about it?

  33. Toby Joyce
    2010 August 8 at 11:16 am

    I agree with pough. We can engage in interminable discussions about adding/dropping stations, Climategate, and hockey sticks, but it does not change the science of global warming; it only adds small percentages to the confidence levels. You can’t help but feel prolonged “debate” and delay is in the interests of somebody.
