
GHCN Topo: Rocky Mountain High

2010 May 2

Introduction

Among the metadata in the GHCN station inventory file is a topographical landform classification of four types: flat (FL), hilly (HI), mountain valley (MV), and mountain top (MT).

Topography. ONC make detailed orography available to pilots. We used this information to classify the topography around the station as flat, hilly, or mountainous. Additionally we differentiated between mountain valley stations and the few mountaintop stations that can provide unique insights into the climate of their regions.

Peterson and Vose, 1997

European Joint Research Centre EU Soils Landform Classification

Topographical data sets exist for parameters such as elevation, slope, and surface roughness. However, there do not appear to be many publicly available efforts to evaluate those parameters into a ‘landform classification’ such as mountains, hills, and plains, although that could simply be due to my unfamiliarity with the field. Two that I uncovered were Meybeck et al 2001 and Iwahashi and Pike 2007, both included in the European Joint Research Centre EU Soils project. Both approaches involved automated methods to transform DEM data (including roughness) into a preset list of landform classifications. Unfortunately, I could not find either the Meybeck or the Iwahashi classification as a downloadable global data set. Unless I was willing to recreate their methods from the descriptions in the papers, I was stuck.

But I decided to take another route. Image processing.

Image Processing: Meybeck

Both the Iwahashi and Meybeck classifications are downloadable as global JPEG images. The resolution is roughly 1/10th of a degree. The EUSOILS web site does offer a much higher resolution image through its ‘terrain viewer’ – but not as a single downloadable image.

Meybeck 2001

I used GIMP to open the image, crop the white space border, convert to greyscale, and save to a new tiff file. For my initial cut, I eyeballed the western longitude line, guesstimated the latitude range, and assumed a pixel resolution of 1/10th of a degree. With this information I was able to build a ‘tfw’ file and use gdal_translate to create the GeoTiff file to feed my data reader. I then located four islands on the greyscale image to use for calibration: Jarvis Island (W), Lord Howe Island (E), and unlabeled islands off the coasts of Antarctica (S) and Svalbard (N). With these, I was able to adjust the metatags for the GeoTiff conversion (only the values in the right column of the table below are in the actual .tfw file):

Meybeck 2001 B/W

After creating and calibrating the GeoTiff file geometry, I had to calibrate the values in the file. First, I took a copy of the index image from the ‘terrain viewer’ site. I then converted it to grey-scale and used the GIMP color picker to read off the value for each landform classification.

x-scale                  0.100655
rotation around y-axis   0.0
rotation around x-axis   0.0
y-scale                  0.100564
x-ref point              -164.25
y-ref point              85.00
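
A ‘tfw’ world file is just those six numbers, one per line, in the order shown above; gdal_translate picks it up automatically when it sits next to a TIFF with the same base name and bakes the georeferencing into the output GeoTiff. Below is a minimal sketch of writing the calibrated world file. The file name is hypothetical, and note that conventional world files store the y pixel size as a negative number for north-up images; the value here simply follows the table above.

import java.io.FileWriter;
import java.io.IOException;

// Minimal sketch: write the calibrated world file for the cropped Meybeck image.
// "meybeck_bw.tfw" is a hypothetical name, not the one used in this post.
public class WriteWorldFile {
    public static void main(String[] args) throws IOException {
        double[] tfw = {
            0.100655,   // x-scale (degrees per pixel)
            0.0,        // rotation around the y-axis
            0.0,        // rotation around the x-axis
            0.100564,   // y-scale (normally negative for north-up images)
            -164.25,    // x-ref point (longitude of the upper-left reference)
            85.00       // y-ref point (latitude of the upper-left reference)
        };
        FileWriter out = new FileWriter("meybeck_bw.tfw");
        for (double v : tfw) {
            out.write(v + "\n");
        }
        out.close();
    }
}
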
Meybeck Index (Color) / Meybeck Index (B/W)

Index (B/W)   Meybeck landform class
226           Plains
228           Mid-Altitude Plains
248           High-Altitude Plains
165           Lowlands
 95           Rugged Lowlands
146           Very Low Plateaus
 92           Mid-Altitude Plateaus
 28           High-Altitude Plateaus
 44           Very High-Altitude Plateaus
 57           Hills
108           Low-Altitude Mountains
171           Mid-Altitude Mountains
211           High-Altitude Mountains
254           Very High-Altitude Mountains
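
JPEG compression and the greyscale conversion can shift individual pixels slightly, so rather than matching the legend values exactly, a nearest-value lookup is one robust option. The helper below is my own sketch, not part of the original code; the index values are simply the calibrated legend above.

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: map a sampled greyscale value to the Meybeck landform class whose
// calibrated legend value is closest. The nearest-match approach is my own
// addition to absorb small shifts introduced by JPEG compression.
public class MeybeckLegend {
    private static final Map<Integer, String> INDEX = new LinkedHashMap<Integer, String>();
    static {
        INDEX.put(226, "Plains");
        INDEX.put(228, "Mid-Altitude Plains");
        INDEX.put(248, "High-Altitude Plains");
        INDEX.put(165, "Lowlands");
        INDEX.put(95,  "Rugged Lowlands");
        INDEX.put(146, "Very Low Plateaus");
        INDEX.put(92,  "Mid-Altitude Plateaus");
        INDEX.put(28,  "High-Altitude Plateaus");
        INDEX.put(44,  "Very High-Altitude Plateaus");
        INDEX.put(57,  "Hills");
        INDEX.put(108, "Low-Altitude Mountains");
        INDEX.put(171, "Mid-Altitude Mountains");
        INDEX.put(211, "High-Altitude Mountains");
        INDEX.put(254, "Very High-Altitude Mountains");
    }

    public static String nearestClass(int grey) {
        String best = "unknown";
        int bestDist = Integer.MAX_VALUE;
        for (Map.Entry<Integer, String> e : INDEX.entrySet()) {
            int dist = Math.abs(grey - e.getKey());
            if (dist < bestDist) {
                bestDist = dist;
                best = e.getValue();
            }
        }
        return best;
    }
}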

After running the GHCN station inventory through the DataReader, I then ‘binned’ the results and looked for any values that strongly correlated with a GHCN topography type. After these adjustments the classification code looked as follows:

int topo = Integer.valueOf(topoReader.getData(tmp.getLat(),tmp.getLong()));

if (topo > 0 && topo < 2) {
// from ghcn analysis
	tmp.setTopography("MV");
} else if (topo > 1 && topo < 5) {
// from ghcn analysis
	tmp.setTopography("HI");
} else if (topo > 4 && topo < 7) {
// from ghcn analysis
	tmp.setTopography("FL");
} else if (topo > 6 && topo < 13) {
// from ghcn analysis
	tmp.setTopography("MV");
} else if (topo > 12 && topo < 100) {
// high alt, plat, vh alt plat, hills, mid alt plat, rugged lowlands
	tmp.setTopography("HI");
} else if (topo > 99 && topo < 121) {
// low alt mountains
	tmp.setTopography("MV");
} else if (topo > 120 && topo < 140) {
// very low plat,
	tmp.setTopography("FL");
} else if (topo > 141 && topo < 170) {
// lowlands
	tmp.setTopography("HI");
} else if (topo > 169 && topo < 185) {
 // mid alt mnt
	tmp.setTopography("MV");
} else if (topo > 184 && topo < 186) {
 // from ghcn analysis
	tmp.setTopography("HI");
} else if (topo > 186 && topo < 188) {
// from ghcn analysis
	tmp.setTopography("FL");
} else if (topo > 187 && topo < 218) {
// hi alt mnt
	tmp.setTopography("MV");
} else if (topo > 217 && topo < 251) {
// plains, mid alt plains, hi alt plains
	tmp.setTopography("FL");
} else if (topo > 250 && topo < 255) {
// vh alt mnt
	tmp.setTopography("MV");
} else {
	tmp.setTopography("xx");
}
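
For reference, the ‘binning’ pass mentioned above amounts to tallying, for each greyscale value, how often each GHCN topography code occurs at the stations that land on that value. Here is a rough sketch, assuming the same topoReader lookup; the Station type, the stations collection, and the getTopography() accessor are my guesses at the surrounding code, not the original implementation.

// Sketch of the binning pass: count GHCN topography codes per greyscale value.
// 'stations', 'Station', and getTopography() are assumptions, not original code.
Map<Integer, Map<String, Integer>> bins = new TreeMap<Integer, Map<String, Integer>>();
for (Station tmp : stations) {
	int topo = Integer.valueOf(topoReader.getData(tmp.getLat(), tmp.getLong()));
	if (!bins.containsKey(topo)) {
		bins.put(topo, new HashMap<String, Integer>());
	}
	Map<String, Integer> counts = bins.get(topo);
	String ghcnTopo = tmp.getTopography();   // the code from the GHCN inventory
	counts.put(ghcnTopo, counts.containsKey(ghcnTopo) ? counts.get(ghcnTopo) + 1 : 1);
}
// Greyscale values where one GHCN code clearly dominates become the ranges above.
for (Map.Entry<Integer, Map<String, Integer>> e : bins.entrySet()) {
	System.out.println(e.getKey() + " -> " + e.getValue());
}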

Results

GHCN contains both “Mountain Top” (MT) and “Mountain Valley” (MV) classifications. However, there are only 61 of the MT types and for the purposes of this exercise, I have converted those to MV types.

TOPO GHCN MEYBECK MATCH
FL   2779  1326   48%
HI   3006  1705   57%
MV   1495  1049   70%
------------------------
     7280  4080   56%
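
The MATCH column is simply the matched count divided by the GHCN count for each class. A quick arithmetic check of the figures, using the numbers copied from the table above:

// Quick check of the table above: per-class and overall match rates.
int[] ghcn    = {2779, 3006, 1495};   // FL, HI, MV station counts from the GHCN inventory
int[] matched = {1326, 1705, 1049};   // stations where the Meybeck-derived code agrees
int totalGhcn = 0, totalMatched = 0;
for (int i = 0; i < ghcn.length; i++) {
	System.out.printf("%.0f%%%n", 100.0 * matched[i] / ghcn[i]);   // 48%, 57%, 70%
	totalGhcn += ghcn[i];
	totalMatched += matched[i];
}
System.out.printf("overall %.0f%%%n", 100.0 * totalMatched / totalGhcn);   // 56%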

The stacked comparison file is available here: v2.topo.compare.inv.txt

Discussion

I lucked out that the Meybeck image file uses a ‘plate carrée’ projection, so I avoided having to transform either the output tiff file or the latitude/longitude during the lookup.
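
Because plate carrée maps longitude and latitude linearly to pixel column and row, the lookup needs nothing more than the world-file values from the calibration step. The sketch below shows the idea; the class and field names are mine, not the original DataReader.

// Sketch of why no reprojection is needed with plate carrée: the pixel index
// is a straight linear function of lon/lat using the calibrated world-file
// values above. This mirrors what a getData(lat, lon) lookup has to do; the
// class and field names are my own, not the original DataReader.
public class PlateCarreeLookup {
    private static final double X_SCALE = 0.100655, Y_SCALE = 0.100564;
    private static final double X_REF = -164.25, Y_REF = 85.00;

    private final int[][] grey;   // 8-bit raster, [row][col], loaded elsewhere

    public PlateCarreeLookup(int[][] grey) { this.grey = grey; }

    public int valueAt(double lat, double lon) {
        int col = (int) Math.round((lon - X_REF) / X_SCALE);
        int row = (int) Math.round((Y_REF - lat) / Y_SCALE);   // rows count down from 85N
        return grey[row][col];
    }
}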

The Meybeck image unfortunately contains national boundaries, drawn as narrow, very dark lines (values close to 0). I may be able to remove them by ‘smearing’ the neighboring pixels to fill in the gaps.
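
One way to do that ‘smear’: treat near-black pixels as boundary artifacts and replace each with the most common non-boundary value in its 3x3 neighbourhood. A sketch follows; the threshold of 10 is an arbitrary choice of mine, not a value from this analysis. One pass handles a one-pixel-wide line; a wider border would need a second pass.

// Sketch: fill in the national-boundary lines by replacing pixels darker than
// 'threshold' with the most common non-boundary value among their neighbours.
// The threshold is an arbitrary choice, not a value taken from this post.
static int[][] smearBoundaries(int[][] grey, int threshold) {
	int rows = grey.length, cols = grey[0].length;
	int[][] out = new int[rows][cols];
	for (int r = 0; r < rows; r++) {
		for (int c = 0; c < cols; c++) {
			out[r][c] = grey[r][c];
			if (grey[r][c] >= threshold) continue;   // not a boundary pixel, keep it
			int[] counts = new int[256];
			for (int dr = -1; dr <= 1; dr++) {
				for (int dc = -1; dc <= 1; dc++) {
					int nr = r + dr, nc = c + dc;
					if (nr < 0 || nc < 0 || nr >= rows || nc >= cols) continue;
					if (grey[nr][nc] < threshold) continue;   // skip other boundary pixels
					counts[grey[nr][nc]]++;
				}
			}
			int best = grey[r][c], bestCount = 0;   // fall back to the original value
			for (int v = 0; v < 256; v++) {
				if (counts[v] > bestCount) {
					bestCount = counts[v];
					best = v;
				}
			}
			out[r][c] = best;
		}
	}
	return out;
}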

Like much of the metadata in the GHCN data file, the topography landforms (derived from manually interpreting ONC maps) may not be reproduced well by satellite-based GIS processing. However, with only 1/10 deg resolution, I’m not in a position to make strong claims one way or the other.

The match rate is not particularly good, but given the new (to me) image processing methods required to get there, I’m satisfied for now.

The Meybeck image file is unfortunately missing a 15 degree swath on the west and east sides of the projection.

I hope that the much higher resolution data used in the ‘terrain viewer’ is made available at some point.

While this landform classification is rather simplistic, the subject of topography, like UHI, is going to become more important in analyzing station histories and modeled projections. See link: Science Daily: Topography of Mountains Could Complicate Rates of Global Warming

References

European Commission – Joint Research Centre
Institute for Environment and Sustainability
http://eusoils.jrc.ec.europa.eu/projects/landform/

Meybeck, M., P. Green and C. J. Vorosmarty (2001). “A New Typology for Mountains and Other Relief Classes: An Application to Global Continental Water Resources and Population Distribution.” Mountain Research and Development 21: 34-45.

Iwahashi, J. and R. J. Pike (2007). “Automated classifications of topography from DEMs by an unsupervised nested-means algorithm and a three-part geometric signature.” Geomorphology 86(3-4): 409-440.

  1. steven Mosher
    2010 May 3 at 11:46 am

    Nice work..

    I’m wondering if I should go to the trouble of getting this stuff into R..

    or maybe I can persuade you to post an open version of your metadata research.

    1. minimalistic

    A. a v2.inventory.inv in an easy-to-read format (like CSV or some fixed-width format)

    For example take the GISS version of the inventory and add your fields:

    GHCNid, name, lat, lon, ele, etc etc… Newfield1, Newfield2, Newfield3
    where the new fields are things like GRUMP, GPW, the topo stuff, other nightlights.

    Nick Stokes and others should be able to adapt to that.

    2. The above with the source code to generate it. There are some interesting GIS packages for R that I haven’t dug into (GRASS, I think) which would be cool to play around with.

    BTW, if you know Python, clearclimatecode has a cool Google Earth piece of code you could play with.

  2. 2010 May 3 at 9:26 pm

    RE #1:

    What I’m working towards right now are two new files: my.station.inv and my.xxx

    my.station.inv will take any id, name, lat, lon and create a new v2.inv type file.
    my.xxx will be a time-based file: id, year, datum1 (datum2, …)

    For instance, pop density and DMSP/OLS brightness have time elements.
    On the other hand, the file used in v2.inv is DMSP/RC, and only one year is available; it goes into the station.inv.

    I’m pretty close now: revisit R/S/U, see if I can pull nearby population and airports, and revisit DMSP/RC.

    Then I will be ready to “publish” some data files. Not that most of this data is used anywhere; just R/S/U and the associated pop (now replaced by brightness) in GISTEMP. I wanted to preserve the format for those who have built data readers around the v2.temperature.inv. That, and just walking through the file to learn.

    Your idea is a good one; I’ll build a simpler CSV version with headers.

    RE #2:

    I became aware of the R GIS libraries about halfway through this. Now that I understand GeoTiffs better, I need to see what the R libs have to offer.

    There are only about 4 Java classes I’ve been using in addition to the java-netcdf libraries. I’ll post the code.

    Don’t know Python – yet. But once you learn to think in pseudo-code, most of the work of learning a new language is done.

  3. steven Mosher
    2010 May 3 at 11:42 pm

    Peter Oneill passed along some R for generating KML files, so I can pass that your way if you like.
    Dead easy since he’s done the tedious part.

  4. 2010 May 4 at 5:42 am

    What does it do?

  5. steven Mosher
    2010 May 4 at 8:06 am

    Takes an inventory file and outputs KML. Basically you get a file that will put pushpins on the station locations. Load that into Google Earth and then you can visit all the stations, make a ‘tour’ of the site; theoretically one could attach icons for other metadata features. I need to research a bit more, but I think with the right Google Earth guy on the project we might be able to overlay other metadata.

    For example: I create a subset of the total inventory (say, rural by GRUMP), run that list through Peter’s program, and get a KML file.

    I drop the KML file into Google Earth. Then I start my tour, station by station, flying between each. When I get to a station, I can toggle the nightlights “layer”. Currently NASA supplies a nightlights layer in Google Earth, but it is very low res.

    So I’m thinking that your other metadata can be handled the same way. I don’t know how GE does layers, but what we want is a layer for the “image based” metadata that has variable resolution: high res in the target area and “transparent” elsewhere, if that makes sense.
