October 2020

Closed Contour Hillshading

Back in the old days of using paper USGS topographic maps, the only indication of topography was the contour lines themselves. It could be difficult to distinguish between a gully and a ridge. Digital representations of topographic maps have improved the situation by adding hillshading, a visual cue that allows your brain to think about the topography in 3-D. In this post I’m going to present both my thought process and the technical details of how Closed Contour hillshading is done.

Let’s start with an example showing the USGS topo around Mt Morgan (S) unshaded (from State of California):

Unshaded USGS Topo

Now here’s the same spot shaded (from ArcGIS map viewer):

Shaded USGS Topo

It’s much easier to get a sense of the topography at a glance with the shaded version. This led to one of my first cartographic decisions: how was I going to treat hillshading? I started by looking around the web for different treatments and wasn’t very happy with any of them. The shading shown above is pretty decent except for two things: the overall look of the map is dull and muted, and there is no fine detail in the shading. It looks as if the shading were done with much lower resolution data than the contours would suggest.

Next, I turned to Google’s Terrain maps for the area around Mount Morgan (S) and Mount Tom:

Google Terrain Map

Note how metallic and unnatural this looks. The contrast makes the mountains pop way too much for my taste.

Next up, I took a look at TopOSM, a remarkable effort by Lars Ahlzen to provide topographic maps for the entire country (far more ambitious than me!) based on OpenStreetMap data:



I definitely appreciate the ability to turn off all map decorations besides the relief but, again, we get the metallic feel. With its even greater dynamic range it actually seems a little worse than the Google Terrain maps.

The real problem with this metallic-looking shading is that it is no longer just a subtle visual cue; it has become a first-class element of the map. I’m after more of a light backdrop where I can overlay different layers of data.

One last example I found on the web was from ACME Mapper:

ACME Mapper

This one is pretty similar to the first shaded example above, and not quite as dull, although it shares the problem of not having much fine detail in the shading. What’s strange about this one, though, is that the ridges actually cast shadows onto the plains below. It has the same effect as traditional hillshading in that you can more easily visualize the terrain in 3-D, but I found that it both confuses the picture and needlessly darkens the valley floors.

Creating the Hillshade

After my survey of what was out there I had a pretty clear vision in my head of what I was looking for. The first step was to get the actual data. I’m using the 1/3 arc-second (roughly 10 m) USGS Seamless National Elevation Dataset (NED), which the state of California has conveniently cut into one-degree square blocks. I used wget to grab the files:

wget -r -np
... plus a bunch more

At about 500MB per square degree this took a few hours. Next I used GDAL to combine the files and generate the hillshading background image. The first step is to use GDAL to create a VRT that will make all of the separate one degree block files appear as one large dataset:

gdalbuildvrt dem.vrt grdn38w119_13/w001001.adf {more files}

I need the final output to be in the Closed Contour projection so I used GDAL to warp the data:

gdalwarp -t_srs "+proj=tmerc +lon_0=120w +k=0.9996 +ellps=WGS84" \
        -r cubic -tr 10 10 dem.vrt dem.tif

At this point I have a single TIFF file with elevations for the entire area of interest. The file is 36515 × 55915 pixels (that’s 2000+ megapixels!) of 32-bit floats representing elevation, weighing in at around 8 GB.
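A quick sanity check on those numbers (a sketch; only the raster dimensions are taken from the text):

```python
# Verify the quoted raster size: 36515 x 55915 pixels of 32-bit floats.
width, height = 36515, 55915
pixels = width * height
size_bytes = pixels * 4  # 4 bytes per 32-bit float

print(pixels // 10**6)     # megapixels -- "2000+ megapixels"
print(size_bytes / 10**9)  # gigabytes  -- "around 8GB"
```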

Next I used GDAL to create a visual hillshading representation:

gdaldem hillshade -s 1 -z 0.7 dem.tif shade.tif

I experimented with a variety of ‘-s’ and ‘-z’ arguments but couldn’t really get what I was looking for.  Here’s the output from the above:

Direct GDAL Output

Uh-oh, we’re back to that metallic feel. I was a bit stumped at this point. I knew that I could take an image like this into Photoshop and tweak the curves to get the effect I was looking for, but I wasn’t too excited about asking Photoshop to open my 8 GB image (I was on a 32-bit machine at the time). Besides, I wanted this to be reproducible in a script.

I poked around with GDAL a bit more and noted that the output from the hillshade program was an 8-bit grayscale image. If I could just remap those bytes into my own palette then I’d be set. I thought this would be pretty straightforward, but after a bunch of googling I still didn’t have a solution. Further digging revealed that GDAL had all the answers. First, I created a VRT from the hillshade output:

gdalbuildvrt remap.vrt shade.tif

Open up remap.vrt in an editor and you’ll see this:

   <VRTRasterBand dataType="Byte" band="1">

It turns out you can manually add a custom color palette inside that band, like so:

      <ColorInterp>Palette</ColorInterp>
      <ColorTable>
         <Entry c1="128" c2="128" c3="128" c4="255"/>
         ... 254 more color entries
      </ColorTable>

I wrote a small script (which I have unfortunately misplaced, although just ask if you’d like the generated color ramp that I used) that mapped the gray values in such a way that the darkest became 128 gray and rapidly curved up to white.  The last step was to convert the VRT to an actual TIFF, which was as simple as:

gdalwarp remap.vrt final_hillshade.tif
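The small remapping script I mentioned above is lost, but it might have looked something like this sketch — the square-root curve is an assumption; all I know for sure is that the darkest value became 128 gray and the ramp curved rapidly up to white:

```python
# Sketch of the lost palette-ramp script: map each 8-bit hillshade gray
# so that 0 (darkest) becomes 128 and the ramp curves quickly toward
# white. The 0.5 exponent is a guess at the original curve.
def ramp(gray):
    return int(round(128 + 127 * (gray / 255.0) ** 0.5))

# Emit the 256 <Entry> lines that go inside the VRT's <ColorTable>.
entries = [
    '<Entry c1="%d" c2="%d" c3="%d" c4="255"/>' % ((ramp(g),) * 3)
    for g in range(256)
]
print(entries[0])   # darkest input maps to 128 gray
print(entries[-1])  # lightest input maps to pure white
```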

This is what I ended up with:

Remapped GDAL Output

Ah, this is much better. Still a bit metallic when viewed in isolation, but once you start overlaying data it recedes into the backdrop and becomes just that 3-D topographic visual cue I was looking for. Here’s what this area of the map looks like in the latest version of the map:

Closed Contour Map

Enjoy the Map

There are so many things I’d like to do with the Closed Contour SPS map.  Cleaning up data problems, better labeling of important landmarks for climbers, features for overlaying personal data, making the map prettier — I really could work on it forever.

But Closed Contour is just a hobby for me and it’s unclear when I’ll ever have time to do these things. Still, I don’t want to let my desire for a cleaner, prettier, more featureful map stand in the way of releasing what I think is a pretty cool map.

So I’ve decided that it’s time I start telling people about the Closed Contour SPS map. Hopefully visitors will find it to be both useful and aesthetically pleasing. I’ve certainly enjoyed making it and look forward to continuing work on it in the future.

v2 SPS Tiles Posted

I’ve posted new tiles with the following changes:

  • Whiter glaciers/permanent snow with blue contour lines. I talked about this in a previous post.
  • Changed forest color depending on density (only in Yosemite and Sequoia/King’s Canyon NP so far).  I also mentioned this in a previous post.
  • Non-SPS peak names.  Discussed earlier as well.
  • Pass names.
  • Trail names, mostly in the NPs.
  • Removed many bogus ‘lakes’ which were actually mischaracterized permanent snow.
  • Added styling for scree, talus, and meadow/marshes.
  • Changed font for SPS peaks to slightly larger, darker, and italic to set them apart from non-SPS peaks.
  • Not a tile change, but I added UTM coordinate display in lower right.

Here’s an image showing most of these new features:

v2 Sample

Note that I screwed up the land cover in Yosemite National Park by accidentally styling it with two sets of landcover data.  Obvious side effects of this are that the whole area is generally too green and the permanent snow sections look funny.  It will take me a few days to fix this.

Enjoy the new tiles!

Fixing GNIS Data

I’ve been working quite a bit on adding non-SPS peaks to the map.  It’s been a mix of learning some new skills and repetitive, tedious work.

This all started with California GNIS data.  After filtering down to just the summits, I converted it to a shapefile and threw it on the map expecting it to work out pretty well.  Not so much.  Here’s a good example of what I got:

Bad GNIS Locations

As you can see the GNIS summit locations of Columbine Peak and Isosceles Peak are nowhere near where they should be.  While this example is particularly bad, I’d say that almost all of the summits were unacceptably displaced from their true locations.

I knew early on that I’d want the ability to manually modify features on the map so this became the impetus to make that happen. First, I needed a back-end to store my modifiable point data.  Borrowing some ideas from OpenStreetMap’s schema, I went with a simple two-table design that stores points and key/value data.  I’d really like to have something like OSM’s revisioning system but at this point that’s overly complex.  Here’s my simple schema:

mysql> desc points;
+-----------+---------+------+-----+---------+----------------+
| Field     | Type    | Null | Key | Default | Extra          |
+-----------+---------+------+-----+---------+----------------+
| point_id  | int(11) | NO   | PRI | NULL    | auto_increment |
| latitude  | double  | NO   |     | NULL    |                |
| longitude | double  | NO   |     | NULL    |                |
+-----------+---------+------+-----+---------+----------------+

mysql> desc point_tags;
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| point_id | int(11)      | NO   | PRI | 0       |       |
| tag      | varchar(255) | NO   | PRI | NULL    |       |
| value    | varchar(255) | YES  |     | NULL    |       |
+----------+--------------+------+-----+---------+-------+

Next I coded up some CRUD functions to bridge the gap between Python and MySQL.  This allows me to treat objects in my point database as simple Python dictionaries.  From here it was pretty trivial to import whatever GNIS data I wanted and stuff it in my database.
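The CRUD layer isn’t shown in the post, but it might look roughly like this sketch. The original used MySQL; sqlite3 stands in here so the example is self-contained, and the sample coordinates are purely illustrative:

```python
import sqlite3

# Stand-in for the MySQL point database; schema mirrors the tables above.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE points (
        point_id INTEGER PRIMARY KEY AUTOINCREMENT,
        latitude REAL NOT NULL,
        longitude REAL NOT NULL);
    CREATE TABLE point_tags (
        point_id INTEGER NOT NULL,
        tag TEXT NOT NULL,
        value TEXT,
        PRIMARY KEY (point_id, tag));
""")

def save_point(point):
    """Store a dict like {'latitude': ..., 'longitude': ..., 'name': ...}."""
    cur = db.execute(
        "INSERT INTO points (latitude, longitude) VALUES (?, ?)",
        (point["latitude"], point["longitude"]))
    pid = cur.lastrowid
    # Everything besides the coordinates becomes a key/value tag row.
    for tag, value in point.items():
        if tag not in ("latitude", "longitude"):
            db.execute("INSERT INTO point_tags VALUES (?, ?, ?)",
                       (pid, tag, value))
    return pid

def load_point(pid):
    """Return the point as a plain Python dictionary."""
    lat, lon = db.execute(
        "SELECT latitude, longitude FROM points WHERE point_id = ?",
        (pid,)).fetchone()
    point = {"latitude": lat, "longitude": lon}
    for tag, value in db.execute(
            "SELECT tag, value FROM point_tags WHERE point_id = ?", (pid,)):
        point[tag] = value
    return point

pid = save_point({"latitude": 37.09, "longitude": -118.34,
                  "name": "Columbine Peak", "class": "summit"})
print(load_point(pid))
```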

So far OpenLayers had proven to be pretty easy to use so I decided to give it a go as my editing front-end.  Next step was a layer for getting between the front- and back-ends.  A friend of mine suggested that writing a WSGI application would be about as simple as it gets and he was right.  After about another 100 lines of code I had a mechanism for loading and storing data from a web-page using JSON as the intermediary format.
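A WSGI bridge of that sort can indeed be tiny. Here’s a minimal sketch — the in-memory store, the payload shape, and the GET/POST split are my assumptions for illustration, not the author’s actual code:

```python
import json

# Hypothetical in-memory point store standing in for the database layer.
POINTS = {1: {"latitude": 37.09, "longitude": -118.34,
              "name": "Columbine Peak"}}

def application(environ, start_response):
    """GET returns all points as JSON; POST stores edited points."""
    if environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        edits = json.loads(environ["wsgi.input"].read(size))
        for pid, point in edits.items():
            POINTS[int(pid)] = point  # JSON object keys arrive as strings
        body = json.dumps({"status": "ok"})
    else:
        body = json.dumps(POINTS)
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode("utf-8")]
```

Served under wsgiref or any WSGI container, this gives the OpenLayers front-end a single endpoint to load and save point edits.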

After a bit of hacking on OpenLayers I had my end-to-end (albeit a bit fragile) solution.  I could select any point, drag it to its correct location (oftentimes referring to the USGS topos), and save it back to the server.  Getting to this point was fun as I got to toy around with several new technologies.  But now the grunt work…

The problem is that there are about 2000 named summits in the bounds of my map.  Ideally I would look at each one and decide where it should be plotted on the map.  I started this process and realized that it was going to take a while without help (want to help? send me an email at dan at closed contour dot com).  Narrowing my focus to summits in the vicinity of SPS peaks allowed me to actually finish the job without going insane.  In the process I realized that my existing SPS peak locations weren’t ideal (see below) so I fixed those too.  (Random side note: it turns out that there are only 11 SPS peaks that aren’t officially named in GNIS.)

Bad Mount Humphreys Location

With that work behind me I moved on to locating all of the GNIS ‘Gap’ entries — passes, saddles, etc. There are only a couple hundred of these guys so it wasn’t so time consuming.  I should have new tiles with these new point features up by the end of the weekend.

At this point I feel like I have a workable system for modifying point-based data on the map.  There’s a bunch of point features I want to add in the near future: trailheads, campgrounds, and glaciers (all 13 of them).  Then, it’s on to line data…

Lessons from the Weekend

Last weekend I climbed Liberty Cap and Mount Watkins in Yosemite National Park with my brother and his wife.  It was a great trip, the waterfalls were going crazy, the weather was sunny and not too hot, and the nights not too cold.  A good (late) start to the season.

Mount Starr King from Liberty Cap

While we were making our way up Liberty Cap I realized that the coloring of my map was too simple and somewhat misleading.  Here’s a snap of the Liberty Cap area from the v1 tiles (which are the published ones as I write this):

Liberty Cap (Closed Contour v1)

We left the trail around the ‘6000’ contour label and headed up the gully between Mount Broderick and Liberty Cap.  We then curled around and went up the broad gully on the north side of the dome.  So what’s the problem with the map?  It shows most of the route up the north side as being in the forest when in fact it was easy traveling over mostly rock.  In coloring the land-cover I was ignoring the density of the vegetation.  I spent some time last night working on some styling for this data and ended up with this:

Liberty Cap (Prototype v2)

This is a better representation of reality and is superior for trip planning (i.e. not going through dense forest if possible).  As you can see there are other styling elements added as well.

The same problem arose on the Mount Watkins hike.  Here’s the current map:

Mount Watkins (Closed Contour v1)

It shows the Snow Creek Trail (the steepest trail out of the valley) climbing through green, which could suggest shade, while in reality this route is mostly exposed to the sun.  Also, the cross-country travel is pretty straightforward once you leave the trail on the east side of Snow Creek.  Take a look at the v2 prototype map:

Mount Watkins (Prototype v2)

Unfortunately I’ve only applied these changes to the Yosemite land cover data at this time.  It’ll take a little bit of work and a re-import of the vegetation data to do it across the whole Sierra Nevada, but I feel like it’s a big improvement in the usability of the map.

I’ll close this post with a few other photos from the weekend.  First, the top of Nevada Falls:

Top of Nevada Falls

Because of the late spring snow and cool temperatures the rivers are just now reaching their peak flow.  Next, a couple from the Mount Watkins hike:

Half Dome from Mount Watkins

Clouds Rest from Mount Watkins

I think the Snow Creek Trail has the best views of Half Dome in the valley.  And the summit of Mount Watkins has the best views of Clouds Rest.  Two highly recommended hikes.

Glacier Contours

I consider the USGS 7.5′ quads to be among the best topographic maps around.  That said, working around their shortcomings is the primary cartographic motivation for Closed Contour.  Right now I feel like I have a ways to go before I can claim that my maps are ‘better’ (however you define it) than the USGS quads.  Really this is a long-winded way of saying that I’m going to blatantly steal ideas and styling from the USGS quads until they don’t have anything more to offer!

Here’s a prime example: contour lines on glaciers (or, in California, permanent snow).  Take a look at the area around North Palisade:


And now the same area as rendered in the v1 SPS Map tiles:

Closed Contour

See the problem?  On the USGS map, contour lines on the Palisade Glacier are light blue, as is the outline.  Even though the glacier is white on the Closed Contour map you can barely see it.  I decided that my maps needed this feature, both for usability and aesthetics.

Another motivation for making this map was to learn some new technologies.  Sure, I’d written a little SQL before and I’m pretty familiar with geospatial software and techniques, but I hadn’t ever really used PostgreSQL and/or PostGIS.  This would be a great opportunity to try to do something useful purely in SQL.

A little background: at this point all of the map data (except the hillshade raster) are stored in a haphazardly organized PostgreSQL with PostGIS database, each data source with its own table.  Actually, in the case of contours I have them separated into 3 tables: one for 40′, one for 200′, and one for 1000′ intervals.  This is an artifact of the contour generation/import process and something I’ll hopefully clean up.  There’s also a polygon land-cover table.

So all I need to do is to iterate over the snow/glacier land-cover polygons, find all contours inside them, slice them to be exactly inside the snow and mark them as such, and then subtract out the contours from the original data set.  No problem, right?  Here’s how I accomplished that in SQL:

CREATE OR REPLACE FUNCTION find_snow_contours()
    RETURNS void AS $$
DECLARE
   contour_line RECORD;
   snow_poly RECORD;
   g geometry;
BEGIN
   -- Holding place for contours that are purely contained by snow/glacier
   DROP TABLE IF EXISTS temp_snow_contours;
   CREATE TABLE temp_snow_contours (LIKE contours_40);

   -- Iterate over all snow polygons ('snow' is a temporary table with just snow polygons)
   FOR snow_poly IN SELECT * FROM snow LOOP
      -- Iterate over all contour lines that intersect this snow polygon
      FOR contour_line IN SELECT * FROM contours_40 WHERE ST_Intersects(the_geom, snow_poly.the_geom) LOOP

         -- Subtract off the snow section and save resulting lines as non-snow contours
         SELECT INTO g ST_Difference(contour_line.the_geom, snow_poly.the_geom);

         -- Delete the original contour line and replace it with the remaining segments
         -- It's common for there to be many segments after the difference, hence the 'ST_Dump'
         DELETE FROM contours_40 WHERE gid = contour_line.gid;
         INSERT INTO contours_40 (elev, is_snow, the_geom)
            SELECT contour_line.elev, 0, (ST_Dump(g)).geom;

         -- Add the intersection as snow contours
         SELECT INTO g ST_Intersection(contour_line.the_geom, snow_poly.the_geom);
         INSERT INTO temp_snow_contours (gid, elev, is_snow, the_geom)
            VALUES (contour_line.gid, contour_line.elev, 1, g);

      END LOOP;
   END LOOP;

   -- Finally put the snow contours back in the main table.
   INSERT INTO contours_40 (elev, the_geom, is_snow)
      SELECT elev, (ST_Dump(the_geom)).geom, is_snow FROM temp_snow_contours;
   DROP TABLE temp_snow_contours;
END;
$$ LANGUAGE plpgsql;
I imagine that a PostGIS SQL-guru would look at this function and shudder… forgive me, I’m a newb. Anyway, this technique worked out nicely and I’m pretty happy with the result:

Before on the left, After on the right

Not sure when I’ll generate them but I’ll definitely be rolling this feature into v2 of the tiles.

Mapping Challenges

I started working on this project about 3 weeks ago in my spare time.  The first few days I spent just trying to make some pretty pictures with some basic data, maybe a DEM, some contours, some summits, etc.  Once I convinced myself that what I had in mind was possible I began to work in earnest.  It quickly became clear that there were four very different aspects to making and sharing a map on the web:

  1. Data
  2. Tools and Infrastructure
  3. Cartography
  4. Publishing


Data

Finding geodata for a specific area, especially a popular area like the Sierra Nevada, is generally not too difficult.  The problem I’ve had is piecing together a cohesive dataset from many sources.  Take hiking trails, for example: I put together the hiking trails layer from three different data sources — a Yosemite trails shapefile from the NPS, a Sequoia/King’s Canyon trails shapefile from the NPS, and a generic trails ‘extract’ from the Forest Service.

Of course, the schemas for these shapefiles were all completely different.  The Yosemite and Sequoia/King’s Canyon data had trail names; the Forest Service data had none.  While the Forest Service data had large chunks missing in the National Park regions, there was still considerable overlap that I manually cleaned up.  After a few hours of work I have a single shapefile that covers the trails of my region of interest, but that’s just the beginning.  I want all of these trails (at least the big ones) to be labeled nicely and I’m going to have to go through and do that manually.
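Reconciling the schemas mostly comes down to mapping each source’s attribute names onto one common set of fields. A minimal sketch of that idea — the per-source field names here are invented for illustration, not taken from the actual shapefiles:

```python
# Map each source's attribute names onto one common trail schema.
# The source field names below are hypothetical examples.
FIELD_MAP = {
    "yosemite_nps": {"TRL_NAME": "name"},
    "seki_nps":     {"TRAIL_NM": "name"},
    "usfs":         {},  # the Forest Service extract had no trail names
}

def normalize(source, record):
    """Convert one source record into the common schema."""
    out = {"name": None, "source": source}
    for src_field, dst_field in FIELD_MAP[source].items():
        if src_field in record:
            out[dst_field] = record[src_field]
    return out

print(normalize("yosemite_nps", {"TRL_NAME": "Snow Creek Trail"}))
print(normalize("usfs", {"LENGTH": 3.2}))
```

In practice the same mapping would be applied feature-by-feature with OGR while writing the merged shapefile.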

Similar story for just about every other substantive data layer:

  • TIGER road data varies by year, with some of the earlier releases having more detail on the dirt/private roads that I care about, and I still haven’t found a public data source that indicates whether a stretch of road is paved or not.
  • I found many sources of populated places, some have way too many places, some not enough, some with population, some without.
  • Vegetation, hydrology, GNIS — all have consistency and accuracy problems.

Hopefully I’ll get a chance to go into detail on each of the above data layers in separate blog posts.  Suffice it to say that data wrangling, processing, converting, munging, and fact-checking has taken a large chunk of time so far and will continue to do so for the foreseeable future.

Tools and Infrastructure

The next challenge is the ‘engineering’ part, dealing with all of the software to create approximately 220,000 map tiles.  The software I’m relying on:

  • Mapnik — the backbone of the operation, it generates the map tiles,
  • GDAL/OGR — for data munging,
  • Postgres/PostGIS — for storing my processed data for Mapnik,
  • Eclipse with PyDev — for writing my Mapnik scripts,
  • GlobalMapper — it may have a painful UI at times and it’s not free, but it’s a quick way to view data and can convert just about any geodata format.

I feel that, for the most part, the bulk of my work in this department is done.  I’ve got a decent setup where I can test out cartographic ideas on my home machine and do the processing ‘in the cloud’ (see below).


Cartography

I think munging data will be the most time-consuming piece of the process, but that’s generally easy work.  The hard part is the cartography.  A map is only as good as each of the decisions made in creating it.

Should the hillshading be darker/lighter?  Should I use ‘Google Mercator’ or something better?  Should I include 200′ contour lines at this zoom level?  At what scale should the river flow stroke get smaller?  How much should I accent the SPS summits versus other summits?  There are thousands of such decisions to make.  I plan on writing about these decisions and why I made them one way or another.


Publishing

How do I go about creating my tiles and publishing them?  This falls in the ‘engineering’ category as well and has essentially nothing to do with cartography.  Personally I quite enjoyed learning about all of the potential options, but I can easily imagine would-be cartographers not sharing their work because of the technological burden.

I’ll write more about it later but the bottom line is that I’m using virtual machines on Amazon’s EC2 to generate the tiles (takes about 6 hours on a medium instance) and storing them on S3 (takes another 6 hours to upload them) where they are served via CloudFront.  I then have a cheap web host to serve the static OpenLayers-based map viewer.  Time will tell how expensive this is…

So there you have it.  I feel like I have the infrastructure and publishing pieces solved so now I can focus my energy and time on the data and cartography and eventually produce a map that achieves my goals.


Well, my friend Jeff tweeted about my site, so I guess in some sense it’s now public.  There were a few things I wanted to clean up before going public, but nothing too major:

  • Tiles for the outer zoom levels look like crap right now.  I’ve got a somewhat cleaner look and feel for those now, just need to generate.
  • I’ve cleaned up my trails data a little bit and got Yosemite and Sequoia/Kings Canyon data in there.
  • I’ve added some populated places to the map.
  • Added more data to the DEM so hillshading will be better, especially zoomed out.

Anyway, I hope to have my new tileset up and published by the end of the long weekend.  Update: new tiles are up.

What is Closed Contour?

Simply put, Closed Contour is my attempt to create modern topographic maps using publicly available data.  Here’s what I’m after:

  • User-specific data.  I’m into peak climbing and I want my maps to be focused on that.
  • Aesthetically pleasing.  A topographic map should look good at every scale and viewing distance.
  • Editable.  Ideally the map should be editable in a wiki style, but I’ll settle for just me being able to make changes.
  • Consistent.  The units, contour interval, typography, coloring, scale, etc. should all be consistent across the entire mapped area.
  • Printable.  The map should be printable at an accurate scale.
  • Web interface.  This isn’t too much to ask; there are plenty of good mapping interfaces (even open source) out there and you can already find the USGS topos in them.

There’s no single solution that meets all of my needs right now so I’m going to try to build one.  This blog will chronicle how that goes.