Methodology
Creating the slope-area plots required several steps, which are
described below. I used three different computer programs for the
analysis: ArcGIS to create the slope and contributing area data from
the elevation datasets, Matlab to format the data so that it could be
plotted, and Microsoft Excel to plot the data. A flowchart showing all
of the steps taken is attached at the end of this paper.
Step 1: Selecting Study Areas
The first step was to select specific study areas for the analysis. I decided that the previously glaciated landscape should be in an area of British Columbia that I was familiar with, so I chose the area near the Hurley Road north of Pemberton. The non-glaciated landscape had to be somewhere with a similar climate, especially in terms of precipitation and temperature, as water is one of the most important components of geomorphic processes. Finding such an area turned out to be very difficult, because the elevation data in the most suitable areas of Washington contained many void spaces (explained in Section 4, Error Analysis). The only part of Washington that was generally void-free was the southwestern area (west of Olympia, WA and north of Astoria, OR), which receives approximately three times the precipitation of the Hurley area (US National Climatic Data Center; Canada’s National Climate Archive). Both study areas are marked in Figure 2 below. Ideally I would have had more time to search for a better climatic match within BC.

Figure 2. Map of the Pacific Northwest showing the study areas. The
glaciated area near the Hurley Road in BC is shown in blue; the
non-glaciated area in southwestern Washington is shown in red. The
white line shows the maximum southern extent of glaciation,
approximated from USGS data (image from NASA Earth Observatory:
<http://earthobservatory.nasa.gov/IOTD/view.php?id=1667>).
Step 2: Finding the Data
I acquired the elevation data for BC from the UBC Department of Geography (the datasets were created by DMTI). The datasets used were 30-metre resolution DEMs of NTS 092J sheets 10, 11, 14, and 15. A higher-resolution DEM would have been better suited to this analysis, but I was unable to acquire one.
I downloaded the Washington elevation data from the USGS. These were raster elevation datasets in DTED format. The only areas of Washington available at 30-metre resolution were the mountainous Cascade regions and the west coast area; the rest of Washington was available only at 100-metre resolution. The Cascade region data were not suitable for this analysis because the many voids (areas with no data) affected nearly all of the drainage basins (Figure 3). I downloaded a similar dataset in BIL format, but the voids were present in that dataset as well. I was therefore forced to use the coastal area dataset for the analysis. This was not ideal, as it also contained some areas without data (although these were few and avoidable) and the region receives much more precipitation than the Hurley area in BC.

Figure 3. This is a
section of the Cascade region elevation dataset. The white shapes are
areas with no elevation data. These voids render any GIS slope-area
analysis of the region impossible.
Step 3: Aerial Image Interpretation
Aerial image interpretation usually involves examining stereo pairs of the study areas to make a preliminary description of the geomorphic process domains. Rough domains can be identified by looking for bounding features such as channel heads, debris flow fans, and sudden changes in slope. Due to time constraints I only used Google Earth images. From these I could identify hanging valleys in the BC study area, but little else, so I have not included any of this analysis.
Step 4: GIS Analyses (ArcMap)
The next few steps follow the first section of the geomorphtools
procedure by Crosby et al. (2007). The rest of their procedure requires
the installation of the Stream Profiler Tool, which cannot be done on
the Department of Geography computers. The tool creation and
methodology by Crosby et al. were funded by NSF Geomorphology and Land
Use Dynamics.
DEM Mosaic:
The study areas both covered multiple map sheets, which meant multiple
DEM datasets. To combine these separate DEMs into a single mosaic, I
used the ‘Mosaic to New Raster Tool’ in ArcMap.
DEM Projection:
The USGS Washington elevation datasets came in a geographic coordinate
system (measurements in latitude and longitude) and needed to be
converted to a projected coordinate system so that all units would be
in metres. I projected the newly combined dataset into the NAD 1983
UTM Zone 10N coordinate system; Zone 10N covers the western areas of
BC, WA, OR, and CA. The DMTI BC elevation data were already in this
coordinate system.
Slope:
To create the slope data for the slope-area plot, I used the ‘Slope
Tool’ in ArcMap, which calculates the slope gradient at each pixel in
the DEM. I enabled the option for slope gradient to be calculated as
percent rise (rather than as an angle in degrees). To then express the
slope gradient in metres per metre, I divided the values by 100 in
Excel before graphing the data.
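
As an illustration of what the percent-rise option produces, the calculation can be sketched in Matlab. This is only a simplified finite-difference version rather than ArcMap's exact neighbourhood method, and the variable names (such as dem) are illustrative, not layers from this report.

    % Simplified sketch of a percent-rise slope calculation (not ArcMap's
    % exact algorithm). 'dem' is assumed to be the projected 30-metre DEM grid.
    cellsize = 30;                              % pixel size in metres
    [dzdx, dzdy] = gradient(dem, cellsize);     % elevation change per metre in x and y
    slope_pct = 100 * sqrt(dzdx.^2 + dzdy.^2);  % percent rise = 100 * rise/run
    % dividing slope_pct by 100 gives the gradient in metres per metre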
Contributing Area:
I used the ‘Fill Tool’ to raise the elevation of any sinks in the DEMs.
Sinks are pixels with elevation values lower than all of those around
them. Raising these is required because otherwise any modelled water
flow that reaches a sink stops completely. Although in reality such a
pixel may be lower than its surroundings, water reaching it would
collect and eventually overflow, so the modelled flow should continue
downstream.
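
The idea behind the fill step can be sketched as follows. This is only a minimal illustration that raises single-pixel sinks; ArcMap's Fill Tool also removes larger, multi-pixel depressions. The name dem is again illustrative.

    % Minimal sketch of pit filling: raise any pixel that is lower than all
    % eight of its neighbours up to the lowest neighbouring elevation.
    changed = true;
    while changed
        changed = false;
        for r = 2:size(dem,1)-1
            for c = 2:size(dem,2)-1
                nbrs = dem(r-1:r+1, c-1:c+1);
                nbrs(2,2) = Inf;                 % ignore the centre pixel itself
                lowest = min(nbrs(:));
                if dem(r,c) < lowest             % the pixel is a sink
                    dem(r,c) = lowest;           % raise it so flow can continue
                    changed = true;
                end
            end
        end
    end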
After filling the DEM, I used the ‘Flow Direction Tool.’ This tool
implements the D8 single-flow-direction algorithm (explained in more
depth in Section 4, Error Analysis) and creates a raster layer giving
the direction of water flow from each pixel. This layer is used for
creating the drainage basin layer and for the Flow Accumulation
algorithm.
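
The D8 idea itself is straightforward and can be sketched in Matlab as follows. This is a conceptual illustration rather than ArcMap's implementation, and it assumes 'dem' is the filled DEM from the previous step.

    % Conceptual D8 single-flow-direction sketch: each pixel drains to whichever
    % of its eight neighbours gives the steepest descent.
    cellsize = 30;
    [nrows, ncols] = size(dem);
    flowdir = zeros(nrows, ncols);              % 1-8 = index of receiving neighbour, 0 = none
    drow = [-1 -1 -1  0  0  1  1  1];           % row offsets to the eight neighbours
    dcol = [-1  0  1 -1  1 -1  0  1];           % column offsets
    dist = cellsize * sqrt(drow.^2 + dcol.^2);  % diagonal neighbours are farther away
    for r = 2:nrows-1
        for c = 2:ncols-1
            steepest = 0;
            for k = 1:8
                drop = dem(r,c) - dem(r+drow(k), c+dcol(k));
                if drop/dist(k) > steepest
                    steepest = drop/dist(k);
                    flowdir(r,c) = k;           % flow goes to neighbour k
                end
            end
        end
    end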
The ‘Flow Accumulation Tool’ uses the flow direction layer to create a
new layer with an accumulation value for each pixel. This value is the
number of upstream pixels whose flow eventually passes through that
pixel, i.e. its contributing drainage area.
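
Given a flow direction grid, the accumulation count can be sketched by visiting pixels from highest to lowest and passing each pixel's count downstream. This is again a conceptual illustration that reuses the variables from the D8 sketch above, not ArcMap's implementation.

    % Sketch of flow accumulation from D8 directions: each pixel's count is the
    % number of upstream pixels that drain through it.
    flowacc = zeros(nrows, ncols);
    [~, order] = sort(dem(:), 'descend');       % process the highest pixels first
    for i = 1:numel(order)
        [r, c] = ind2sub(size(dem), order(i));
        k = flowdir(r, c);
        if k > 0                                % pixel has a downslope neighbour
            rr = r + drow(k);
            cc = c + dcol(k);
            flowacc(rr, cc) = flowacc(rr, cc) + flowacc(r, c) + 1;
        end
    end
    % contributing area in km^2 = flowacc * (30*30) / 1e6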
Drainage Basin Masks:
I used the ‘Watershed Tool’ to create raster layers of each watershed.
There are six small valley-wall watersheds and four large watersheds in
the BC study area, and four watersheds in the Washington study area.
Figures 4 and 5 are maps of the BC study area and the WA study area,
respectively. I also created two stream channel masks (one in Large
Basin #4, BC and one in Basin #1, WA) and masks of their corresponding
hillslope areas above the channel heads; ideally I would have done this
for all of the watersheds. I then used the ‘Extract by Mask’ tool to
extract the slope and contributing area data for each drainage basin,
for both streams, and for the two hillslope areas, and exported each
extracted dataset as a .txt file for use in Matlab.
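
For comparison, the effect of the ‘Extract by Mask’ step can be expressed in Matlab: keep only the pixels inside a given watershed and flag everything else as NoData. The grid and basin names here (slopegrid, flowaccgrid, watershedgrid, basin_id) are illustrative, not the actual layers used in this report.

    % Conceptual equivalent of 'Extract by Mask' for one drainage basin.
    basinslope = slopegrid;                        % copy of the slope grid
    basinslope(watershedgrid ~= basin_id) = -9999; % NoData outside the chosen basin
    basinarea = flowaccgrid;                       % same masking for contributing area
    basinarea(watershedgrid ~= basin_id) = -9999;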

Figure 4. Map of the BC study area drainage basins. The white circles
show the transverse basin numbers and the black circles show the large
basin numbers. Large Basin #4 is shown in black to contrast better with
the stream channel and hillslope area used in the analysis.

Note: The maps in Figures 4 and 5 are not drawn at the same scale.
Step 5: Formatting Data (Matlab)
I used Matlab to reformat all of the data so that it could be plotted.
I imported the .txt data into Matlab, reshaped each dataset from a grid
into a list, and removed all -9999 values (areas with no data) as well
as all values of 0 for both slope and area, since zero values cannot be
plotted on a logarithmic scale.
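
The reshaping and cleaning described above can be summarised in a few lines of Matlab. The variable names (slopegrid, areagrid) stand for one basin's exported slope and contributing area grids and are illustrative.

    % Reshape each exported grid into a list and remove NoData (-9999) and zero values.
    slope = slopegrid(:);                       % grid -> column list
    area  = areagrid(:);
    bad = (slope == -9999) | (area == -9999) | (slope == 0) | (area == 0);
    slope(bad) = [];                            % drop NoData cells and zeros, which
    area(bad)  = [];                            % cannot be plotted on logarithmic axes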
Step 6: Plotting Data (Excel)
Note: I could have used Matlab instead of Excel for all of the following methodology; however, I am not very familiar with the program.
To plot the slope-area data, I had to change the slope values from percent rise to m/m (i.e. rise/run) and the area values from a number of pixels to km². This required dividing all of the percent rise slope data by 100 and multiplying the area values by the area of each pixel (0.0009 km²). After these calculations I created the slope-area plots as scatter plots with logarithmic axes.
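
The same conversions could be written in Matlab as follows. The plots for this report were made in Excel; this sketch simply restates the arithmetic, using the cleaned slope and area lists from Step 5.

    % Convert units and draw a slope-area scatter plot on logarithmic axes.
    slope_mm = slope / 100;                     % percent rise -> m/m (rise/run)
    area_km2 = area * 0.0009;                   % pixel count -> km^2 (30 m x 30 m pixels)
    loglog(area_km2, slope_mm, '.');
    xlabel('Contributing area (km^2)');
    ylabel('Slope (m/m)');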
The larger drainage basins had an enormous number of data points to be graphed. Drainage Basin #1 (WA) had over 150,000 data points, which is around five times more than Excel can plot in a single series. Even at the 32,000 data point limit the plots had far too many points, and it was very difficult to interpret anything from them (Figure 6). I therefore took a random sample of 2000 values for each of the large drainage basins, which made the plots much easier to read. Figure 7 shows the minimal variation between two random samples for Drainage Basin #1 (WA).
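
A random sample of this size could also be drawn in Matlab; the sampling for this report was done before plotting in Excel, so the following is only an equivalent sketch using the converted lists from the previous sketch.

    % Plot a random sample of 2000 points from a large basin's data.
    idx = randperm(numel(area_km2), 2000);      % 2000 indices chosen without replacement
    loglog(area_km2(idx), slope_mm(idx), '.');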

Figure 6. Drainage
Basin #1 (WA) with 32,000 points graphed. There are too many data
points to properly analyze this graph.


Figure 7. Two random
samples of 2000 data points from Drainage Basin #1 (WA). Comparison of
the two shows there is little variation in the shape of the data.

Background image from: <http://originalbooner.wordpress.com/2011/10/11/backpacking-to-tenquille-lake-2/>