Data
The data were collected from several sources. The first was a set of hydrographic maps produced by the Canadian Hydrographic Society, which were found in the GIC and Koerner Library. The maps were first photocopied at a more portable size and then scanned onto the computer. The second half of my primary data consisted of locations extracted from 1951 fire insurance maps of the study area; these were to serve as my 'sources' of pollution. Lastly, I used the local regions layer provided by the City of Vancouver on the G Drive.
Methodology
First, I georeferenced the scanned map image against the local regions layer. Afterwards, I began digitizing the water depths of False Creek: first the unique depth values and then the contour lines. With over 500 points, I then manually entered the depth value of each one. After this tedious task, I was able to turn these points into a DEM via inverse distance weighted (IDW) interpolation. Along with the creation of the DEM, I also digitized my six point sources of pollution, which were chosen so that they would be well spread out along the False Creek shoreline.
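The interpolation itself was done with the GIS software's IDW tool, but the idea can be sketched in a few lines of Python. The sounding coordinates and depths below are made up purely for illustration.

import numpy as np

def idw(points, depths, cells, power=2, eps=1e-12):
    # Distance from every output cell to every digitized sounding
    d = np.linalg.norm(cells[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # nearer soundings weigh more
    return (w @ depths) / w.sum(axis=1)   # weighted mean depth per cell

# Three hypothetical soundings interpolated onto two hypothetical cells
soundings = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
depths = np.array([1.5, 4.0, 6.0])
cells = np.array([[5.0, 5.0], [1.0, 1.0]])
print(idw(soundings, depths, cells))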
Afterwards, I reclassified the depth values and gave them scores of 1, 2 or 3, with a score of 1 assigned to shallow areas (0-2 m) and a score of 3 to deep areas (5 m and deeper). With this layer essentially acting as a cost surface, I began running my analysis. First, I looked at the paths of water pollution under low tide and arbitrarily chose a point at the west end of False Creek as the 'end point': the location that all outward-flowing pollution 'has to pass' to reach the ocean. With my cost surface and my endpoint, I made my cost distance layer and began to run least cost paths from each of my six point sources.
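The reclassification amounts to simple thresholding of the depth raster. The sketch below assumes the middle class (score 2) covers depths of 2-5 m, which is not stated explicitly above; the actual step used the GIS reclassify tool.

import numpy as np

def reclassify(depth):
    # Score 1: shallow (0-2 m); score 2: assumed middle class (2-5 m); score 3: deep (5 m+)
    scores = np.full(depth.shape, 2, dtype=np.int8)
    scores[depth < 2] = 1
    scores[depth >= 5] = 3
    return scores

depths = np.array([[0.5, 2.3], [4.9, 7.1]])
print(reclassify(depths))   # [[1 2]
                            #  [2 3]]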
After looking at low tide, I proceeded to the high tide scenario. As with low tide, I made an endpoint at the western end of False Creek, a point the water 'will seek to reach'. Then I created another cost distance layer for this high tide scenario, along with another six cost paths (Map). In both scenarios, the paths were given 10 m buffers to roughly mimic the spread of the pollutant.
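The cost distance and cost path steps were run with the standard GIS tools, but the underlying idea is a shortest-path search over the scored raster. The toy example below uses an invented 4 by 4 grid of scores and 4-connected moves, which simplifies how the GIS tools actually accumulate cost.

import heapq
import numpy as np

def least_cost_path(cost, start, end):
    # Dijkstra over the cost raster: accumulate cell scores from start to end
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    prev = {}
    heap = [(float(cost[start]), start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (float(dist[nr, nc]), (nr, nc)))
    path, node = [end], end
    while node != start:          # walk back from the endpoint to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

scores = np.array([[1, 3, 3, 3],
                   [1, 2, 3, 3],
                   [2, 1, 1, 2],
                   [3, 3, 1, 1]])
print(least_cost_path(scores, (0, 0), (3, 3)))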
However, on reflection I realized that not all of these paths would reach the endpoints at the same time if water speed is constant, and that their routes might change as the tides reverse. So I started to account for this time factor. The first scenario had the pollution move under high tide for about 100 m and then reverse direction to move another 100 m under low tide. This was accomplished by intersecting the high tide path with a 100 m buffer around the point source. I then added a point to represent the origin of the reversed path. From this new origin another cost path was laid down and intersected with another 100 m buffer centred on the new origin. The cumulative result of these reversals is shown below.
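This reversal construction, intersecting a path with a buffer around its origin and starting the next path at the cut point, can be sketched with shapely. The coordinates below are invented; in the project the buffering and intersection were done with the GIS overlay tools.

from shapely.geometry import Point, LineString

source = Point(0, 0)
high_tide_path = LineString([(0, 0), (60, 10), (150, 20)])   # hypothetical cost path

reach = source.buffer(100)                        # distance covered before the tide turns
travelled = high_tide_path.intersection(reach)    # portion of the path actually followed
new_origin = Point(travelled.coords[-1])          # start of the reversed (low tide) path

print(round(travelled.length, 1), new_origin)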
As I later discovered, this scenario had a major flaw. Looking at the travel distance of a controlled oil spill in New York City harbour, the oil had travelled a few kilometres in the span of several hours; I had seriously underestimated the speed of the flow. So I decided to expand the buffers to 1 km. In addition, to make things less repetitive, I added another tide to the scenario, so the pollution would be affected by high tide, low tide and high tide again. Also, judging from the New York oil spill example, I was not mimicking the diffusion process of the spill well enough with just the 10 m buffer around the cost path, so I ended up buffering the paths by field.
First, I edited each path line and split it into three portions. Then I created a new column in the attribute table and entered the buffer widths I wanted. For my scenario, the buffer started at 10 m and increased by 5 m for each successive third of the line; a small sketch of this step follows the maps below. This carried on for the reversal to low tide and the second reversal back to high tide. The results are shown in the following.
High Tide
Low Tide
High Tide again
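For reference, the 'buffer by field' step can also be sketched with shapely. The widths of 10 m, 15 m and 20 m follow the scenario described above; the path coordinates are made up, and in the project the step was carried out with the GIS buffer tool reading the widths from the attribute column.

from shapely.geometry import LineString
from shapely.ops import substring, unary_union

path = LineString([(0, 0), (300, 0), (600, 150)])   # one hypothetical cost path

thirds = [substring(path, i / 3, (i + 1) / 3, normalized=True) for i in range(3)]
widths = [10, 15, 20]                                # buffer grows by 5 m per third

plume = unary_union([seg.buffer(w) for seg, w in zip(thirds, widths)])
print(round(plume.area))                             # area of the graduated plume polygon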