What is spatial interpolation?
Why is interpolation required?
Data sources for interpolation
Classification of interpolation procedures
Different methods for interpolation
Example of comparison of different methods
Interpolation is the procedure of predicting the value of attributes at unsampled sites from measurements made at point locations within the same area or region.
- Spatial interpolation converts data from point observations to contiguous fields, so that spatial patterns can be compared with those of other spatial entities.
Why is it required?
- The rationale behind interpolation is that, on average, values of the attribute are more likely to be similar at points close together than at those further apart.
Interpolation is required whenever there is a mismatch between the data available and the data needed, for example when point samples must be converted to a continuous surface for mapping or analysis.
Sources of data for continuous surfaces include point samples from ground surveys, remotely sensed imagery, and digitized maps.
Classification Of Interpolation Procedures:
Classification methods:
- global, deterministic, approximate method: These use 'soft' information and divide the area into regions characterized by the means and variances of the attributes, e.g. soil and landscape mapping. This method assumes homogeneity within boundaries.
Trend surface analysis:
- global, deterministic (empirical), approximate interpolator: The surface is approximated by a polynomial fitted to the data points. This is done by a multiple regression of the attribute values against the geographic coordinates. The polynomial function is then used to estimate values at grid points on a raster, or the value at any location. The elevation z at any point (x, y) on the surface is given by an equation in powers of x and y. Trend surfaces are susceptible to outliers in the data and are smoothing functions.
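A first-order trend surface can be sketched as an ordinary least-squares fit of a plane z = b0 + b1*x + b2*y; the sample coordinates and values below are made up for illustration.

```python
# Trend-surface sketch: fit a first-order (planar) polynomial
# z = b0 + b1*x + b2*y to scattered points by least squares.
import numpy as np

def fit_trend_surface(x, y, z):
    """Return coefficients (b0, b1, b2) of the best-fit plane."""
    A = np.column_stack([np.ones_like(x), x, y])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def predict(coeffs, x, y):
    """Evaluate the fitted trend surface at (x, y)."""
    return coeffs[0] + coeffs[1] * x + coeffs[2] * y

# Illustrative samples drawn from an exact plane z = 2 + 3x - y,
# so the regression recovers it exactly.
x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
z = 2 + 3 * x - y
coeffs = fit_trend_surface(x, y, z)
```

Higher-order trend surfaces simply add columns (x², xy, y², ...) to the design matrix.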
Fourier series:
- global, deterministic, approximate: This approximates the surface by overlaying a series of sine and cosine waves. The Fourier series can then be used to estimate grid values for a raster or at any point. The method is best for data sets that exhibit marked periodicity, such as ocean waves, but it is rarely incorporated in computing packages. It is particularly used in the analysis of remote sensing data.
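A minimal 1-D sketch of the idea: fit a small number of sine/cosine harmonics to a periodic profile by least squares. The sample spacing, period, and number of harmonics below are illustrative choices, not prescribed values.

```python
# Fourier-series sketch: approximate a periodic 1-D profile by
# least-squares fitting a few sine/cosine harmonics.
import numpy as np

def fourier_fit(t, z, n_harmonics, period):
    """Fit z(t) ~ a0 + sum_k ak*cos(k*w*t) + bk*sin(k*w*t)."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def fourier_eval(coeffs, t, n_harmonics, period):
    """Evaluate the fitted series at locations t."""
    w = 2 * np.pi / period
    out = np.full_like(np.asarray(t, float), coeffs[0])
    for k in range(1, n_harmonics + 1):
        out += coeffs[2 * k - 1] * np.cos(k * w * t) \
             + coeffs[2 * k] * np.sin(k * w * t)
    return out

# Illustrative data containing one clean harmonic of period 10.
t = np.linspace(0.0, 10.0, 50, endpoint=False)
z = 3.0 + 2.0 * np.sin(2 * np.pi * t / 10.0)
c = fourier_fit(t, z, n_harmonics=2, period=10.0)
```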
Nearest neighbours:
- local, deterministic, exact interpolator: all values are assumed to be equal to the value at the nearest known point.
Thiessen polygons: This method is best for nominal data and generates polygons with abrupt changes at their boundaries. The area is divided into polygons whose shapes are determined by the configuration of the data points; each polygon takes the attribute value of the point falling within it.
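The nearest-neighbour rule behind Thiessen polygons can be sketched as a simple nearest-point lookup; querying a grid of such points traces out the polygons. Sample points and values below are illustrative.

```python
# Nearest-neighbour / Thiessen sketch: every query location takes the
# attribute value of its closest sample point.
import numpy as np

def nearest_value(sample_xy, sample_z, query_xy):
    """Assign each query point the value at its nearest sample point."""
    # Squared distances between every query point and every sample point.
    d2 = ((query_xy[:, None, :] - sample_xy[None, :, :]) ** 2).sum(axis=2)
    return sample_z[np.argmin(d2, axis=1)]

samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
values = np.array([1.0, 2.0, 3.0])       # e.g. nominal class codes
queries = np.array([[1.0, 1.0], [9.0, 1.0], [1.0, 9.0]])
z = nearest_value(samples, values, queries)  # abrupt change at boundaries
```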
Pycnophylactic methods: Not an exact method, but one that conserves volumes. This is a continuous, smooth interpolator that removes the abrupt changes caused by inappropriate boundaries. It is based on mass-preserving reallocation from the primary data, and is used for spatially contiguous variables such as population densities and rainfall.
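A much-simplified 1-D sketch of the mass-preserving idea (in the spirit of Tobler's method, not a full implementation): repeatedly smooth zone-based densities onto a fine grid, then rescale each zone so its total volume is conserved. The zone layout and iteration count are illustrative assumptions.

```python
# Pycnophylactic (mass-preserving) sketch in 1-D.
import numpy as np

def pycnophylactic_1d(zone_totals, cells_per_zone, n_iter=100):
    """Smooth zone totals onto a fine grid while conserving each total."""
    zones = np.repeat(np.arange(len(zone_totals)), cells_per_zone)
    # Start from a uniform spread of each zone's total over its cells.
    dens = np.repeat(np.asarray(zone_totals, float) / cells_per_zone,
                     cells_per_zone)
    for _ in range(n_iter):
        # Neighbour averaging (edges replicated) smooths the surface...
        padded = np.pad(dens, 1, mode="edge")
        dens = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
        dens = np.clip(dens, 0.0, None)
        # ...then each zone is rescaled so its volume is conserved.
        for i, total in enumerate(zone_totals):
            mask = zones == i
            s = dens[mask].sum()
            if s > 0:
                dens[mask] *= total / s
    return dens

# Three illustrative zones with totals 30, 90 and 60.
dens = pycnophylactic_1d([30.0, 90.0, 60.0], cells_per_zone=10)
```

The result varies smoothly across zone boundaries, yet each zone still sums to its original total.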
Moving averages and Inverse Distance Weighting:
-Local, deterministic, exact interpolator: Inverse distance interpolation methods combine the weighting by proximity of nearest-neighbour methods with the gradual change of trend surface approaches. Commonly used in GIS to generate raster overlays from point data.
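A minimal IDW sketch: each prediction is a weighted average of the sample values with weights 1/d^p. The power p = 2 is the common choice; the data below are illustrative.

```python
# Inverse-distance-weighting sketch.
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """Predict values at query points as inverse-distance-weighted means."""
    d = np.sqrt(((query_xy[:, None, :] - sample_xy[None, :, :]) ** 2)
                .sum(axis=2))
    # Clamping tiny distances means a query that coincides with a sample
    # effectively returns that sample's value (an exact interpolator).
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * sample_z[None, :]).sum(axis=1) / w.sum(axis=1)

samples = np.array([[0.0, 0.0], [4.0, 0.0]])
values = np.array([10.0, 20.0])
queries = np.array([[0.0, 0.0], [2.0, 0.0]])  # on a sample; midway between
z = idw(samples, values, queries)
```

At the midway point the two samples are weighted equally, so the prediction is their mean.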
Splines:
- local, deterministic, exact interpolator: These are the mathematical equivalents of the flexible ruler. They are piece-wise functions fitted exactly to a small number of points, while also ensuring that the joins between the pieces are continuous. Cubic splines piece together segments of cubic polynomials so that the first and second derivatives are continuous across the joins. B-splines are composed of sums of lower-order splines defined over a restricted area near the data points. Surfaces may also be splined with thin-plate splines, which use a locally smoothed average.
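The "flexible ruler" can be sketched as a natural cubic spline through equally spaced 1-D points. This is a minimal pure-NumPy version under the natural end condition (end second derivatives set to zero); real work would use a library routine, and the knot values are illustrative.

```python
# Natural cubic spline sketch for equally spaced 1-D data.
import numpy as np

def natural_cubic_spline(x, y):
    """Return a callable interpolant through (x, y); x equally spaced."""
    n = len(x) - 1
    h = x[1] - x[0]
    # Tridiagonal system for the second derivatives M at the knots;
    # the natural condition fixes M[0] = M[n] = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, 4.0, 1.0
        b[i] = 6.0 * (y[i - 1] - 2.0 * y[i] + y[i + 1]) / h**2
    M = np.linalg.solve(A, b)

    def s(t):
        # Locate the segment containing t, then evaluate its cubic piece.
        i = min(max(int((t - x[0]) // h), 0), n - 1)
        a, c = x[i + 1] - t, t - x[i]
        return (M[i] * a**3 + M[i + 1] * c**3) / (6 * h) \
             + (y[i] - M[i] * h**2 / 6) * a / h \
             + (y[i + 1] - M[i + 1] * h**2 / 6) * c / h
    return s

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 0.0, 1.0])
s = natural_cubic_spline(xs, ys)
```

The spline passes through every knot exactly, which is what makes it an exact interpolator.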
Kriging:
- local/global, stochastic, exact interpolator: The basis of this technique is the rate at which the variance between points changes over space. This is expressed in the variogram, which shows how the average difference between values at points changes with the distance between them. Simple kriging assumes that the surface has a constant mean, no underlying trend, and that all variation is statistical. Universal kriging assumes that a deterministic trend in the surface underlies the statistical variation. In either case, once trends have been accounted for (or assumed not to exist), all other variation is assumed to be a function of distance.
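A minimal simple-kriging sketch with a known constant mean: the weights solve a linear system built from an assumed covariance model. The exponential covariance, its sill and range, and the sample data are all illustrative assumptions, not fitted from a variogram.

```python
# Simple-kriging sketch with an assumed exponential covariance
# C(d) = sill * exp(-d / range_).
import numpy as np

def exp_cov(d, sill=1.0, range_=2.0):
    """Assumed covariance model; parameters are illustrative."""
    return sill * np.exp(-d / range_)

def simple_krige(sample_xy, sample_z, query_xy, mean):
    """Predict the value at one query point, given the known mean."""
    d_ss = np.sqrt(((sample_xy[:, None] - sample_xy[None, :]) ** 2).sum(-1))
    d_sq = np.sqrt(((sample_xy - query_xy) ** 2).sum(-1))
    # Kriging weights solve C w = c0; the prediction is the mean plus
    # the weighted sum of the residuals from the mean.
    w = np.linalg.solve(exp_cov(d_ss), exp_cov(d_sq))
    return mean + w @ (sample_z - mean)

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([5.0, 6.0, 4.0])
z_hat = simple_krige(samples, values, np.array([0.0, 0.0]), mean=5.0)
```

Querying at a sample location reproduces the sample value, illustrating that kriging is an exact interpolator.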
Comparison of different methods:
Results of different methods applied to the interpolation of elevation data -- visual comparison (figures)
Burrough, P.A. and McDonnell, R.A., 1998. Principles of Geographical Information Systems. Chapters 5 and 6.
Isaaks, E.H. and Srivastava, R.M., 1989. An Introduction to Applied Geostatistics. Oxford University Press, New York, Oxford. (a very good introductory textbook)
Mitasova, H., Mitas, L. and Brown, W.M., 1995. Modelling spatially and temporally distributed phenomena: new methods and tools for GRASS GIS. International Journal of Geographical Information Systems 9: 433-446.
http://www.geog.ubc.ca/courses/klink/gis.notes/ncgia/u40.html - Spatial Interpolation (Also see the references cited)
http://www.ncsa.uiuc.edu/Apps/CMP/lmitas/interp/interp.html - This site describes some applications and comparisons of different interpolation methods.
- A Comparison of Spatial Interpolation Techniques in Temperature Estimation