Monday, May 9, 2016

Lab 8: Spectral Signature Analysis & Resource Modeling

Goals

The goal of this lab was to gain experience in measuring and interpreting the spectral reflectance signatures of various Earth surface and near-surface materials captured in satellite imagery.  Additionally, we explored basic monitoring of Earth resources using remote sensing band ratio techniques.  For this lab, the study area was Eau Claire County and Chippewa County in Wisconsin.


Methods

All exercises in this lab were completed using Erdas Imagine 2015 and ArcMap 10.3.1.

Spectral Signature Analysis
During the first part of this lab, we measured and plotted the spectral reflectances of 12 different surface features from a satellite image of Eau Claire in Erdas Imagine.  These materials included moving water, vegetation, dry soil, and others.  To locate each desired surface, I linked my Erdas Imagine Viewer to Google Earth (Figure 1).  Upon finding the surface, I used the Polygon tool to draw a square around it and opened the Signature Editor to view the spectral signature mean plot for the polygon area.  After collecting the spectral reflectances of all 12 surfaces, I displayed all of the spectral reflectance signature mean plots together and compared them (Figure 2).  Differences between materials become apparent when their spectral reflectance signature mean plots are examined; for example, the curves for dry and moist soil show clear discrepancies (Figure 3).  Dry soil shows higher reflectance overall because of its low moisture content, while moist soil shows lower reflectance because water absorbs much of the incoming energy.
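For readers curious about the idea behind the Signature Editor's mean plot, the sketch below (not the Erdas workflow itself) shows how a mean value per band could be computed for a polygon AOI in Python; the file name, AOI coordinates, and band layout are hypothetical placeholders.

    # Hedged sketch: mean spectral signature for a polygon AOI, assuming a
    # multiband image readable with rasterio. File name and coordinates are
    # placeholders, not the lab's actual data.
    import rasterio
    from rasterio.mask import mask

    x0, y0, x1, y1 = 617000, 4962000, 617300, 4962300   # placeholder UTM corners
    aoi = [{"type": "Polygon",
            "coordinates": [[(x0, y0), (x1, y0), (x1, y1), (x0, y1), (x0, y0)]]}]

    with rasterio.open("eau_claire_scene.img") as src:    # hypothetical file name
        clipped, _ = mask(src, aoi, crop=True, nodata=0)  # array: bands x rows x cols

    # One mean value per band, comparable to a Signature Editor mean plot
    signature = [band[band > 0].mean() for band in clipped]
    print(signature)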


Figure 1: spectral reflectance signature collection process for riparian vegetation;
Erdas Imagine (left), Google Earth (right)

Figure 2: all spectral reflectance signatures

Figure 3: spectral reflectance signature mean plot for dry and moist soil


Resource Monitoring
Figure 4: NDVI ratio
The second part of this lab involved resource monitoring of vegetation health and soil health through simple band ratios.  First, we used Erdas Imagine to apply the normalized difference vegetation index (NDVI) to an image of Eau Claire County and Chippewa County to estimate the abundance of vegetation.  This involved using the NDVI function within Erdas Imagine, which carries out the NDVI ratio (Figure 4).  This produced a black and white NDVI image, with very bright areas indicating abundant, healthy vegetation and darker areas indicating sparse or no vegetation (Figure 5).  I then imported this image into ArcMap and created a map of vegetation health (Figure 8).  
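The ratio itself is simple, (NIR - Red) / (NIR + Red); the sketch below shows it with numpy, assuming the red and near-infrared bands are available as separate arrays (the array names are assumptions, not the lab's exact inputs).

    # Hedged sketch of the NDVI band ratio: (NIR - Red) / (NIR + Red)
    import numpy as np

    def ndvi(red, nir):
        red = red.astype("float32")
        nir = nir.astype("float32")
        denom = nir + red
        # Values range from about -1 (water) to +1 (dense vegetation);
        # pixels with a zero denominator are set to 0
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(denom == 0, 0.0, (nir - red) / denom)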

 
Figure 5: NDVI image viewed in Erdas Imagine


Figure 6: ferrous mineral ratio
Next, I used Erdas Imagine to apply the ferrous mineral ratio to an image of Eau Claire County and Chippewa County to determine the spatial distribution of iron content in the soils of this area.  This involved using the Indices function under the Unsupervised tab in the Raster section of Erdas Imagine, which carries out the ferrous mineral ratio (Figure 6).  This produced a black and white image, with lighter areas indicating higher concentrations of ferrous minerals and darker areas indicating lower concentrations (Figure 7).  I then imported this image into ArcMap and created a map of soil health (Figure 9).  
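The ferrous mineral index is another simple ratio, middle-infrared (SWIR) over near-infrared.  A minimal numpy sketch follows; treating the inputs as generic SWIR and NIR arrays is my assumption, not a detail taken from the lab.

    # Hedged sketch of the ferrous mineral ratio: SWIR / NIR
    import numpy as np

    def ferrous_minerals(swir, nir):
        swir = swir.astype("float32")
        nir = nir.astype("float32")
        # Higher values suggest iron-bearing soils; zero NIR pixels are set to 0
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(nir == 0, 0.0, swir / nir)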

Figure 7: ferrous mineral map viewed in Erdas Imagine



Results

Below are the results of my methods, namely two maps depicting vegetation health and soil health (Figure 8, 9).  As these maps show, carrying out simple band ratio analysis can be very helpful in monitoring Earth resources.  Such data may be especially useful to farmers or the Wisconsin DNR for gauging the health of various landscapes and planning future courses of action. 

Figure 8: map displaying vegetation health

Figure 9: map displaying soil health

Sources


Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Monday, May 2, 2016

Lab 7: Photogrammetry

Goals

The aim of this lab was to develop skills in performing key photogrammetric tasks on aerial photographs and satellite images.  Specifically, this lab encompassed: 

  • calculation of photographic scales
  • measurement of areas & perimeters
  • calculation of relief displacement
  • stereoscopy
  • orthorectification of satellite images


Methods

Scale Calculation 
There are two main methods used to calculate the scale of a vertical aerial image: 
  1. Compare the size of objects measured in the real world with the same objects measured on the photograph
  2. Relate the camera lens focal length to the flying height of the aircraft above the terrain
Figure 1: diagram depicting two methods of scale calculation


During the first part of this lab, we explored both of these methods.  First, I used a ruler to measure various distances on the images displayed on my computer monitor.  I then divided each measured photo distance by the corresponding known real-world distance to derive the photo scale (mathematical process shown in Figure 1).  After this, I applied the second method of scale calculation using given focal length and flying height values for a photo of Eau Claire (mathematical process shown in Figure 1). 
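As a quick illustration of the arithmetic, the two methods look like this; the numbers below are made up, not the lab's actual measurements.

    # Hedged sketch of both scale calculation methods with illustrative values

    # Method 1: photo distance vs. known ground distance (same units)
    photo_dist_m  = 0.0269          # e.g. 2.69 cm measured on the photo
    ground_dist_m = 1071.0          # known real-world distance
    scale_1 = photo_dist_m / ground_dist_m       # about 1:39,800

    # Method 2: focal length over flying height above the terrain
    focal_length_m  = 0.152         # 152 mm camera lens
    flying_height_m = 6000.0        # aircraft altitude above sea level
    terrain_elev_m  = 250.0         # average terrain elevation
    scale_2 = focal_length_m / (flying_height_m - terrain_elev_m)   # about 1:37,800

    print(f"Method 1: 1:{1 / scale_1:,.0f}   Method 2: 1:{1 / scale_2:,.0f}")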


Area & Perimeter Measurement 
For this part of the lab, I used the 'Measure Perimeters and Areas' digitizing tool in Erdas Imagine to calculate the area and perimeter of a lagoon on a given aerial photograph (Figure 2).  

Figure 2: calculation of lagoon area and perimeter
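Behind a measure tool like this is ordinary coordinate geometry.  The sketch below shows one way the area (shoelace formula) and perimeter could be computed from digitized vertices in map units; the vertex list is hypothetical, not the lagoon's actual outline.

    # Hedged sketch: area (shoelace formula) and perimeter from polygon vertices
    import math

    verts = [(0, 0), (120, 10), (150, 90), (40, 130)]   # placeholder outline, metres

    def area_perimeter(verts):
        area, perim = 0.0, 0.0
        n = len(verts)
        for i in range(n):
            x1, y1 = verts[i]
            x2, y2 = verts[(i + 1) % n]                 # wrap back to the first vertex
            area  += x1 * y2 - x2 * y1                  # shoelace term
            perim += math.hypot(x2 - x1, y2 - y1)       # edge length
        return abs(area) / 2.0, perim

    print(area_perimeter(verts))    # (area in square metres, perimeter in metres)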


Relief Displacement Calculation
Figure 3: relief displacement equation
Relief displacement occurs when tall objects and features on an aerial photograph are displaced radially from their true planimetric locations (Figure 4).  This distortion can be corrected by applying an equation (Figure 3).  During this portion of the lab, I used given values for object height, radial distance from the principal point, and flying height to apply the relief displacement equation.   
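For a concrete sense of the equation d = (h x r) / H, where h is the object height, r is the radial distance of the object from the principal point on the photo, and H is the flying height above the local datum, here is a small worked sketch with made-up numbers rather than the lab's given values.

    # Hedged worked example of relief displacement: d = (h * r) / H
    object_height_ft = 300.0      # h, height of the object above the datum
    radial_dist_in   = 4.5        # r, radial distance measured on the photo
    flying_height_ft = 4000.0     # H, flying height above the local datum

    displacement_in = (object_height_ft * radial_dist_in) / flying_height_ft
    print(f"Relief displacement: {displacement_in:.3f} in")   # about 0.338 in, radially outward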

Figure 4: diagram depicting effect of relief displacement
Stereoscopy
The aim of this part of the lab was to generate two 3D anaglyph images, first using a digital elevation model (DEM) and then using a digital surface model (DSM).  The resulting anaglyph images were then viewed and analyzed through red-cyan anaglyph glasses.  

Anaglyph Creation with DEM
For the first anaglyph image, I used an image of Eau Claire at 1 meter spatial resolution and a DEM of the city at 10 meter spatial resolution.  Then, using the Anaglyph function under the Terrain tab in Erdas Imagine, I specified the parameters (vertical exaggeration = 1, all other settings left at their defaults) in the Anaglyph Generation window and ran the model.  This produced the anaglyph shown in the Results section (Figure 12).  


Anaglyph Creation with DSM
For the second anaglyph image, I used an image of Eau Claire with 1 meter spatial resolution and a LiDAR-derived DSM of the city at 2 meter spatial resolution.  Using the Anaglyph function under Terrain in Erdas Imagine once more, I specified the same parameters as I had for the first anaglyph.  After running the model, a second anaglyph was produced, shown in the Results section (Figure 13).    
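The Erdas model handles the stereo generation from the elevation data internally, but the final red-cyan compositing step is simple enough to sketch.  The snippet below only illustrates that compositing principle with placeholder file names; it is not the Anaglyph function's actual procedure.

    # Hedged sketch of red-cyan anaglyph compositing from an existing stereo pair
    import numpy as np
    from PIL import Image

    left  = np.array(Image.open("view_left.png").convert("L"))    # placeholder inputs
    right = np.array(Image.open("view_right.png").convert("L"))

    # Left view drives the red channel; the offset right view drives green and blue
    anaglyph = np.dstack([left, right, right]).astype("uint8")
    Image.fromarray(anaglyph).save("anaglyph.png")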


Orthorectification
This part of the lab served as an introduction to the Erdas Imagine Leica Photogrammetry Suite (LPS), which is used in digital photogrammetry for triangulation, orthorectification of images, and extraction of digital surface and elevation models, among other tasks.  As such, we used LPS to orthorectify images and create a planimetrically true orthoimage.
Two orthorectified images were used as sources for ground control measurements, namely a SPOT image and an orthorectified aerial photo.  To carry out the orthorectification process, I used the workflow below; a rough open-source sketch of the same general idea follows the list.  

  • Create new project
  • Select horizontal reference source: SPOT image
  • Collect GCPs (Figure 5, 6)
  • Add second image to block file: aerial photo
  • Collect GCPs in second image
  • Perform automatic tie point collection (Figure 7, 8)
  • Triangulate the images (Figure 9) 
  • Orthorectify the images (Figure 10, 11)
  • Analyze the orthoimages 
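This is not LPS, but the core idea of tying an image to ground control and resampling it to a map grid can be sketched with GDAL's Python bindings.  Everything below (file names, GCP values, the EPSG code) is a placeholder, and a GCP-based warp like this is a simpler rectification than the full sensor-model orthorectification LPS performs with elevation data.

    # Hedged sketch: attach GCPs to an image, then warp it to a map projection with GDAL
    from osgeo import gdal

    # Each GCP pairs a map coordinate (easting, northing, elevation) with a pixel/line location
    gcps = [
        gdal.GCP(617000.0, 4962000.0, 250.0, 1024.5,  768.2),
        gdal.GCP(619500.0, 4963500.0, 255.0, 2300.0,  410.7),
        gdal.GCP(616200.0, 4959800.0, 248.0,  512.3, 1900.1),
        # ...enough well-distributed points for the chosen transform
    ]

    gdal.Translate("photo_gcps.tif", "aerial_photo.tif",
                   GCPs=gcps, outputSRS="EPSG:32615")                 # attach control + datum
    gdal.Warp("photo_rectified.tif", "photo_gcps.tif",
              tps=True, resampleAlg="bilinear", dstSRS="EPSG:32615")  # resample to the map grid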

Figure 5: GCP collection

Figure 6: Point Measurement Box displaying GCPs collected from first image

Figure 7: Autotie point collection and summary

Figure 8: Point Measurement Box displaying GCPs (triangles) and tie points (squares)

Figure 9: Triangulation Summary

Figure 10: Orthoimage creation
Figure 11: close-up of the boundary between the two orthoimages


Results

The results of my methods are displayed below.  

Anaglyph Creation
When viewed through red-cyan anaglyph glasses, the features on both anaglyphs appear three-dimensional; however, the second anaglyph image displays a stronger 3D effect (Figure 12, 13).  This is likely because the first anaglyph was created using a DEM, which represents only the general bare-earth surface, while the second was created using a DSM, which also captures above-ground details such as vegetation and buildings.  Additionally, the difference between the two images may stem from spatial resolution: the DEM used for the first anaglyph was at 10 meter resolution, while the LiDAR-derived DSM used for the second was at 2 meter resolution. 


Figure 12: first anaglyph created with DEM

Figure 13: second anaglyph created with DSM

Orthorectification
The orthoimages produced from my methods, when laid over each other, display a high degree of spatial accuracy in the overlap zone (Figure 14).  

Figure 14: final orthoimages overlaid

Sources

Digital elevation model (DEM) for Palm Springs, CA is from Erdas Imagine, 2009. 

Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010. 

LiDAR-derived digital surface models (DSMs) for sections of Eau Claire and Chippewa counties are from the Eau Claire County and Chippewa County governments, respectively. 

National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009. 

National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005. 

Scale calculation image is from Humboldt State University Geospatial Curriculum, 2014. 

SPOT satellite images are from Erdas Imagine, 2009.