Monday, May 9, 2016

Lab 8: Spectral Signature Analysis & Resource Modeling

Goals

The goal of this lab was to gain experience in measuring and interpreting the spectral reflectance signatures of various Earth surface and near-surface materials captured in satellite images.  Additionally, we explored basic monitoring of Earth resources using remote sensing band ratio techniques.  For this lab, the study area was Eau Claire County and Chippewa County in Wisconsin. 


Methods

All exercises in this lab were completed using Erdas Imagine 2015 and ArcMap 10.3.1.

Spectral Signature Analysis
During the first part of this lab, we measured and plotted the spectral reflectances of 12 different surface features from a satellite image of Eau Claire in Erdas Imagine.  These materials included moving water, vegetation, dry soil, etc.  To locate each desired surface, I linked my Erdas Imagine Viewer to Google Earth (Figure 1).  Then, upon finding the desired surface, I used the Polygon tool to draw a polygon around the surface and opened the Signature Editor to view the polygon area's spectral signature mean plot.  After collecting the spectral reflectances of all 12 surfaces, I displayed all of the spectral reflectance signature mean plots together and analyzed them (Figure 2).  Differences between materials can be detected by examining their spectral reflectance signature mean plots; for example, the curves for dry and moist soil show clear discrepancies (Figure 3).  Dry soil shows higher overall reflectance due to its lower moisture content, while moist soil displays lower reflectance because of its higher moisture content. 
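
For a rough sense of what the Signature Editor's mean plot represents, the sketch below averages each band of an image over an AOI mask and plots the result.  The array, band count, and AOI location are hypothetical stand-ins, not the lab data.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical 6-band image (bands, rows, cols) and a boolean mask
    # marking the pixels inside a digitized AOI polygon.
    image = np.random.randint(0, 255, size=(6, 400, 400)).astype(float)
    aoi_mask = np.zeros((400, 400), dtype=bool)
    aoi_mask[150:170, 200:220] = True  # stand-in for the drawn polygon

    # Mean value of each band over the AOI (the "signature mean plot").
    signature = image[:, aoi_mask].mean(axis=1)

    plt.plot(range(1, len(signature) + 1), signature, marker="o")
    plt.xlabel("Band")
    plt.ylabel("Mean pixel value")
    plt.title("Spectral signature mean plot for one AOI")
    plt.show()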


Figure 1: spectral reflectance signature collection process for riparian vegetation;
Erdas Imagine (left), Google Earth (right)

Figure 2: all spectral reflectance signatures

Figure 3: spectral reflectance signature mean plot for dry and moist soil


Resource Monitoring
Figure 4: NDVI ratio
The second part of this lab involved resource monitoring of vegetation health and soil health through simple band ratios.  First, we used Erdas Imagine to implement the normalized difference vegetation index (NDVI) on an image of Eau Claire County and Chippewa County to determine the abundance of vegetation.  This involved using the NDVI function within Erdas Imagine, which carried out the NDVI ratio (Figure 4).  This produced a black and white NDVI image, with brighter areas indicating higher levels of vegetation and darker areas indicating less vegetation (Figure 5).  I then imported this image into ArcMap and created a map of vegetation health (Figure 8).  
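
The NDVI ratio shown in Figure 4 is (NIR - Red) / (NIR + Red).  As a minimal sketch of that calculation (assuming the near-infrared and red bands have already been read into arrays; for a Landsat TM image these would typically be bands 4 and 3), the Python below computes the same index:

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        denom = nir + red
        denom[denom == 0] = 1.0          # guard against division by zero
        return (nir - red) / denom

    # Hypothetical band arrays standing in for the Eau Claire / Chippewa scene.
    nir_band = np.random.randint(0, 255, size=(100, 100))
    red_band = np.random.randint(0, 255, size=(100, 100))
    ndvi_image = ndvi(nir_band, red_band)   # values range from -1 to 1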

 
Figure 5: NDVI image viewed in Erdas Imagine


Figure 6: ferrous mineral ratio
Next, I used Erdas Imagine to implement the ferrous mineral ratio on an image of Eau Claire County and Chippewa County to determine the spatial distribution of iron content in soils within this area.  This involved using the Indices function under the Unsupervised tab in the Raster section of Erdas Imagine, which carried out the ferrous mineral ratio (Figure 6).  This produced a black and white image, with lighter areas indicating higher concentrations of ferrous minerals and darker areas indicating lower concentrations (Figure 7).  I then imported this image into ArcMap and created a map of soil health (Figure 9).  
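
The ferrous mineral ratio shown in Figure 6 divides a middle-infrared (SWIR) band by the near-infrared band.  Below is a minimal sketch under that assumption; the specific bands depend on the sensor, and the arrays here are hypothetical rather than the lab image.

    import numpy as np

    def ferrous_minerals(swir, nir):
        """Ferrous mineral ratio: SWIR / NIR (brighter = more ferrous minerals)."""
        swir = swir.astype(float)
        nir = nir.astype(float)
        nir[nir == 0] = 1.0              # avoid division by zero
        return swir / nir

    # Hypothetical band arrays.
    swir_band = np.random.randint(1, 255, size=(100, 100))
    nir_band = np.random.randint(1, 255, size=(100, 100))
    fm_image = ferrous_minerals(swir_band, nir_band)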

Figure 7: ferrous mineral map viewed in Erdas Imagine



Results

Below are the results of my methods, namely two maps depicting vegetation and soil health (Figures 8 and 9).  As these maps show, carrying out simple band ratio analysis can be very helpful in monitoring Earth resources.  Such data may be especially useful to farmers or the Department of Natural Resources (DNR) for gauging the health of various landscapes and determining future plans of action. 

Figure 8: map displaying vegetation health

Figure 9: map displaying soil health

Sources


Satellite image is from Earth Resources Observation and Science Center, United States Geological Survey.

Monday, May 2, 2016

Lab 7: Photogrammetry

Goals

The aim of this lab was to develop skills in performing key photogrammetric tasks on aerial photographs and satellite images.  Specifically, this lab encompassed: 

  • calculation of photographic scales
  • measurement of areas & perimeters
  • calculation of relief displacement
  • stereoscopy
  • orthorectification of satellite images


Methods

Scale Calculation 
There are two main methods used to calculate the scale of a vertical aerial image: 
  1. Compare the size of objects measured in the real world with the same objects measured on a photograph
  2. Find the relationship between the camera lens focal length and the flying height of the aircraft above the terrain
Figure 1: diagram depicting two methods of scale calculation


During the first part of this lab, we explored both of these methods.  First, I used a ruler to measure distances on images displayed on my computer monitor.  I then computed the scale by comparing each measured photo distance to a given real-world distance, expressed in the same units (mathematical process shown in Figure 1).  After this, I applied the second method of scale calculation, dividing the camera focal length by the flying height above the terrain, using given values for a photo of Eau Claire (mathematical process shown in Figure 1). 
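
To make the two calculations concrete, here is a small worked sketch in Python.  All of the distances, focal length, and flying height values below are made up for illustration and are not the values used in the lab.

    # Method 1: compare a distance measured on the photo with the same
    # distance measured on the ground (converted to the same units).
    photo_distance_in = 2.0                  # hypothetical, inches on the photo
    ground_distance_ft = 6600.0              # hypothetical ground distance
    ground_distance_in = ground_distance_ft * 12
    scale_1 = ground_distance_in / photo_distance_in
    print(f"Method 1 scale: 1:{scale_1:,.0f}")        # 1:39,600

    # Method 2: scale = focal length / (flying height above sea level
    # minus terrain elevation), i.e. f / (H - h).
    focal_length_mm = 152.0                  # hypothetical camera focal length
    flying_height_ft = 20000.0               # hypothetical altitude (H)
    terrain_elevation_ft = 800.0             # hypothetical terrain elevation (h)
    focal_length_ft = focal_length_mm / 304.8
    scale_2 = (flying_height_ft - terrain_elevation_ft) / focal_length_ft
    print(f"Method 2 scale: 1:{scale_2:,.0f}")        # roughly 1:38,500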


Area & Perimeter Measurement 
For this part of the lab, I utilized the 'Measure Perimeters and Areas' digitizing tool in Erdas Imagine to calculate the area and perimeter of a lagoon in a given aerial photograph (Figure 2).  

Figure 2: calculation of lagoon area and perimeter


Relief Displacement Calculation
Figure 3: relief displacement equation
Relief displacement occurs when objects and features on an aerial photograph are displaced from their true planimetric locations (Figure 4).  This distortion can be corrected by applying the relief displacement equation (Figure 3).  During this portion of the lab, I used given values to apply the relief displacement equation.   
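
The relief displacement equation is typically written d = (h × r) / H, where h is the object height above the local datum, r is the radial distance from the principal point to the top of the object on the photo, and H is the flying height above the local datum.  A minimal sketch with made-up values (not the ones given in the lab):

    # Relief displacement: d = (h * r) / H
    object_height_ft = 300.0       # h, hypothetical
    radial_distance_in = 4.5       # r, hypothetical, measured on the photo
    flying_height_ft = 4000.0      # H, hypothetical

    displacement_in = (object_height_ft * radial_distance_in) / flying_height_ft
    print(f"Relief displacement: {displacement_in:.3f} in radially outward")
    # -> 0.338 in; correcting the photo shifts the feature this far back
    #    toward its true planimetric position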

Figure 4: diagram depicting effect of relief displacement
Stereoscopy
The aim of this part of the lab was to generate two 3D anaglyph images, first using a digital elevation model (DEM) and then using a digital surface model (DSM).  The resulting anaglyph images were then analyzed using polaroid glasses.  

Anaglyph Creation with DEM
For the first anaglyph image, I used an image of Eau Claire at 1 meter spatial resolution and a DEM of the city at 10 meter spatial resolution.  Then, using the Anaglyph function under the Terrain tab in Erdas Imagine, I specified parameters (vertical exaggeration = 1, all other aspects default) in the Anaglyph Generation window and ran the model. This produced the anaglyph shown in the Results section (Figure 12).  


Anaglyph Creation with DSM
For the second anaglyph image, I used an image of Eau Claire with 1 meter spatial resolution and a LiDAR-derived DSM of the city at 2 meter spatial resolution.  Using the Anaglyph function under Terrain in Erdas Imagine once more, I specified the same parameters as I had for the first anaglyph.  After running the model, a second anaglyph was produced, shown in the Results section (Figure 13).    


Orthorectification
This part of the lab served as an introduction to the Erdas Imagine Leica Photogrammetry Suite (LPS), which is used in digital photogrammetry for triangulation, orthorectification of images, and extraction of digital surface and elevation models, among other things.  As such, we used LPS to orthorectify images and create a planimetrically true orthoimage.
Two orthorectified images were used as sources for ground control measurements, namely a SPOT image and an orthorectified aerial photo.  To carry out the orthorectification process, I used the following workflow:  

  • Create new project
  • Select horizontal reference source: SPOT image
  • Collect GCPs (Figure 5, 6)
  • Add second image to block file: aerial photo
  • Collect GCPs in second image
  • Perform automatic tie point collection (Figure 7, 8)
  • Triangulate the images (Figure 9) 
  • Orthorectify the images (Figure 10, 11)
  • Orthoimage analysis 

Figure 5:  GCP point collection

Figure 6: Point Measurement Box displaying GCPs collected from first image

Figure 7: Autotie point collection and summary

Figure 8: Point Measurement Box displaying GCPs (triangles) and tie points (squares)

Figure 9: Triangulation Summary

Figure 10: Orthoimage creation
Figure 11: close up of boundary between two orthoimages


Results

The results of my methods are displayed below.  

Anaglyph Creation
When viewed with polaroid glasses, the features on both anaglyphs appear three-dimensional; however, the second anaglyph image displays this effect more strongly (Figures 12 and 13).  This could be because the first anaglyph was created using a DEM, which represents only the bare earth, while the second was created using a DSM, which includes surface details such as vegetation and buildings.  Additionally, the differences between the two images may stem from the difference in spatial resolution: the DEM used for the first anaglyph was at 10 m, while the DSM used for the second was at 2 m. 


Figure 12: first anaglyph created with DEM

Figure 13: second anaglyph created with DSM

Orthorectification
The orthoimages produced from my methods, when laid over each other, display a high degree of spatial accuracy in the overlap zone (Figure 14).  

Figure 14: final orthoimages overlaid

Sources

Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009. 

Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010. 

Lidar-derived surface model (DSM) for sections of Eau Claire and Chippewa are from Eau Claire County and Chippewa County governments respectively. 

National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009. 

National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005. 

Scale calculation image is from Humboldt State University Geospatial Curriculum, 2014. 

Spot satellite images are from Erdas Imagine, 2009. 

Wednesday, April 20, 2016

Lab 6: Geometric Correction

Goals

The purpose of this lab was to provide an introduction to an important image preprocessing exercise, namely geometric correction.  Geometric correction is performed on satellite images as a part of preprocessing activities prior to the extraction of biophysical and sociocultural information from the images.  There are two major types of geometric correction, image-to-map rectification and image-to-image registration, both of which were explored in this lab.  


Methods

Image-to-Map Rectification: 
During the first part of this lab, I geometrically corrected an image using image-to-map rectification in Erdas Imagine 2015.  To do this, I first opened both the image to be rectified and the reference image in Erdas Imagine.  Then, I navigated to raster processing tools for multispectral imagery and used the Control Points function to begin the geometric correction process.  After this, I set the geometric model to Polynomial.  For this exercise, I ran a 1st order polynomial transformation.  

Then, I began the process of adding Ground Control Points (GCPs) to both of the images using the Create GCP tool. The number of required GCPs varies depending on the extent of geometric distortion present in an image; moderate distortion requires only an affine (linear) transformation and fewer GCPs, while more serious geometric distortion requires a higher order polynomial transformation and more GCPs (Figure 1).  Since this exercise involved a 1st order polynomial transformation, a minimum of 3 pairs of GCPs was necessary.  For this lab, however, I added four pairs of GCPs.  After adding the third pair of GCPs, the model solution changed from reading "model has no solution" to "model solution is current" because the necessary number of GCPs had been reached.  When adding points, I tried to place them in distinctive locations that could be easily found on both the image to be rectified and the reference map.  I also tried to disperse my GCPs evenly throughout the images.  

After placing all four pairs of GCPs, I evaluated their accuracy by examining the Root Mean Square (RMS) error.  The total RMS error is indicated in the lower right-hand corner of the Multipoint Correction window.  The ideal total RMS error is 0.5 or below; for this lab, however, mine only needed to be below 2.0.  To achieve this, I made small adjustments to my GCPs' locations until the error dipped below 2.0 (Figure 2).  Once my total RMS error had been reduced, I performed the geometric correction using the Nearest Neighbor resampling method by clicking the Display Resample Image Dialog button.  This created a rectified output image (Figure 4).
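
For reference, the minimum number of GCPs for an order-t polynomial transformation is (t + 1)(t + 2) / 2, which gives the 3 pairs used here and the 10 pairs used for the 3rd order transformation below.  The sketch that follows fits a 1st order (affine) transformation to hypothetical GCP pairs with least squares and computes an RMS error from the residuals; note that Erdas reports RMS error in pixels, while this toy example reports it in reference-map units, and none of the coordinates are the lab's actual GCPs.

    import numpy as np

    # Minimum GCPs for an order-t polynomial: (t + 1)(t + 2) / 2
    min_gcps = lambda t: (t + 1) * (t + 2) // 2
    print(min_gcps(1), min_gcps(3))     # -> 3 10

    # Hypothetical GCP pairs: (column, row) on the image to be rectified
    # and the corresponding reference map (x, y) coordinates.
    src = np.array([[120.0, 80.0], [410.0, 95.0], [390.0, 360.0], [100.0, 340.0]])
    ref = np.array([[605200.0, 4979500.0], [613900.0, 4979050.0],
                    [613300.0, 4971100.0], [604600.0, 4971600.0]])

    # 1st order (affine) fit by least squares: ref ~ [1, col, row] * coeffs
    design = np.column_stack([np.ones(len(src)), src])
    coeffs, *_ = np.linalg.lstsq(design, ref, rcond=None)
    predicted = design @ coeffs

    # Residual distance for each GCP, then the total RMS error.
    residuals = np.linalg.norm(predicted - ref, axis=1)
    total_rmse = np.sqrt(np.mean(residuals ** 2))
    print(residuals.round(2), round(total_rmse, 2))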



Figure 1: table indicating GCP requirement for different order polynomials


Figure 2: image-to-map registration geometric correction process in Erdas Imagine;
image to be rectified is on the left and reference map is on the right; total RMS error = 0.7275


Image-to-Image Registration: 

During this part of the lab, I geometrically corrected an image using image-to-image registration in Erdas Imagine 2015.  This involved essentially the same process outlined above for image-to-map rectification, except this time I used a previously rectified image as the reference instead of a map (Figure 3).  For this exercise, I performed a 3rd order polynomial transformation, so a minimum of 10 GCP pairs was required, though I added 12.  After adding all 12 GCP pairs, I once again evaluated my geometric accuracy by assessing the total RMS error.  Upon achieving a total RMS error under 1.0, I moved on to the geometric correction process.  This time, I used the Bilinear Interpolation resampling method to geometrically correct the image by clicking the Display Resample Image Dialog button.  This produced a rectified output image (Figure 5).  

Figure 3: image-to-image registration geometric correction process in Erdas Imagine; 
image to be rectified is on the left and reference image is on the right


Results

The results of my methods are displayed below. 

Image-to-Map Rectification: 


Figure 4: unrectified image on left; rectified image on right


Image-to-Image Registration: 

Figure 5: unrectified image on right; rectified image on left


Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey. 

Digital raster graphic (DRG) is from Illinois Geospatial Data Clearinghouse. 

Tuesday, April 12, 2016

Lab 5: LiDAR

Goals

LiDAR (Light Detection and Ranging) is one of the most rapidly expanding areas of remote sensing, and the skills needed to work with LiDAR data are becoming increasingly marketable.  In light of this, the purpose of this lab was to gain a basic understanding of LiDAR data structure and processing.  This involved working with LiDAR point clouds in LAS file format to carry out a number of exercises, including:  

  • Processing and retrieval of various surface & terrain models
  • Processing and creation of intensity image and other derivative products from point cloud

Methods

All exercises in this lab were completed using Erdas Imagine 2015 and ArcMap 10.3.1.

Part 1: Point Cloud Visualization in Erdas Imagine
In this part of the lab, we displayed all of the provided LAS files for the Eau Claire region in Erdas Imagine (Figure 1).  Had there been any overlapping points at the boundaries of tiles, they would have been removed at this point.  After viewing all of the LAS files together, I also examined the relevant metadata.  We completed all further analysis and manipulation of the data in ArcGIS because it has easier workflows in comparison with the point cloud tools in Erdas Imagine.  

Figure 1: LiDAR point cloud of Eau Claire County visualized in Erdas Imagine


Part 2: Generation of LAS Dataset & Exploration with ArcGIS
After opening the tile index and LAS files in ArcGIS, I completed the following: 

  • Creation of a LAS dataset
  • Exploration of the LAS dataset properties
  • Visualization of the LAS dataset as point cloud in 2D and 3D

After creating the LAS dataset and assigning it an appropriate projection, I began to explore different ways of displaying the data.  The LAS Dataset Toolbar in ArcMap provides many different functions to aid in this, including viewing the data by Elevation, Aspect, Slope and Contour as well as applying a number of filters (Figure 2).  There is also the option of viewing the data with a 3D profile view, which can be very useful (Figure 3).
Figure 2: collage displaying LiDAR data viewed by Contour, Aspect, and Elevation respectively (left to right)

Figure 3: 3D profile view of point cloud data in ArcMap


Part 3: Generation of LiDAR Derivative Products
While LAS point cloud data is sufficient for visualization purposes, most applications require the creation of derivative products such as digital terrain models (DTM) and digital surface models (DSM).  In this part of the lab, I created four such derivative products:

  • Digital Surface Model (DSM) with first return
  • Digital Terrain Model (DTM)
  • Hillshade of DSM
  • Hillshade of DTM

First, to determine the spatial resolution at which the derivative products should be produced, I examined the LAS dataset properties and estimated the average nominal pulse spacing (NPS) at which the point clouds were collected.  Then, I set the points to be displayed by elevation and set the filter to First Returns.  After this, I used the 'LAS to Raster' tool with the following parameters: Interpolation = Binning, Cell Assignment Type = Maximum, Void Fill Method = Natural_Neighbor, Sampling Type = Cell Size, Sampling Value = 6.56168 (approx. 2 m) to create a DSM (Figure 4).  Then, I enhanced my DSM by creating a hillshade of this first derivative product.  This process involved using the 'Hillshade' tool, found under 3D Analyst Tools, Raster Surface (Figure 5).  
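
To illustrate what the Binning interpolation with a Maximum cell assignment is doing, here is a small numpy sketch; the point coordinates and cell size are hypothetical, and the real tool also handles projection and the void-fill step.

    import numpy as np

    # Hypothetical first-return points: x, y in map units and z elevation.
    pts = np.random.rand(10000, 3) * [1000.0, 1000.0, 50.0]
    cell_size = 2.0     # analogous to the ~6.56 ft (2 m) sampling value

    cols = (pts[:, 0] // cell_size).astype(int)
    rows = (pts[:, 1] // cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)

    # Binning with "Maximum" cell assignment: each cell keeps the highest
    # first-return elevation that falls inside it (rooftops, tree canopy).
    for r, c, z in zip(rows, cols, pts[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    # Cells that receive no points stay NaN; those voids are what the
    # Natural_Neighbor void fill method interpolates in the real tool.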

Next, using the same 'LAS to Raster' tool, I derived a DTM from the LiDAR point cloud, setting the points to be displayed by elevation and the filter to Ground.  In the tool, I used the following parameters: Interpolation = Binning, Cell Assignment Type = Minimum, Void Fill Method = Natural_Neighbor, Sampling Type = Cell Size, Sampling Value = 6.56168 (approx. 2 m).  This DTM is a bare-earth raster (Figure 6). Then, to create an image that shows more detail of the terrain surface, I created a hillshade of my DTM using the 'Hillshade' tool (Figure 7).  
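
For a sense of what the Hillshade tool computes, the sketch below shades an elevation array using a sun azimuth and altitude (315 degrees and 45 degrees are common defaults).  This is a simplified version of the standard hillshade formula, and the input array is hypothetical rather than the lab's DTM.

    import numpy as np

    def hillshade(dem, cellsize=2.0, azimuth_deg=315.0, altitude_deg=45.0):
        """Simple hillshade: illumination from the given sun azimuth/altitude."""
        az = np.radians(360.0 - azimuth_deg + 90.0)   # to math convention
        alt = np.radians(altitude_deg)
        dzdy, dzdx = np.gradient(dem, cellsize)
        slope = np.arctan(np.hypot(dzdx, dzdy))
        aspect = np.arctan2(dzdy, -dzdx)
        shaded = (np.sin(alt) * np.cos(slope) +
                  np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shaded, 0.0, 1.0) * 255

    # Hypothetical elevation surface standing in for the LAS-derived DTM.
    dtm = np.random.rand(200, 200) * 30.0
    dtm_hillshade = hillshade(dtm)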

Finally, I created a LiDAR intensity image using a procedure similar to the one outlined above.  First, I set the LAS dataset to be displayed by elevation, with the filter set to First Returns.  I used First Returns since intensity is always captured by the first return echoes.  Then, I used the 'LAS to Raster' tool again with the following parameters: Value Field = Intensity, Interpolation = Binning, Cell Assignment Type = Average, Void Fill Method = Natural_Neighbor, Sampling Type = Cell Size, Sampling Value = 6.56168 (approx. 2 m).  The resulting image appeared rather dark when displayed in ArcMap, so I viewed it in Erdas Imagine instead (Figure 8).  


Results

The results of my methods are displayed below. 

DSM & DSM Hillshade
The digital surface model (DSM) below (Figure 4) was created using LiDAR first returns.  It displays a generalized image of Eau Claire, with larger features discernible.  I also created a grayscale DSM Hillshade (Figure 5), which enhances the landscape by adding shaded relief to the features.  
  
Figure 4: DSM viewed in ArcMap

Figure 5: DSM Hillshade viewed in ArcMap

DTM & DTM Hillshade
The digital terrain model (DTM) below (Figure 6) displays very generalized topographic features of Eau Claire.  In contrast to the DSM above, this image does not display objects on the ground surface such as buildings or plants.  I also created a grayscale DTM Hillshade (Figure 7) which enhances the DTM image with relief shading. 

Figure 6: DTM viewed in ArcMap

Figure 7: DTM Hillshade viewed in ArcMap


Intensity Image
The intensity image below (Figure 8) depicts the intensity, or return strength, of the first laser pulse echo for each point in my Eau Claire LiDAR data.  Displaying LiDAR data by intensity values can aid in feature identification and in LiDAR point classification.  

Figure 8: Intensity Image viewed in Erdas Imagine 2015

Sources

Lidar point cloud and Tile Index are from Eau Claire County, 2013. 

Eau Claire County Shapefile is from Mastering ArcGIS 6th Edition data by Maribeth Price, 2014.

Friday, April 1, 2016

Lab 4: Miscellaneous Image Functions


Goal and Background

The purpose of this lab was to provide an introduction to a number of image functions and exercises including exploration into the following: 
  • image preprocessing
  • image enhancement for visual interpretation
  • delineation of study area from larger satellite image
  • image mosaicking
  • graphical model creation

Methods

All exercises in this lab were carried out using Erdas Imagine 2015, with images and files provided by Dr. Cyril Wilson. 

Part 1: Image Subsetting
The first part of this lab addressed two ways to subset images, a process very helpful in image interpretation.  Oftentimes, a desired area of interest (AOI) will be smaller than the image at hand.  In these cases, it is advantageous to employ image subsetting to cut away the areas that fall outside of the area of interest.  

Image Subsetting Methods:

I. Inquire Box: The first subsetting method I tackled was utilizing an Inquire Box in Erdas Imagine to indicate my desired AOI and subset the image (Figure 1).  One disadvantage to this method is that the AOI produced will always be a rectangle, although your actual AOI may have a more irregular shape.  


Figure 1: image subsetting using the Inquire Box


II. Shapefile: The second subsetting method I tried out in Erdas Imagine involved using a shapefile for AOI delineation (Figure 2).  After overlaying the shapefile atop the original image, I used the Subset & Chip tool to complete the subsetting.  With this method, the image produced reflects the extent of the AOI regardless of its shape, making this method more suitable for irregular AOI areas than the first method.  


Figure 2: image subsetting using shapefile of AOI


Part 2: Image Fusion
Sometimes an image does not have the spatial resolution one desires.  In these cases, pansharpening, or the use of a panchromatic band to enhance the spatial resolution of a multispectral image, can be helpful.  In this lab, I loaded the original image and the panchromatic image into Erdas Imagine, noting that the images had spatial resolutions of 30 m and 15 m respectively.  I then used the Resolution Merge tool to carry out the pansharpening process.  The resulting pansharpened image clearly possessed a higher spatial resolution than the original, with features appearing much more defined (Figure 10).
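
As a rough illustration of one common pansharpening approach, the sketch below applies a Brovey transform, which rescales each multispectral band by the ratio of the panchromatic band to the band sum.  I believe the Brovey transform is one of the options offered by Resolution Merge, but I have not confirmed it was the method used here; the bands are hypothetical and assumed to already be resampled to the panchromatic grid.

    import numpy as np

    def brovey_pansharpen(red, green, blue, pan):
        """Brovey transform: scale each band by pan / (sum of the bands)."""
        red, green, blue, pan = (b.astype(float) for b in (red, green, blue, pan))
        band_sum = red + green + blue
        band_sum[band_sum == 0] = 1.0          # avoid division by zero
        ratio = pan / band_sum
        return red * ratio, green * ratio, blue * ratio

    # Hypothetical bands already resampled to match the panchromatic grid.
    shape = (200, 200)
    r, g, b, p = (np.random.randint(1, 255, size=shape) for _ in range(4))
    r_sharp, g_sharp, b_sharp = brovey_pansharpen(r, g, b, p)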


Part 3: Radiometric Enhancement
When collecting images, one cannot guarantee perfect atmospheric conditions.  Sometimes, occurrences such as haze can negatively impact image quality.  Luckily, radiometric enhancement techniques can help combat this and increase image quality.  In this lab, I used the Haze Reduction tool to reduce the effects of haze upon an image, making it easier to complete image interpretation (Figure 11).  


Part 4: Linking Image Viewer to Google Earth
Recent versions of Erdas Imagine include a handy function called Google Earth Linking.  This allows the user to sync images loaded into Erdas Imagine with imagery in Google Earth, providing simultaneous zooming in and out of the same areas (Figure 3).  In this way, Google Earth imagery serves as a form of selective image interpretation key. 


Figure 3: Google Earth Linking displaying image in Erdas and in Google Earth

Part 5: Image Resampling
Image resampling refers to the process of changing the size of pixels, whether it be increasing or decreasing.  There are multiple methods of carrying out this process including Nearest Neighbor, Bilinear Interpolation, Cubic Convolution, and Bicubic Spline.  Each of these methods has various advantages and disadvantages and is suited to different purposes.  

In this lab, I first used Nearest Neighbor to resample an image from 30 m to 15 m (output image: Figure 12).  While this method did work, it produced an image that was not much higher in quality than the original.  When zoomed in, a pixelated "stairstepped" effect was visible around curves and diagonals.  

Next, I resampled the same image using Bilinear Interpolation (output image: Figure 13).  In comparison to the image produced using Nearest Neighbor, this output was much smoother.  When zoomed in, curves and diagonals did not display a "stairstepped" effect (Figure 4).  
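
The difference comes from how each method assigns output pixel values: nearest neighbor copies the single closest input pixel, while bilinear interpolation takes a distance-weighted average of the four surrounding input pixels.  The sketch below implements both for a 2-D array; it is a simplified stand-in for Erdas Imagine's resampling, run on a hypothetical band.

    import numpy as np

    def resample(image, factor, method="bilinear"):
        """Resample a 2-D array by a scale factor (nearest or bilinear)."""
        rows, cols = image.shape
        out_rows, out_cols = int(rows * factor), int(cols * factor)
        # Location of each output pixel in input-pixel coordinates.
        r = np.linspace(0, rows - 1, out_rows)
        c = np.linspace(0, cols - 1, out_cols)
        rr, cc = np.meshgrid(r, c, indexing="ij")
        if method == "nearest":
            return image[np.rint(rr).astype(int), np.rint(cc).astype(int)]
        # Bilinear: weighted average of the four surrounding input pixels.
        r0, c0 = np.floor(rr).astype(int), np.floor(cc).astype(int)
        r1, c1 = np.minimum(r0 + 1, rows - 1), np.minimum(c0 + 1, cols - 1)
        fr, fc = rr - r0, cc - c0
        top = image[r0, c0] * (1 - fc) + image[r0, c1] * fc
        bottom = image[r1, c0] * (1 - fc) + image[r1, c1] * fc
        return top * (1 - fr) + bottom * fr

    # Doubling the resolution (e.g. 30 m -> 15 m) of a hypothetical band.
    band = np.random.rand(100, 100) * 255
    smooth = resample(band, 2, "bilinear")   # smoother edges, no stairstepping
    blocky = resample(band, 2, "nearest")    # keeps the blocky appearance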

Figure 4: closer look at Bilinear Interpolation smoother resampling around curves


Part 6: Image Mosaicking
Image mosaicking is the process of stitching multiple satellite images together to form one, larger image.  This process is helpful when the desired AOI is larger than the extent of just one satellite image, or when the AOI falls along the boundary between two satellite images.  There are two different ways of mosaicking an image in Erdas Imagine, Mosaic Express and MosaicPro.  

In this lab, I first mosaicked two images together using Mosaic Express.  This process was quick and easy; however, the resulting image was not smoothly stitched (Figure 14).  The color transition at the boundary between the two images was quite jarring, and the image as a whole was not cohesive.  

After this, I used MosaicPro to stitch the same two satellite images together.  This method required considerably more user input in regards to color correction, overlay functions, etc.  The resulting image was much more cohesive than the previous one, and had a smooth color transition at the boundary between the two images (Figure 15).  

Part 7: Image Differencing

Image differencing, also known as binary change detection, involves analyzing pixel brightness to determine changes in landscapes over time.  In this process, the pixel brightness values of an image taken at one date are subtracted from those of an image taken at an earlier or later date.  The two images must have the same radiometric characteristics and identical spatial, spectral, and temporal resolution.  The results of image differencing are displayed as a Gaussian histogram.

In this exercise, the study area was Eau Claire County and four of its surrounding counties.  Two images of this area were used, one from 1991 and the other from 2011.  To carry out the image differencing process, I utilized Model Maker in Erdas Imagine (Figure 5).  Then, I analyzed the resulting Gaussian histogram using a graphic provided by my instructor (Figures 6 and 7).  Finally, I created a map in ArcGIS depicting areas of change within the AOI (Figure 16). 
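
As a minimal sketch of the differencing and thresholding idea, the snippet below subtracts the two dates, adds a constant so the result stays positive, and flags pixels beyond a set distance from the mean as change.  The constant and the 1.5 standard deviation threshold are common illustrative choices, not necessarily the exact values used in the Model Maker equation (Figure 5), and the arrays are hypothetical.

    import numpy as np

    # Hypothetical brightness values of the same band on the two dates.
    band_1991 = np.random.randint(0, 255, size=(500, 500)).astype(float)
    band_2011 = np.random.randint(0, 255, size=(500, 500)).astype(float)

    # Difference image; the added constant keeps values positive and centers
    # the "no change" peak of the Gaussian histogram.
    diff = band_2011 - band_1991 + 127.0

    # Pixels far from the mean (here beyond +/- 1.5 standard deviations)
    # are classified as change; pixels near the peak are no change.
    mean, std = diff.mean(), diff.std()
    change_mask = (diff < mean - 1.5 * std) | (diff > mean + 1.5 * std)
    print(f"Changed pixels: {change_mask.sum()} of {change_mask.size}")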


Figure 5: Model Maker equation used to carry out Image Differencing

Figure 6: Gaussian histogram; result of image differencing


Figure 7: graphic explaining Gaussian histogram analysis


Results

The results of my methods are displayed below. 

Image Subset:
Figure 8: image subset produced with Inquire Box method

Figure 9: image subset produced by AOI delineation with shapefile

Image Fusing:
Figure 10: original image (left) vs. pan-sharpened image (right)

Haze Reduction:

Figure 11: original image (left) and haze reduced image (right)


Image Resampling:


Figure 12: resampling using Nearest Neighbor; original image (left), resampled image (right)



Figure 13: resampling using Bilinear Interpolation; original image (left), resampled image (right)

Image Mosaicking:
Figure 14: result of Mosaic Express



Figure 15: result of MosaicPro


Image Differencing:


Figure 16: map depicting areas of landscape change in the Chippewa Valley 1991 - 2011

Conclusion

This lab exposed me to many image functions that are essential for image interpretation.  How an image is manipulated depends on the goals of the project at hand and the properties of the image itself.  Image functions can be used in combination with one another to achieve the best possible image for interpretation.  No image is completely perfect, but with the aid of these processes, image interpretation can be improved.   


Sources

Satellite images are from Earth Resources Observation and Science Center, United States Geological Survey.

Shapefile is from Mastering ArcGIS 6th Edition dataset by Maribeth Price, McGraw-Hill, 2014.