This study introduces change detection based on object/neighbourhood correlation image analysis and image segmentation techniques. The correlation image analysis is based on the fact that pairs of brightness values from the same geographic area (e.g. an object) between bi‐temporal image datasets tend to be highly correlated when little change occurs, and uncorrelated when change occurs. Five different change detection methods were investigated to determine how new contextual features could improve change classification results, and whether an object‐based approach could improve change classification when compared with per‐pixel analysis. The five methods examined include (1) object‐based change classification incorporating object correlation images (OCIs), (2) object‐based change classification incorporating neighbourhood correlation images (NCIs), (3) object‐based change classification without contextual features, (4) per‐pixel change classification incorporating NCIs, and (5) traditional per‐pixel change classification using only bi‐temporal image data. Two different classification algorithms (i.e. a machine‐learning decision tree and nearest‐neighbour) were also investigated, and the OCI and NCI variables were compared. Object‐based change classifications incorporating the OCIs or the NCIs produced more accurate change detection classes (Kappa approximately 90%) than the other change detection results (Kappa ranged from 80 to 85%).
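The per-pixel test behind an NCI can be sketched as a Pearson correlation computed over a small moving window on two co-registered, bi-temporal single-band images. The function below is a minimal illustration of that idea, not the authors' implementation:

```python
import numpy as np

def neighbourhood_correlation(img1, img2, radius=1):
    """Per-pixel Pearson correlation between two co-registered,
    bi-temporal single-band images, computed over a square
    (2*radius + 1) moving window."""
    h, w = img1.shape
    nci = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            a = img1[i0:i1, j0:j1].ravel().astype(float)
            b = img2[i0:i1, j0:j1].ravel().astype(float)
            sa, sb = a.std(), b.std()
            if sa == 0.0 or sb == 0.0:
                nci[i, j] = 0.0  # flat window: correlation undefined
            else:
                nci[i, j] = ((a - a.mean()) * (b - b.mean())).mean() / (sa * sb)
    return nci
```

Windows covering unchanged ground yield values near +1, while windows where the land cover changed between dates fall toward zero, which is what makes the NCI a useful contextual feature for change classification.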
The effects of land cover and surface slope on lidar-derived elevation data were examined for a watershed in the piedmont of North Carolina. Lidar data were collected over the study area in a winter (leaf-off) overflight. Survey-grade elevation points (1,225) for six different land cover classes were used as reference points. Root mean squared error (RMSE) for land cover classes ranged from 14.5 cm to 36.1 cm. Land cover with taller canopy vegetation exhibited the largest errors. The largest mean error (36.1 cm RMSE) was in the scrub-shrub cover class. Over the small slope range (0° to 10°) in this study area, there was little evidence for an increase in elevation error with increased slopes. However, for low grass land cover, elevation errors did increase in a consistent manner with increasing slope. Slope errors increased with increasing surface slope, under-predicting true slope on surface slopes of 2° or more. On average, the lidar-derived elevation under-predicted true elevation regardless of land cover category. The under-prediction was significant, ranging up to 23.6 cm under pine land cover.
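The error statistics reported above (mean signed error and RMSE per land cover class) can be reproduced with a few lines; this is a generic sketch, not tied to the study's data:

```python
import numpy as np

def elevation_errors(lidar_z, ref_z):
    """Mean signed error and RMSE of lidar-derived elevations
    against survey-grade reference elevations; a negative mean
    error indicates that the lidar under-predicts elevation."""
    err = np.asarray(lidar_z, float) - np.asarray(ref_z, float)
    return err.mean(), np.sqrt((err ** 2).mean())
```

Computing the pair separately per cover class (and per slope bin) is what allows the kind of comparison the abstract describes.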
This paper reports the results of a quantitative comparison of empirical and model based atmospheric correction techniques for the radiometric calibration of a Digital Airborne Imaging Spectrometer (DAIS) 3715 hyperspectral image dataset. Empirical line calibration (ELC) and the radiative transfer based model Atmospheric CORrection Now (ACORN) were applied to transform the hyperspectral dataset from values of radiance to scaled percent reflectance. An additional spectral polishing technique called single spectrum enhancement (SSE) was implemented a posteriori to refine the transformation results. To evaluate the accuracy of the radiometric calibration techniques, spectra extracted from the processed images were analytically compared to spectral measurements collected in situ with a handheld spectroradiometer at 46 sample point locations. Average RMSE values were as follows: ELC without SSE = 0.1415, ACORN without SSE = 0.0645, ELC with SSE = 0.0345, and ACORN with SSE = 0.0314. Based on the results of this analysis, spectral polishing through the use of SSE appears to introduce the greatest improvement in the removal of deleterious atmospheric effects when compared to in situ data, regardless of the choice of the model (i.e. ACORN or ELC).
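Empirical line calibration fits a per-band linear gain and offset between image radiance and ground-measured reflectance at calibration targets, then applies that line to every pixel in the band. The sketch below illustrates the idea with hypothetical inputs; it is not the ELC or ACORN software:

```python
import numpy as np

def empirical_line(target_radiance, target_reflectance, band):
    """Empirical line calibration for a single band: fit
    reflectance = gain * radiance + offset from calibration
    targets, then apply the fit to the whole band."""
    gain, offset = np.polyfit(target_radiance, target_reflectance, 1)
    return gain * np.asarray(band, float) + offset
```

In practice one dark and one bright target per band are the minimum; more targets make the least-squares fit more robust.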
Increasingly rapid remote sensing‐assisted data‐to‐decision pathways in forest monitoring and other applications remain fragmented with respect to the treatment of spatial scale. While definitions, terminology, and fundamental scale questions are fairly stable, common practices for dealing with the effects of spatial scale in applications such as forest monitoring have remained stagnant for decades. Recent studies are searching for ways to manage scale efficiently so that qualities of remote sensing‐derived information such as reliability and economy may be maximized. Research and development in emerging technologies such as artificial intelligence‐assisted spatial data processing, high‐throughput computing, automated geometric correction, and object‐oriented image analysis can be positioned to facilitate significantly improved synergy among forest remote sensing systems worldwide. The potential impact of research in automated spatial scale management suggests that it should be computationally integrated with forest remote sensing applications.
Computational trends toward shared services suggest the need to automatically manage spatial scale for overlapping applications. In three experiments using high-spatial-resolution optical imagery and LIDAR data to extract impervious, forest, and herbaceous classes, this study optimized C5.0 rule sets according to: (1) spatial scale within an image tile; (2) spatial scale within spectral clusters; and (3) stability of predicted accuracies based on cross validation. Alteration of the image segmentation scale parameter affected accuracy as did synergy with LIDAR derivatives. Within the tile examined, forest and herbaceous areas benefited more from optical and LIDAR synergy than did impervious surfaces.
An outbreak of red oak borer, an insect infesting red oak trees, prompted the need for a biomass model of closed-canopy oak-hickory forests in the rugged terrain of the Arkansas Ozarks. Multiple height percentiles were calculated from small-footprint aerial LIDAR data, and image segmentation was employed to partition the LIDAR-derived surface into structurally homogeneous modeling units. In situ reference data were incorporated into a machine-learning algorithm that produced a regression-tree model for predicting aboveground woody biomass per segment. Model results on training data appear adequate for prediction purposes (mean error 2.38 kg/m², R² = 0.83). Model performance on withheld test data reveals slightly lower accuracy (2.77 kg/m², R² = 0.72).
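A regression tree predicts biomass by splitting on LIDAR-derived predictors and assigning the mean of the training responses at each leaf. A one-split "stump" conveys the core mechanism; the study used a full machine-learning regression tree on many height percentiles, not this toy:

```python
import numpy as np

def fit_stump(x, y):
    """One-split regression tree ('stump') on a single
    LIDAR-derived predictor, e.g. a canopy height percentile:
    pick the threshold minimizing squared error and predict
    the mean response on each side."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = None
    for t in np.unique(x)[:-1]:  # candidate thresholds
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q: lo if q <= t else hi
```

A full tree applies this split search recursively to each side until a stopping criterion is met.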
This study developed a method to rapidly assess, during early mitigation planning and recovery efforts, both the flood extent and the water depth of Hurricane Katrina's storm surge. Over a hundred high water marks were collected within the Mississippi Gulf Coast study area utilizing survey equipment and GPS measurements. In order to account for the many inlets and estuaries along the coast, an interpolation approach was tested that allowed the interpolation algorithm to calculate distance around such barrier features rather than directly through them, thus more appropriately modeling the physical process of water distribution during the storm surge event. This technique was implemented operationally using a "cost surface" algorithm, available in most GIS software packages. A binary impedance surface (travel is either possible or not) was utilized as input to the cost surface algorithm. The impedance surface was generated using modified hydrologic operations on an existing lidar-derived DEM. The technique is described in detail. The water surface generated from these points more closely matched the reference surface than comparable surfaces generated using traditional interpolation techniques. The reference datasets for this project consisted of FEMA flood inundation and water surface maps released a few months after the datasets created for this project.
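The cost-surface idea, distances forced to route around barrier cells rather than through them, can be sketched as a breadth-first search over a binary impedance grid. This is illustrative only, not the algorithm of any particular GIS package:

```python
from collections import deque
import numpy as np

def cost_distance(impedance, start):
    """Cost distance from a start cell over a binary impedance
    grid (1 = traversable, 0 = barrier): travel must route
    around barriers, as in a GIS cost-surface operation
    (4-connected neighbours, unit step cost)."""
    h, w = impedance.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    queue = deque([start])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w
                    and impedance[ni, nj] and np.isinf(dist[ni, nj])):
                dist[ni, nj] = dist[i, j] + 1.0
                queue.append((ni, nj))
    return dist
```

Feeding such around-barrier distances to an interpolator, instead of straight-line distances, is what lets the interpolated water surface respect inlets and estuaries.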
The imperviousness of land parcels was mapped and evaluated using high spatial resolution digitized color orthophotography and surface-cover height extracted from multiple-return lidar data. Maximum-likelihood classification, spectral clustering, and expert system approaches were used to extract the impervious information from the datasets. Classified pixels (or segments) were aggregated to parcels. The classification model based on the use of both the orthophotography and lidar-derived surface-cover height yielded impervious surface results for all parcels that were within 15 percent of reference data. The standard error for the rule-based per-pixel model was 7.15 percent with a maximum observed error of 18.94 percent. The maximum-likelihood per-pixel classification yielded a lower standard error of 6.62 percent with a maximum of 14.16 percent. The regression slope (i.e., 0.955) for the maximum-likelihood per-pixel model indicated a near perfect relationship between observed and predicted imperviousness. The additional effort of using a per-segment approach with a rule-based classification resulted in slightly better standard error (5.85 percent) and a near-perfect regression slope (1.016).
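Aggregating per-pixel impervious classifications to parcels, as described above, amounts to averaging a binary impervious mask over each parcel's pixels. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def parcel_imperviousness(parcel_ids, impervious_mask):
    """Percent impervious per parcel: average a binary
    per-pixel impervious classification over the pixels
    belonging to each parcel id."""
    ids = np.asarray(parcel_ids).ravel()
    mask = np.asarray(impervious_mask).ravel().astype(float)
    return {int(p): 100.0 * mask[ids == p].mean() for p in np.unique(ids)}
```

The resulting per-parcel percentages are what get regressed against reference imperviousness to produce the slopes and standard errors reported above.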
Humans produce large amounts of waste that must be processed or stored so that it does not contaminate the environment. When hazardous wastes are stored, waste site monitoring is typically conducted in situ, which can lead to a serious time lag between the onset of a problem and its detection. A Remote Sensing and GIS-assisted Spatial Decision Support System for Hazardous Waste Site Monitoring was developed to improve hazardous waste site management. The system was designed to be recursive, flexible, and integrative. It is recursive because the system is implemented iteratively until the risk assessment subsystem determines that an event is no longer a problem to the surrounding human population or to the environment. It is flexible in that it can be adapted to monitor a variety of hazardous waste sites. The system is integrative because it incorporates a number of different data types and sources (e.g., multispectral and lidar remote sensor data, numerous types of thematic information, and production rules), modules, and human expert knowledge of the hazardous waste sites. The system was developed for monitoring hazardous wastes at the Savannah River National Laboratory near Aiken, South Carolina.