Abstract
Airborne light detection and ranging (LiDAR) systems have been used for decades to gather information about forests, their canopies, and what lies beneath them. Recent advances in LiDAR sensor technology have enabled higher sampling rates, leading to increased point densities for discrete point clouds. However, vast portions of the forest sub-canopy remain either unsampled or occluded. We contend that waveform LiDAR, a structural modality that digitizes the intensity of the laser backscatter as a function of time (range), contains additional information that can be extracted using modern artificial intelligence (AI) and machine learning (ML) methods. In this study, we developed a geometrically, radiometrically, and structurally accurate 3D model of a 700 m × 500 m plot within the Harvard Forest to generate realistic waveform LiDAR data. The Harvard Forest scene was validated by comparing simulated remote sensing data to field data collected in 2019 and 2021. Simulated hyperspectral data produced realistic reflectance values across the entire spectrum for all tree species. When compared to hyperspectral data captured by the National Ecological Observatory Network’s (NEON) Airborne Observation Platform (AOP), the simulated data showed strong correlations across the spectrum, with an RMSE under 5.5%. To validate the structure of the scene’s canopy, leaf area index (LAI) values were generated from simulated ceptometer measurements across the scene and compared to real field data. The average LAI for the scene was within 6% of the measured values and well within one standard deviation. Simulated NEON Optech LiDAR point clouds were also compared to real data and produced highly realistic duplicates that accurately modeled point density and canopy penetration rates to within 1%.
Once the scene and simulated datasets were validated, we used the data to train a convolutional neural network (CNN) to classify previously unused portions of the waveform. We used a modified CNN, originally intended for classifying discrete point clouds, to classify real NEON waveform LiDAR data into five classes (background, leaf, bark, ground, and man-made objects). This process yielded mixed results: it failed to correctly classify ground and sub-canopy object voxels due to high variability within the limited training data set. However, the CNN produced accurate canopy models filled with leaf and bark voxels at four times the point density (PD) of discrete systems. The scene and processes developed during this research effort will help expand our knowledge of discrete LiDAR systems and will provide a foundation for future iterations of AI/ML efforts to unlock the true potential of waveform data.
Library of Congress Subject Headings
Forest canopies--Remote sensing; Optical radar; Cathode ray oscillographs
Publication Date
3-3-2025
Document Type
Dissertation
Student Type
Graduate
Degree Name
Imaging Science (Ph.D.)
Department, Program, or Center
Chester F. Carlson Center for Imaging Science
College
College of Science
Advisor
David Ross
Advisor/Committee Member
Jan Van Aardt
Advisor/Committee Member
Keith Krause
Recommended Citation
Wible, Robert J. I., "Enhanced 3D Sub-Canopy Mapping via Airborne Full-Waveform LiDAR" (2025). Thesis. Rochester Institute of Technology. Accessed from
https://repository.rit.edu/theses/12078
Campus
RIT – Main Campus
Plan Codes
IMGS-PHD