Description
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to the long-wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements, such as spectral polarimetric and active Light Detection and Ranging (LIDAR) models, have been incorporated into the software, providing an extremely powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps must be created and attributed manually. To shorten this process, we are initiating a research effort that aims to reduce the man-in-the-loop requirements for several aspects of synthetic hyperspectral scene construction. Through a fusion of 3D LIDAR data with passive imagery, we are working to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, this application of multi-modal imagery is expected to shorten the implementation time of many of the remaining tasks. This paper reports on the progress made thus far toward these objectives.
Date of creation, presentation, or exhibit
May 19, 2006
Document Type
Conference Paper
Department, Program, or Center
Chester F. Carlson Center for Imaging Science (COS)
Recommended Citation
S. R. Lach, S. D. Brown, and J. P. Kerekes, "Semi-automated DIRSIG scene modeling from 3D LIDAR and passive imaging sources," Proc. SPIE 6214, Laser Radar Technology and Applications XI, 62140I (19 May 2006); https://doi.org/10.1117/12.666096
Campus
RIT – Main Campus
Comments
Copyright 2006 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
This work has been supported in part by the NGA under University Research Initiative HM1582-05-1-2005, “Automated Imagery Analysis and Scene Modeling.”
Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works in February 2014.