Abstract
Transfer learning facilitates the training of a deep learning (DL) model with limited or no labeled data by initializing the network parameters from a similar model already trained on a different but related dataset or task. This dissertation examines two special cases of transfer learning for image classification tasks: cross-modal supervised learning and cross-domain unsupervised adaptation.

First, this dissertation applies cross-modal transfer learning to guide the training of a DL model on Synthetic Aperture Radar (SAR) images via knowledge distillation from a DL model trained on corresponding electro-optical (EO) images. This approach further explores class-balanced sampling strategies and multi-stage training procedures to address the severe class imbalance encountered in a real-world SAR image dataset.

When models trained in one (source) domain are deployed in a new (target) environment, they may suffer performance degradation due to the distribution shift between the source and target data. Domain adaptation (DA) aims to address this limitation by aligning the source-domain features with those extracted from the target domain. Drawing inspiration from continual learning, we refine the source-free continual unsupervised domain adaptation methods ConDA and UCL-GV, which are buffer-fed networks that adapt to continually incoming small batches of unlabeled target data. Our models outperform state-of-the-art (SOTA) continual DA models on both static and dynamic (gradually changing) target domains. We further introduce new synthetic aerial datasets under gradually degrading weather conditions, and propose techniques to improve the training stability of continual DA methods.

Recent tools for the commercialization of DL models have raised concerns about protecting proprietary DL technologies during end-user deployment. We explore black-box domain adaptation (BBDA) as a means to mitigate these concerns.
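The cross-modal guidance described above rests on knowledge distillation, where the SAR student is trained against the softened predictions of the EO teacher. As an illustration only, and not the dissertation's exact formulation, a minimal temperature-scaled distillation loss (in the style of Hinton et al.) can be sketched as:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Scaled by T^2 so gradient magnitudes stay comparable across
    temperatures. Hypothetical sketch: the dissertation's actual loss
    and training procedure may differ.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from EO teacher
    q = softmax(student_logits, temperature)  # SAR student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

A higher temperature flattens the teacher's distribution, exposing the relative similarities between classes that a hard one-hot label would discard; this is what lets the EO model transfer structure, not just labels, to the SAR model.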
We propose a curriculum-guided domain adaptation method, CABB, that splits the target data into clean and noisy subsets via pseudolabel distribution modeling, adapting first to the reliable, clean pseudolabels and only later to the noisy ones. Our method outperforms existing BBDA models by up to 9.3\% across several popular DA datasets, and is on par with white-box DA models. The object categories in the source and target domains may not fully overlap, and the target domain may contain samples from novel classes that are absent in the source domain. We introduce Unknown Sample Discovery (USD), a source-free open-set domain adaptation (SF-OSDA) method that also uses pseudolabel distribution modeling to separate known from unknown target samples. USD operates within a teacher-student framework, using co-training and temporal consistency between the teacher and student models to significantly reduce the error accumulation that results from imperfect known-unknown sample separation. Empirical results show that USD surpasses existing SF-OSDA methods by as much as $\sim$20\% in prediction accuracy.
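Both CABB and USD rely on modeling a per-sample score distribution to partition target samples (clean vs. noisy pseudolabels, or known vs. unknown classes). The abstract does not specify the exact model, so the sketch below uses a generic and commonly adopted choice: a two-component 1-D Gaussian mixture fit by EM over per-sample loss values, with the low-mean component taken as the clean/known set. All names here are hypothetical illustrations, not the dissertation's implementation.

```python
import math

def split_clean_noisy(losses, iters=50):
    """Fit a two-component 1-D Gaussian mixture to per-sample losses via EM
    and label samples dominated by the low-mean component as 'clean'.

    Generic sketch of pseudolabel distribution modeling; CABB/USD may use a
    different score, model, or threshold.
    """
    mu = [min(losses), max(losses)]   # initialize means at the extremes
    var = [1.0, 1.0]                  # component variances
    w = [0.5, 0.5]                    # mixing weights
    n = len(losses)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        resp = []
        for x in losses:
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            z = sum(dens) or 1e-12
            resp.append([d / z for d in dens])
        # M-step: re-estimate means, variances, and weights
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            mu[k] = sum(r[k] * x for r, x in zip(resp, losses)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, losses)) / nk + 1e-6
            w[k] = nk / n
    clean_k = 0 if mu[0] < mu[1] else 1
    clean = [i for i, r in enumerate(resp) if r[clean_k] >= 0.5]
    noisy = [i for i in range(n) if i not in set(clean)]
    return clean, noisy
```

In a curriculum setting, adaptation would start from the `clean` indices and gradually incorporate the `noisy` ones; in the open-set setting, the high-score component instead flags candidate unknown-class samples.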
Library of Congress Subject Headings
Transfer learning (Machine learning); Remote-sensing images--Classification; Deep learning (Machine learning)
Publication Date
4-2024
Document Type
Dissertation
Student Type
Graduate
Degree Name
Imaging Science (Ph.D.)
Department, Program, or Center
Chester F. Carlson Center for Imaging Science
College
College of Science
Advisor
Andreas Savakis
Advisor/Committee Member
Carl Salvaggio
Advisor/Committee Member
Jan van Aardt
Recommended Citation
Jahan, Chowdhury Sadman, "Transfer learning across domains and sensing modalities" (2024). Thesis. Rochester Institute of Technology. Accessed from
https://repository.rit.edu/theses/11696
Campus
RIT – Main Campus
Plan Codes
IMGS-PHD