Abstract

Deep Learning (DL) models have achieved great success in data-rich fields ranging from computer vision and natural language processing to digital arts and robotics. However, their effectiveness is challenged by many real-world limited-data problems (e.g., medicine, healthcare, and security intelligence) where data for model training is scarce. Unlike DL models, humans can use the prior knowledge stored in their brains to quickly learn new tasks from limited data. Inspired by such human learning, various meta-learning models have been developed to address the challenge of learning from limited data. However, existing models are computationally expensive, lack fine-grained uncertainty quantification, and produce predictions that are not always trustworthy. This dissertation focuses on different instances of the two most popular limited-data problems: few-shot regression and few-shot classification. For both problems, the developed models need to be robust and output well-calibrated, trustworthy predictions while remaining computationally cheap and label-efficient to ensure real-world applicability. In this dissertation, we develop a novel uncertainty-aware meta-learning framework based on evidential deep learning that contributes towards a reliable model capable of addressing the above challenges. We first introduce an evidential multidimensional belief theory for meta-learning that leads to computationally efficient, uncertainty-aware few-shot classification models. We then extend evidential regression theory to meta-learning, leading to computationally efficient, uncertainty-aware, outlier-robust few-shot regression models. Next, we carry out a thorough analysis of the evidential deep learning framework to identify a fundamental learning deficiency that helps explain its suboptimal performance, especially in challenging settings. We then develop a theoretically justified, empirically validated solution to address this fundamental learning deficiency of evidential models. Building on the developed theory, we introduce a Bayesian-evidential framework for parameter-efficient fine-tuning of vision foundation models that leads to well-calibrated, uncertainty-aware few-shot learning models. We then study the adversarial robustness of the developed uncertainty-aware models. We also explore applications of the ideas developed in this dissertation to real-world problems in healthcare and high-energy-density physics. The theoretically grounded, empirically justified solutions of the uncertainty-aware meta-learning framework developed in this dissertation contribute towards trustworthy uncertainty-aware models that are capable of effectively learning from limited data.

Library of Congress Subject Headings

Deep learning (Machine learning); Metacognition; Bayesian field theory

Publication Date

1-2025

Document Type

Dissertation

Student Type

Graduate

Degree Name

Computing and Information Sciences (Ph.D.)

Department, Program, or Center

Computing and Information Sciences Ph.D., Department of

College

Golisano College of Computing and Information Sciences

Advisor

Qi Yu

Advisor/Committee Member

Rui Li

Advisor/Committee Member

Zhiqiang Tao

Campus

RIT – Main Campus

Plan Codes

COMPIS-PHD
