Abstract

This work explores a novel generative modeling approach inspired by variational autoencoders (VAEs). Traditional VAEs rely on a recognition model (encoder) that approximates the latent posterior with a single Gaussian distribution for each input, which limits their flexibility in capturing complex data distributions. In contrast, we propose a modified recognition model that uses stochastic mixtures of Gaussians, allowing for a more expressive latent representation. By leveraging stochastic neural networks within the VAE framework, we aim to achieve a tighter evidence lower bound (ELBO) on the log-likelihood of the data. This approach is a preliminary investigation into enhancing the latent-space structure and improving generative performance through richer uncertainty modeling.
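To make the core idea concrete, the sketch below shows a Monte Carlo estimate of the ELBO when the approximate posterior q(z|x) is a K-component Gaussian mixture rather than a single Gaussian. This is a minimal illustrative sketch, not the thesis's actual model: the toy generative model (standard normal prior p(z) and p(x|z) = N(z, I)), the fixed mixture parameters, and all function names are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gaussian(z, mu, sigma):
    # log-density of a diagonal Gaussian N(z; mu, diag(sigma^2))
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (z - mu)**2 / (2 * sigma**2))

def mixture_elbo_estimate(x, weights, mus, sigmas, n_samples=1000):
    """Monte Carlo estimate of ELBO = E_q[log p(x, z) - log q(z | x)],
    where q(z | x) is a K-component Gaussian mixture.
    Toy generative model (an assumption for this sketch):
    p(z) = N(0, I), p(x | z) = N(z, I)."""
    K, D = mus.shape
    est = 0.0
    for _ in range(n_samples):
        # Sample z ~ q(z | x): pick a component, then sample from it.
        k = rng.choice(K, p=weights)
        z = mus[k] + sigmas[k] * rng.standard_normal(D)
        # Mixture log-density, computed stably with log-sum-exp.
        log_q = np.logaddexp.reduce(
            [np.log(weights[j]) + log_gaussian(z, mus[j], sigmas[j])
             for j in range(K)])
        # Joint log-density of the toy model: log p(z) + log p(x | z).
        log_p = (log_gaussian(z, np.zeros(D), np.ones(D))
                 + log_gaussian(x, z, np.ones(D)))
        est += (log_p - log_q) / n_samples
    return est
```

Because the mixture can place mass on several modes, it can match a multimodal true posterior more closely than any single Gaussian, which is what tightens the ELBO toward the true log-likelihood.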

Library of Congress Subject Headings

Neural networks (Computer science); Stochastic models; Artificial intelligence--Data processing

Publication Date

5-2025

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Science (MS)

Department, Program, or Center

Computer Science, Department of

College

Golisano College of Computing and Information Sciences

Advisor

Richard D. Lange

Advisor/Committee Member

Alexander G. Ororbia II

Advisor/Committee Member

Christopher Homan

Campus

RIT – Main Campus

Plan Codes

COMPSCI-MS
