Guiding Generative Models via Class Label Information

Date
2016-01-22
Authors
Rudy, Jan
Publisher
University of Guelph
Abstract

Given a finite number of samples from some high-dimensional distribution, the task of efficiently and accurately modeling the distribution can be challenging. Some datasets, however, provide additional information (e.g. categorical class labels) for each input. When class labels are available, can they be used to better model the data distribution? A conditional modeling and training procedure is introduced for a type of generative model (the generalized denoising autoencoder), and two methods of injecting class label information are presented (additive vs. multiplicative). When trained on natural images, models with access to class information generate samples of higher visual fidelity than those trained on images alone. Additionally, with higher-dimensional data, multiplicative architectures outperform their additive counterparts. Finally, experimental results confirm recent findings that Parzen likelihood estimates are a poor measure of visual sample quality.

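The abstract contrasts additive and multiplicative injection of class labels into a generalized denoising autoencoder. The NumPy sketch below is not the thesis code: the layer sizes, the factored gating form, and the Gaussian corruption are illustrative assumptions, intended only to show how the two conditioning schemes differ in the hidden-layer computation.

```python
# Minimal sketch (assumed details, not the thesis implementation): contrasting
# additive and multiplicative (gated) injection of a one-hot class label y
# into the hidden layer of a denoising autoencoder.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, n_classes, n_factors = 784, 256, 10, 128  # assumed sizes

# Shared parameters
W = rng.normal(0, 0.01, (n_vis, n_hid))   # input -> hidden
b = np.zeros(n_hid)

# Additive conditioning: the label contributes a class-specific bias
# to the hidden units.
U_add = rng.normal(0, 0.01, (n_classes, n_hid))

def hidden_additive(x_corrupted, y_onehot):
    return sigmoid(x_corrupted @ W + y_onehot @ U_add + b)

# Multiplicative (gated) conditioning: input and label interact through
# factored three-way weights, so the label rescales input-driven features.
W_xf = rng.normal(0, 0.01, (n_vis, n_factors))
W_yf = rng.normal(0, 0.01, (n_classes, n_factors))
W_fh = rng.normal(0, 0.01, (n_factors, n_hid))

def hidden_multiplicative(x_corrupted, y_onehot):
    factors = (x_corrupted @ W_xf) * (y_onehot @ W_yf)  # element-wise gating
    return sigmoid(factors @ W_fh + b)

# Toy usage: one corrupted MNIST-sized input paired with a class label.
x = rng.random(n_vis)
x_tilde = x + rng.normal(0, 0.1, n_vis)        # Gaussian corruption
y = np.eye(n_classes)[3]                       # one-hot label for class 3
print(hidden_additive(x_tilde, y).shape)       # (256,)
print(hidden_multiplicative(x_tilde, y).shape) # (256,)
```

In both cases the decoder would reconstruct the clean input from the hidden code; the schemes differ only in whether the label shifts the hidden pre-activation or gates it multiplicatively.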
Keywords
deep learning, generative models, neural networks, artificial intelligence, machine learning, gated models