Learning Inverse Mappings with Adversarial Criterion.

Jiyi Zhang, Hung Dang, Hwee Kuan Lee, Ee-Chien Chang

We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G that maps an arbitrary latent code distribution to a data distribution and an encoder E that embodies an "inverse mapping" that encodes a data sample into a latent code vector. Unlike previous hybrid approaches that leverage an adversarial training criterion in constructing autoencoders, FAAE minimizes re-encoding errors in the latent space and exploits the adversarial criterion in the data space. Experimental evaluations demonstrate that the proposed framework produces sharper reconstructed images while at the same time enabling inference that captures a rich semantic representation of the data.
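The abstract pairs two objectives: a re-encoding error measured in latent space, and an adversarial criterion applied in data space. A minimal NumPy sketch of how those two losses could be computed is below; the linear maps `W_g`, `W_e`, the toy discriminator `D`, and all dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, data_dim, batch = 8, 32, 16

# Hypothetical linear stand-ins for the networks in the abstract:
W_g = rng.normal(size=(data_dim, latent_dim))   # generator weights (assumed)
W_e = rng.normal(size=(latent_dim, data_dim))   # encoder weights (assumed)
w_d = rng.normal(size=data_dim)                 # toy discriminator weights (assumed)

def G(z):
    """Generator: maps latent codes to data space."""
    return z @ W_g.T

def E(x):
    """Encoder: the 'inverse mapping' from data back to latent space."""
    return x @ W_e.T

def re_encoding_loss(z):
    """Latent-space objective: mean squared error ||E(G(z)) - z||^2."""
    return np.mean(np.sum((E(G(z)) - z) ** 2, axis=1))

def D(x):
    """Toy logistic discriminator for the data-space adversarial criterion."""
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

z = rng.normal(size=(batch, latent_dim))
x_fake = G(z)

loss_latent = re_encoding_loss(z)
# Non-saturating generator loss against the discriminator's "real" score:
loss_adv = -np.mean(np.log(D(x_fake) + 1e-8))
```

In a real implementation both networks would be deep and trained jointly, with the discriminator updated in alternation, but the split of the two losses (latent-space reconstruction, data-space adversarial) is the point being illustrated.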
