Information Theoretic Co-Training.
This paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict "hypotheses" to be confirmed by future sensations. More formally, we assume a population distribution on pairs $(x,y)$ where we can think of $x$ as a past sensation and $y$ as a future sensation. We train both a predictor model $P_\Phi(z|x)$ and a confirmation model $P_\Psi(z|y)$ where we view $z$ as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs $(x,y)$ we focus on the problem of measuring the mutual information between $x$ and $y$. By the data processing inequality this mutual information is at least as large as the mutual information between $x$ and $z$ under the distribution on triples $(x,z,y)$ defined by the confirmation model $P_\Psi(z|y)$. The information theoretic training objective for $P_\Phi(z|x)$ and $P_\Psi(z|y)$ can be viewed as a form of co-training where we want the prediction from $x$ to match the confirmation from $y$.
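The bound invoked in the abstract can be written out explicitly. The following is a sketch of the data processing inequality argument as stated above, assuming the Markov structure induced by drawing $z$ from the confirmation model given $y$ alone:

\[
\begin{aligned}
&z \sim P_\Psi(z \mid y)
\quad\Longrightarrow\quad
x \rightarrow y \rightarrow z \ \text{is a Markov chain},\\[4pt]
&\text{hence, by the data processing inequality,}\\[2pt]
&I(x, y) \;\ge\; I(x, z) \;=\; H(z) \;-\; H(z \mid x).
\end{aligned}
\]

Maximizing $I(x, z)$ over the parameters $\Phi$ and $\Psi$ therefore tightens a lower bound on $I(x, y)$: the confirmation model $P_\Psi(z|y)$ shapes the distribution of $z$, while the predictor $P_\Phi(z|x)$ serves to make $z$ predictable from $x$, i.e., to keep $H(z \mid x)$ small.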