A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels
The recent success of deep neural networks is powered in part by large-scale, well-labeled training data. However, it is a daunting task to laboriously annotate an ImageNet-like dataset. In contrast, it is fairly convenient, fast, and cheap to collect training images from the Web along with their noisy labels. This signifies the need for alternative approaches to training deep neural networks using such noisy labels. Existing methods tackling this problem either try to identify and correct the wrong labels or reweight the data terms in the loss function according to the inferred noise rates. Both strategies inevitably incur errors for some of the data points. In this paper, we contend that it is actually better to ignore the labels of some of the data points than to keep them if the labels are incorrect, especially when the noise rate is high. After all, the wrong labels could mislead a neural network to a bad local optimum. We suggest a two-stage framework for learning from noisy labels. In the first stage, we identify a small portion of images from the noisy training set whose labels are correct with high probability. The noisy labels of the other images are ignored. In the second stage, we train a deep neural network in a semi-supervised manner. This framework effectively takes advantage of the whole training set, yet uses only the portion of its labels that are most likely correct. Experiments on three datasets verify the effectiveness of our approach, especially when the noise rate is high.
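The abstract describes the framework only at a high level, so the PyTorch sketch below illustrates one plausible instantiation of the two stages. The label-agreement selection rule, the `keep_ratio` parameter, and the confidence-thresholded pseudo-labeling loss in stage 2 are illustrative assumptions for this sketch, not the paper's actual selection criterion or semi-supervised method.

```python
# Minimal sketch of the two-stage pipeline. The selection heuristic and the
# pseudo-labeling loss are assumptions made for illustration only.
import torch
import torch.nn.functional as F

def select_probably_clean(model, loader, device, keep_ratio=0.2):
    """Stage 1: keep samples whose noisy label agrees with a high-confidence
    model prediction; the labels of all other samples are ignored.
    Assumes `loader` yields (index, (image_batch, noisy_label_batch))."""
    model.eval()
    scores, indices, labels = [], [], []
    with torch.no_grad():
        for idx, (x, y) in loader:
            p = F.softmax(model(x.to(device)), dim=1).cpu()
            # Confidence the model assigns to the given (noisy) label.
            agree = p.gather(1, y.unsqueeze(1)).squeeze(1)
            scores.append(agree); indices.append(idx); labels.append(y)
    scores = torch.cat(scores)
    indices, labels = torch.cat(indices), torch.cat(labels)
    k = int(keep_ratio * len(scores))
    top = scores.topk(k).indices
    # Trusted subset: labels retained. Everything else becomes unlabeled data.
    return indices[top], labels[top]

def semi_supervised_step(model, opt, x_lab, y_lab, x_unlab, device,
                         tau=0.95, lam=1.0):
    """Stage 2: supervised loss on the trusted labels, plus a pseudo-label
    loss on confident predictions for the label-free images."""
    model.train()
    loss = F.cross_entropy(model(x_lab.to(device)), y_lab.to(device))
    with torch.no_grad():
        p_u = F.softmax(model(x_unlab.to(device)), dim=1)
        conf, pseudo = p_u.max(dim=1)
        mask = conf.ge(tau)  # only trust high-confidence pseudo-labels
    if mask.any():
        logits_u = model(x_unlab.to(device)[mask])
        loss = loss + lam * F.cross_entropy(logits_u, pseudo[mask])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The key design point the sketch mirrors is the abstract's argument: rather than correcting or reweighting suspect labels, stage 1 simply discards them, and stage 2 still exploits those images as unlabeled data so no training example is wasted.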