
Meta Co-Training: Two Views are Better than One

Jay C. Rothenberger, Dimitrios I. Diochnos
Abstract

In many critical computer vision scenarios unlabeled data is plentiful, but labels are scarce and difficult to obtain. As a result, semi-supervised learning, which leverages unlabeled data to boost the performance of supervised classifiers, has received significant attention in the recent literature. One representative class of semi-supervised algorithms is co-training algorithms. Co-training algorithms leverage two different models which have access to different independent and sufficient representations, or "views," of the data to jointly make better predictions. Each of these models creates pseudo-labels on unlabeled points which are used to improve the other model. We show that in the common case where independent views are not available, we can construct such views inexpensively using pre-trained models. Co-training on the constructed views yields a performance improvement over any of the individual views we construct, and performance comparable with recent approaches in semi-supervised learning. We present Meta Co-Training, a novel semi-supervised learning algorithm, which has two advantages over co-training: (i) learning is more robust when there is a large discrepancy between the information content of the different views, and (ii) it does not require retraining from scratch on each iteration. Our method achieves new state-of-the-art performance on ImageNet-10%, achieving a ~4.7% reduction in error rate over prior work. Our method also outperforms prior semi-supervised work on several other fine-grained image classification datasets.
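To make the co-training loop described in the abstract concrete, here is a minimal sketch of classic co-training on two pre-computed views. This is an illustration, not the paper's implementation: the classifier choice (scikit-learn logistic regression), the confidence threshold tau, the round count, and the function name co_train are all generic assumptions, and the two feature matrices stand in for the views the paper constructs from different pre-trained models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(Xa, Xb, y, Ua, Ub, rounds=5, tau=0.95):
    """Classic co-training on two views (a, b) of the same data points.

    Xa, Xb: labeled features under views a and b; y: their labels.
    Ua, Ub: the unlabeled pool under views a and b (row-aligned).
    """
    # Each model maintains its own growing labeled set.
    Xa_tr, ya_tr = Xa.copy(), y.copy()
    Xb_tr, yb_tr = Xb.copy(), y.copy()
    model_a = LogisticRegression(max_iter=1000)
    model_b = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model_a.fit(Xa_tr, ya_tr)
        model_b.fit(Xb_tr, yb_tr)
        if len(Ua) == 0:
            break
        pa = model_a.predict_proba(Ua)
        pb = model_b.predict_proba(Ub)
        # Confident predictions from one view become pseudo-labels
        # that extend the *other* view's training set.
        keep_a = pa.max(axis=1) >= tau
        keep_b = pb.max(axis=1) >= tau
        Xb_tr = np.vstack([Xb_tr, Ub[keep_a]])
        yb_tr = np.concatenate([yb_tr, pa.argmax(axis=1)[keep_a]])
        Xa_tr = np.vstack([Xa_tr, Ua[keep_b]])
        ya_tr = np.concatenate([ya_tr, pb.argmax(axis=1)[keep_b]])
        # Remove points consumed by either side from the unlabeled pool.
        used = keep_a | keep_b
        if not used.any():
            break
        Ua, Ub = Ua[~used], Ub[~used]
    return model_a, model_b
```

In the paper's setting, the two views would be feature embeddings of the same images produced by two different pre-trained models; here they are simply two row-aligned feature matrices. Note that plain co-training, as in this sketch, refits both models each round, which is one of the costs Meta Co-Training is designed to avoid.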
