Cross-Domain Ensemble Distillation for Domain Generalization

Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which has recently been shown to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training data with the same label but from different domains, and then penalizes each output for its mismatch with the ensemble. We also present a de-stylization technique that standardizes features to encourage the model to produce style-consistent predictions even in an arbitrary target domain. Our method substantially improves generalization on public benchmarks for cross-domain image classification, cross-dataset person re-ID, and cross-dataset semantic segmentation. Moreover, we show that models learned by our method are robust against adversarial attacks and image corruptions.
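The two ideas in the abstract can be sketched in code. The following is a minimal, hedged illustration, not the authors' implementation: it assumes the XDED loss averages temperature-softened predictions of same-label examples from different domains and penalizes each prediction's KL divergence from that ensemble, and it assumes "de-stylization" standardizes features by their own mean and standard deviation (an instance-normalization-style reading). The function names `xded_loss` and `destylize`, the temperature value, and the KL direction are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    # KL(p || q); terms with p_i == 0 contribute nothing.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def xded_loss(logits_by_example, labels, temperature=4.0):
    """Cross-domain ensemble distillation, sketched.

    Group the softened predictions of examples sharing a label (drawn
    from different domains), form their mean as the ensemble "teacher",
    and penalize each member for diverging from that ensemble.
    """
    probs = [softmax(l, temperature) for l in logits_by_example]
    by_label = {}
    for p, y in zip(probs, labels):
        by_label.setdefault(y, []).append(p)
    loss, count = 0.0, 0
    for group in by_label.values():
        if len(group) < 2:
            continue  # nothing to distill from a single example
        num_classes = len(group[0])
        ensemble = [sum(p[c] for p in group) / len(group)
                    for c in range(num_classes)]
        for p in group:
            loss += kl_div(ensemble, p)  # match each output to the ensemble
            count += 1
    return loss / max(count, 1)

def destylize(features, eps=1e-5):
    """De-stylization, sketched: remove first- and second-order "style"
    statistics by standardizing a feature vector to zero mean, unit std."""
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n
    std = math.sqrt(var + eps)
    return [(f - mean) / std for f in features]
```

With identical same-label predictions the ensemble term vanishes, and it grows as same-label predictions from different domains disagree, which is the consistency pressure the abstract describes.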