
Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation

Le Thanh Nguyen-Meidine, Atif Belal, Madhu Kiran, Jose Dolz, Louis-Antoine Blais-Morin, Eric Granger
Abstract

Unsupervised domain adaptation (UDA) seeks to alleviate the problem of domain shift between the distribution of unlabeled data from the target domain w.r.t. labeled data from the source domain. While the single-target UDA scenario is well studied in the literature, Multi-Target Domain Adaptation (MTDA) remains largely unexplored despite its practical importance, e.g., in multi-camera video-surveillance applications. The MTDA problem can be addressed by adapting one specialized model per target domain, although this solution is too costly in many real-world applications. Blending multiple targets for MTDA has been proposed, yet this solution may lead to a reduction in model specificity and accuracy. In this paper, we propose a novel unsupervised MTDA approach to train a CNN that can generalize well across multiple target domains. Our Multi-Teacher MTDA (MT-MTDA) method relies on multi-teacher knowledge distillation (KD) to iteratively distill target domain knowledge from multiple teachers to a common student. The KD process is performed in a progressive manner, where the student is trained by each teacher on how to perform UDA for a specific target, instead of directly learning domain-adapted features. Finally, instead of combining the knowledge from each teacher, MT-MTDA alternates between teachers that distill knowledge, thereby preserving the specificity of each target (teacher) when learning to adapt to the student. MT-MTDA is compared against state-of-the-art methods on several challenging UDA benchmarks, and empirical results show that our proposed model can provide a considerably higher level of accuracy across multiple target domains. Our code is available at: https://github.com/LIVIAETS/MT-MTDA
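To illustrate the alternating multi-teacher distillation idea described above, here is a minimal sketch of training a common student by cycling over target-specific teachers with a standard softened-KL distillation loss. It assumes each teacher has already been adapted to its own target domain via UDA; the model, loader, and function names are hypothetical placeholders (the paper's actual progressive KD scheme differs in its details, and the authors' implementation is at the GitHub link above).

```python
# Sketch only: alternating multi-teacher knowledge distillation (KD).
# Assumes each teacher was already adapted to one target domain via UDA.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Standard KD loss: KL divergence between temperature-softened
    teacher and student distributions (Hinton-style)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

def train_student(student, teachers, target_loaders, optimizer, epochs=10):
    """Alternate between target-specific teachers, rather than averaging
    their outputs, so the student preserves each target's specificity.
    `optimizer` is assumed to hold only the student's parameters."""
    student.train()
    for _ in range(epochs):
        for teacher, loader in zip(teachers, target_loaders):
            teacher.eval()
            for images in loader:  # target-domain batches are unlabeled
                with torch.no_grad():
                    t_logits = teacher(images)
                s_logits = student(images)
                loss = kd_loss(s_logits, t_logits)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```

The key design choice mirrored here is the alternation: the student sees one teacher (one target domain) at a time instead of a blended target, which is what the abstract credits with preserving per-target specificity.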
