Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations

Deep neural networks suffer from the major limitation of catastrophic forgetting of old tasks when learning new ones. In this paper we focus on class-incremental continual learning in semantic segmentation, where new categories are made available over time while previous training data is not retained. The proposed continual learning scheme shapes the latent space to reduce forgetting whilst improving the recognition of novel classes. Our framework is driven by three novel components, which can also be combined effortlessly on top of existing techniques. First, prototype matching enforces latent space consistency on old classes, constraining the encoder to produce similar latent representations for previously seen classes in the subsequent steps. Second, feature sparsification makes room in the latent space to accommodate novel classes. Finally, contrastive learning is employed to cluster features according to their semantics while tearing apart those of different classes. Extensive evaluation on the Pascal VOC2012 and ADE20K datasets demonstrates the effectiveness of our approach, significantly outperforming state-of-the-art methods.
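The three components can be read as three regularizers applied to per-pixel latent features. The following is a minimal NumPy sketch, not the paper's implementation: the function name `sdr_losses`, the squared-distance form of the prototype-matching term, the plain L1 sparsity penalty, and the margin-based attraction/repulsion contrastive term are all illustrative assumptions.

```python
import numpy as np

def sdr_losses(feats, labels, prototypes, margin=1.0):
    """Illustrative sketch of the three latent-space regularizers.

    feats:      (N, D) latent vectors (e.g. per-pixel encoder outputs)
    labels:     (N,)   integer class ids
    prototypes: (C, D) stored class prototypes from earlier learning steps
    """
    n = len(labels)
    idx = np.arange(n)
    # 1) Prototype matching: pull features of previously seen classes
    #    toward their stored prototypes (latent-space consistency).
    match = np.mean(np.sum((feats - prototypes[labels]) ** 2, axis=1))
    # 2) Feature sparsification: an L1 penalty encourages sparse
    #    activations, leaving latent dimensions free for novel classes.
    sparse = np.mean(np.abs(feats))
    # 3) Contrastive attraction/repulsion: attract each feature to its own
    #    class prototype, push it at least `margin` away from the others.
    d = np.linalg.norm(feats[:, None, :] - prototypes[None, :, :], axis=2)  # (N, C)
    attract = d[idx, labels] ** 2
    repel = np.maximum(0.0, margin - d) ** 2
    repel[idx, labels] = 0.0  # no repulsion from the feature's own class
    contrast = np.mean(attract) + np.mean(repel)
    return match, sparse, contrast
```

In a training loop these three terms would be weighted and added to the usual segmentation cross-entropy; when features sit exactly on their class prototypes and prototypes are farther apart than the margin, the matching and contrastive terms both vanish.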