Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing

Despite the noticeable progress in perceptual tasks like detection, instance segmentation and human parsing, computers still perform unsatisfactorily at visually understanding humans in crowded scenes, which underpins applications such as group behavior analysis, person re-identification and autonomous driving. To this end, models need to comprehensively perceive the semantic information and the differences between instances in a multi-human image, which has recently been defined as the multi-human parsing task. In this paper, we present a new large-scale database, "Multi-Human Parsing (MHP)", for algorithm development and evaluation, which advances the state of the art in understanding humans in crowded scenes. MHP contains 25,403 elaborately annotated images with 58 fine-grained semantic category labels, involving 2-26 persons per image and captured in real-world scenes with various viewpoints, poses, occlusions, interactions and backgrounds. We further propose a novel deep Nested Adversarial Network (NAN) model for multi-human parsing. NAN consists of three Generative Adversarial Network (GAN)-like sub-nets that respectively perform semantic saliency prediction, instance-agnostic parsing and instance-aware clustering. These sub-nets form a nested structure and are carefully designed to learn jointly in an end-to-end way. NAN consistently outperforms existing state-of-the-art solutions on our MHP and several other datasets, and serves as a strong baseline to drive future research on multi-human parsing.
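The three-stage nested data flow described above (saliency, then instance-agnostic parsing, then instance-aware clustering, each stage conditioning on its predecessor's output) can be sketched as follows. This is a hypothetical toy illustration in plain NumPy, not the authors' implementation: each GAN-like sub-net is stood in for by a trivial heuristic function, and all names and thresholds are assumptions made for the sketch.

```python
import numpy as np

def semantic_saliency(image):
    # Stage 1 stand-in: predict a foreground-saliency map.
    # (Toy heuristic: pixels brighter than the image mean are "salient".)
    return (image > image.mean()).astype(float)

def instance_agnostic_parsing(image, saliency):
    # Stage 2 stand-in: assign semantic-part labels to salient pixels only.
    # (Toy heuristic: coarse intensity quantization into part labels 1..3.)
    return np.where(saliency > 0, np.digitize(image, [0.33, 0.66]) + 1, 0)

def instance_clustering(parsing):
    # Stage 3 stand-in: group parsed pixels into per-person instances.
    # (Toy heuristic: contiguous non-empty column blocks form one instance.)
    instances = np.zeros_like(parsing)
    current = 0
    for col in range(parsing.shape[1]):
        if parsing[:, col].any():
            if col == 0 or not parsing[:, col - 1].any():
                current += 1
            instances[parsing[:, col] > 0, col] = current
    return instances

def nested_forward(image):
    # The "nesting": each sub-net consumes the previous sub-net's output,
    # mirroring the saliency -> parsing -> clustering cascade in NAN.
    s = semantic_saliency(image)
    p = instance_agnostic_parsing(image, s)
    i = instance_clustering(p)
    return s, p, i

# Two bright "people" separated by a dark gap in a tiny synthetic image.
image = np.array([[0.9, 0.8, 0.0, 0.7],
                  [0.9, 0.8, 0.0, 0.7],
                  [0.0, 0.0, 0.0, 0.0]])
saliency, parsing, instances = nested_forward(image)
print(int(instances.max()))  # -> 2 toy "instances" found
```

In the actual model each stage is a trainable GAN-like sub-network and the three are optimized jointly end to end; the sketch only illustrates how the outputs chain together.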