
Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation

Kevin Lin, Lijuan Wang, Kun Luo, Yinpeng Chen, Zicheng Liu, Ming-Ting Sun
Abstract

Supervised deep learning with pixel-wise training labels has achieved great success on multi-person part segmentation. However, data labeling at the pixel level is very expensive. To address this problem, researchers have explored using synthetic data to avoid manual labeling. Although labels for synthetic data are easy to generate, the results are much worse than those obtained with real data and manual labeling. This performance degradation is mainly due to the domain gap, i.e., the discrepancy in pixel-value statistics between real and synthetic data. In this paper, we observe that real and synthetic humans both have a skeleton (pose) representation, and we find that skeletons can effectively bridge the synthetic and real domains during training. Our proposed approach takes advantage of the rich and realistic variations of real data and the easily obtainable labels of synthetic data to learn multi-person part segmentation on real images without any human-annotated labels. Through experiments, we show that without any human labeling, our method performs comparably to several state-of-the-art approaches that require human labeling on the Pascal-Person-Parts and COCO-DensePose datasets. On the other hand, if part labels are also available for the real images during training, our method outperforms the supervised state-of-the-art methods by a large margin. We further demonstrate the generalizability of our method by predicting novel keypoints in real images where no real-data labels are available for the novel keypoint detection. Code and pre-trained models are available at https://github.com/kevinlin311tw/CDCL-human-part-segmentation
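To make the complementary-learning idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation; see their repository for that). It assumes a shared backbone with two heads: a pose head supervised on real images, which carry only keypoint labels, and a part-segmentation head supervised on synthetic images, which carry only pixel-wise part labels. The shared pose-aware features are what bridge the two domains. All module names, shapes, and the toy backbone are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CDCLSketch(nn.Module):
    """Illustrative two-head network; not the paper's actual architecture."""
    def __init__(self, num_keypoints=17, num_parts=7):
        super().__init__()
        # Shared feature extractor (stands in for the real backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Pose head: trained with REAL images (keypoint heatmaps).
        self.pose_head = nn.Conv2d(64, num_keypoints, 1)
        # Part head: trained with SYNTHETIC images (pixel-wise part labels).
        self.part_head = nn.Conv2d(64, num_parts, 1)

    def forward(self, x):
        f = self.backbone(x)
        return self.pose_head(f), self.part_head(f)

model = CDCLSketch()
mse = nn.MSELoss()          # heatmap regression for pose
ce = nn.CrossEntropyLoss()  # pixel-wise classification for parts
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One complementary training step on dummy batches (illustrative only).
real_img = torch.randn(2, 3, 64, 64)           # real photo, pose label only
real_heatmaps = torch.randn(2, 17, 64, 64)
syn_img = torch.randn(2, 3, 64, 64)            # synthetic render, part label only
syn_parts = torch.randint(0, 7, (2, 64, 64))

pose_pred, _ = model(real_img)   # real domain supervises the pose head
_, part_pred = model(syn_img)    # synthetic domain supervises the part head
loss = mse(pose_pred, real_heatmaps) + ce(part_pred, syn_parts)
opt.zero_grad()
loss.backward()
opt.step()
```

Because both losses flow through the same backbone, features learned for pose on real images are reused by the part head, which is one plausible reading of how the skeleton representation lets synthetic part labels transfer to real images.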
