Cloning Outfits from Real-World Images to 3D Characters for Generalizable Person Re-Identification

Recently, large-scale synthetic datasets have been shown to be very useful for generalizable person re-identification. However, synthesized persons in existing datasets are mostly cartoon-like and in random dress collocation, which limits their performance. To address this, in this work, an automatic approach is proposed to directly clone the whole outfits from real-world person images to virtual 3D characters, such that any virtual person thus created will appear very similar to its real-world counterpart. Specifically, based on UV texture mapping, two cloning methods are designed, namely registered clothes mapping and homogeneous cloth expansion. Given clothes keypoints detected on person images and labeled on regular UV maps with clear clothes structures, registered mapping applies perspective homography to warp real-world clothes to their counterparts on the UV map. As for invisible clothes parts and irregular UV maps, homogeneous expansion segments a homogeneous area on the clothes as a realistic cloth pattern or cell, and expands the cell to fill the UV map. Furthermore, a similarity-diversity expansion strategy is proposed, by clustering person images, sampling images per cluster, and cloning outfits for 3D character generation. This way, virtual persons can be scaled up densely in visual similarity to challenge model learning, and diversely in population to enrich the sample distribution. Finally, by rendering the cloned characters in Unity3D scenes, a more realistic virtual dataset called ClonedPerson is created, with 5,621 identities and 887,766 images. Experimental results show that the model trained on ClonedPerson achieves better generalization performance, superior to models trained on other popular real-world and synthetic person re-identification datasets. The ClonedPerson project is available at https://github.com/Yanan-Wang-cs/ClonedPerson.
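The registered clothes mapping described above warps visible clothes regions onto a UV map via a perspective homography fitted to corresponding keypoints. As a minimal sketch of that idea (not the paper's actual implementation), the following estimates a 3x3 homography from four keypoint correspondences with the direct linear transform (DLT) and applies it to a point; the sample coordinates are invented for illustration:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Fit a 3x3 perspective homography H mapping src_pts -> dst_pts
    from exactly four point correspondences via the DLT formulation.

    Each correspondence (x, y) -> (u, v) yields two linear equations
    in the eight unknown entries of H (H[2,2] is fixed to 1).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a 2D point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical example: four clothes keypoints detected on a person image
# (an irregular quadrilateral) mapped to the rectangular clothes region
# labeled on a regular UV map.
image_kpts = [(12, 8), (108, 15), (102, 118), (5, 110)]
uv_kpts = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = estimate_homography(image_kpts, uv_kpts)
print(warp_point(H, (108, 15)))  # maps to approximately (100.0, 0.0)
```

In practice one would warp every pixel of the clothes region (e.g. with an inverse-mapped lookup) rather than single points, but the fitted homography is the same in both cases.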
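The similarity-diversity expansion strategy clusters person images by appearance and samples a fixed number of images per cluster, so that characters within a cluster are densely similar while the population spans many clusters. A minimal sketch of that sampling step (assuming cluster labels already produced by any appearance clustering; function and variable names here are illustrative, not from the paper):

```python
import random
from collections import defaultdict

def similarity_diversity_sample(image_ids, cluster_labels, per_cluster, seed=0):
    """Select up to `per_cluster` images from each appearance cluster.

    Images kept from the same cluster are visually similar (dense in
    similarity, to challenge model learning), while drawing from every
    cluster keeps the selected population diverse.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for img_id, label in zip(image_ids, cluster_labels):
        groups[label].append(img_id)
    selected = []
    for label in sorted(groups):
        members = groups[label]
        selected.extend(rng.sample(members, min(per_cluster, len(members))))
    return selected

# Hypothetical example: 10 person images in 4 appearance clusters.
ids = [f"img{i}" for i in range(10)]
labels = [0, 0, 0, 1, 1, 2, 2, 2, 2, 3]
print(similarity_diversity_sample(ids, labels, per_cluster=2))
```

Each selected image would then go through outfit cloning to generate one 3D character, so `per_cluster` directly controls how densely similar the resulting virtual persons are.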