AI-Powered Innovation Brings Virtual Companion Animals to Life
Researchers at the Ulsan National Institute of Science and Technology (UNIST) have developed DogRecon, an AI system that creates detailed, animatable 3D models of dogs from a single photograph. The technology enables users to bring their companion animals to life in virtual reality, augmented reality, and metaverse environments.

The team, led by Professor Kyungdon Joo of the Artificial Intelligence Graduate School at UNIST, designed DogRecon to reconstruct 3D Gaussian models of dogs, including realistic textures and shapes, from just one image. The framework addresses long-standing challenges in 3D reconstruction of quadrupeds: diverse breeds, complex body shapes, and frequent self-occlusion of joints in natural postures. Traditional methods struggle to produce accurate 3D representations from single images, often yielding distorted or unrealistic models, especially for dogs in relaxed or crouched positions.

DogRecon overcomes these limitations by leveraging breed-specific statistical models to capture variation in anatomy and posture. It also uses generative AI to synthesize multiple viewpoints, filling in hidden areas with high precision. The integration of Gaussian Splatting techniques further enhances realism by accurately rendering curved body shapes and intricate fur textures (see the illustrative sketch at the end of this article).

Performance tests across various datasets show that DogRecon produces 3D dog avatars that match the quality of those created by video-based methods, despite relying on only a single image. Unlike previous systems, which often distorted limbs, ears, tails, or fur, DogRecon generates natural, lifelike results even in complex poses.

The system is also designed to be scalable and compatible with text-to-animation workflows, allowing users to generate new animations or retarget motion from existing videos onto the reconstructed 3D dogs. This opens up possibilities for interactive storytelling, personalized digital companions, and immersive experiences in gaming and social platforms.

The research was led by first author Gyeongsu Cho, with contributions from Changwoo Kang at UNIST and Donghyeon Soon at DGIST. Cho emphasized the growing importance of the technology, noting that over a quarter of households own pets and that extending 3D reconstruction beyond humans to companion animals is a significant step forward. He added that DogRecon empowers anyone to create and animate a digital version of their beloved pet.

Professor Joo highlighted the broader impact of the work, stating that combining generative AI with 3D reconstruction marks a meaningful advancement, and said the team plans to expand the approach to other animals and personalized avatars in the future.

The study was published in the International Journal of Computer Vision under the title "DogRecon: Canine Prior-Guided Animatable 3D Gaussian Dog Reconstruction From A Single Image" (DOI: 10.1007/s11263-025-02485-5).
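For readers curious how Gaussian Splatting represents a subject, the snippet below is a minimal, self-contained sketch, not the authors' implementation: it renders isotropic 3D Gaussians through a simple pinhole camera with front-to-back alpha compositing. Production Gaussian Splatting pipelines, DogRecon's included, use anisotropic covariances, view-dependent color, and differentiable tile-based rasterization; every function name and parameter here is illustrative only.

```python
import numpy as np

def splat_gaussians(means, colors, opacities, scales, f=200.0, H=128, W=128):
    """Toy renderer: project isotropic 3D Gaussians through a pinhole camera
    at the origin (looking down +z) and alpha-composite them front-to-back.
    Illustrative only; real splatting uses anisotropic covariances and a
    differentiable tile-based rasterizer."""
    img = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))        # how much light still reaches each pixel
    ys, xs = np.mgrid[0:H, 0:W]
    for i in np.argsort(means[:, 2]):      # nearest splats composite first
        x, y, z = means[i]
        if z <= 0.1:                       # skip splats behind or too near the camera
            continue
        u = f * x / z + W / 2              # perspective projection to pixel coords
        v = f * y / z + H / 2
        sigma = f * scales[i] / z          # screen-space footprint shrinks with depth
        alpha = opacities[i] * np.exp(
            -((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2)
        )
        alpha = np.clip(alpha, 0.0, 0.999)
        img += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha       # occlusion from splats already drawn
    return img

# Usage: render a small random cloud of colored splats.
rng = np.random.default_rng(0)
n = 50
image = splat_gaussians(
    means=rng.normal([0.0, 0.0, 4.0], 0.5, (n, 3)),
    colors=rng.uniform(0, 1, (n, 3)),
    opacities=rng.uniform(0.3, 0.9, n),
    scales=np.full(n, 0.05),
)
```

In a reconstruction setting like the one the paper describes, the positions, scales, opacities, and colors of such Gaussians would be optimized so that rendered views match the input photo and the generated novel views, with a breed-aware shape prior keeping the recovered anatomy plausible.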