
DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations

Dogyun Park, Sihyeon Kim, Sojin Lee, Hyunwoo J. Kim
Abstract

Recent studies have introduced a new class of generative models for synthesizing implicit neural representations (INRs) that capture arbitrary continuous signals in various domains. These models opened the door for domain-agnostic generative models, but they often fail to achieve high-quality generation. We observe that existing methods generate the weights of neural networks to parameterize INRs and evaluate the networks with fixed positional embeddings (PEs). Arguably, this architecture limits the expressive power of generative models and results in low-quality INR generation. To address this limitation, we propose the Domain-agnostic Latent Diffusion Model for INRs (DDMI), which generates adaptive positional embeddings instead of neural networks' weights. Specifically, we develop a Discrete-to-Continuous space Variational AutoEncoder (D2C-VAE) that seamlessly connects discrete data and continuous signal functions in a shared latent space. Additionally, we introduce a novel conditioning mechanism for evaluating INRs with hierarchically decomposed PEs to further enhance expressive power. Extensive experiments across four modalities, i.e., 2D images, 3D shapes, Neural Radiance Fields, and videos, with seven benchmark datasets, demonstrate the versatility of DDMI and its superior performance compared to existing INR generative models.
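To make the core idea concrete, the sketch below contrasts a fixed Fourier positional embedding with an adaptive one looked up from multi-resolution latent feature grids, then evaluates a shared pointwise MLP decoder on the embeddings. This is a minimal, hypothetical illustration of the abstract's idea (generating PEs rather than network weights); all function names, the nearest-neighbour grid lookup, and the grid shapes are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_pe(coords, n_freqs=4):
    """Conventional fixed PE: sin/cos features at fixed power-of-two frequencies."""
    freqs = 2.0 ** np.arange(n_freqs)                      # (F,)
    angles = coords[..., None] * freqs                     # (N, D, F)
    pe = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return pe.reshape(coords.shape[0], -1)                 # (N, D * 2F)

def adaptive_pe(coords, latent_grids):
    """Adaptive PE (illustrative): look up features at each coordinate from
    latent grids at several resolutions (a stand-in for the hierarchical
    decomposition) and concatenate them."""
    feats = []
    for grid in latent_grids:                              # each grid: (R, R, C)
        r = grid.shape[0]
        idx = np.clip((coords * r).astype(int), 0, r - 1)  # nearest cell, (N, 2)
        feats.append(grid[idx[:, 0], idx[:, 1]])           # (N, C)
    return np.concatenate(feats, axis=-1)

def mlp_decode(pe, w1, w2):
    """Shared decoder MLP applied pointwise to the embeddings."""
    return np.maximum(pe @ w1, 0.0) @ w2

# Query coordinates in [0, 1]^2; in DDMI the latent grids would be produced
# by the D2C-VAE decoder (here they are random placeholders).
coords = rng.random((5, 2))
grids = [rng.standard_normal((r, r, 8)) for r in (4, 8, 16)]  # coarse-to-fine
pe = adaptive_pe(coords, grids)                               # (5, 24)
w1 = rng.standard_normal((pe.shape[1], 32))
w2 = rng.standard_normal((32, 3))
signal = mlp_decode(pe, w1, w2)                               # (5, 3) values
```

The contrast is that `fixed_pe` depends only on the coordinates, while `adaptive_pe` depends on latent content, so a generative model over the latent grids changes what the shared decoder produces at every coordinate.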