
Image Generation on CelebA-HQ 256x256

Metrics

FID
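
FID (Fréchet Inception Distance) measures how close the distribution of generated images is to that of the real CelebA-HQ 256x256 images by comparing the mean and covariance of Inception-v3 features; lower is better. Below is a minimal sketch of how such a score is typically computed, assuming the torchmetrics implementation and uint8 image tensors; the exact evaluation protocol, sample counts, and reference statistics vary between the papers listed in the table.

```python
# Minimal FID sketch (assumption: torchmetrics with the torch-fidelity backend installed).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)  # 2048-d Inception-v3 pool features

# Placeholder stand-ins for real CelebA-HQ images and generated samples:
# uint8 tensors of shape (N, 3, 256, 256) in [0, 255].
real_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)

fid.update(real_images, real=True)   # accumulate statistics of the real images
fid.update(fake_images, real=False)  # accumulate statistics of the generated images

print(float(fid.compute()))          # lower FID = closer to the real distribution
```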

Results

Performance results of various models on this benchmark

| Model name | FID | Paper Title | Repository |
|---|---|---|---|
| LFM | 5.26 | Flow Matching in Latent Space | |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder | - |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models | |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset | |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators | |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models | |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis | |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models | |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching | - |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation | |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space | |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator | - |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization | |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models | - |
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions | |