HyperAI
Image Generation on CelebA-HQ 256×256
Metric: FID

Results: performance of the various models on this benchmark.
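FID (Fréchet Inception Distance) compares the Gaussian statistics (mean and covariance) of Inception-v3 features extracted from real and generated images; lower is better. As a minimal sketch, the core computation can be written as below, applied here to synthetic feature matrices rather than real Inception activations (the function and variable names are illustrative, not from any specific library):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this formula to Inception-v3 feature statistics of real
    vs. generated image sets.
    """
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # numerical noise can introduce tiny imaginary components
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def activation_stats(features):
    """Mean and covariance of an (n_samples, dim) feature matrix."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

# Synthetic stand-ins for Inception features of real vs. generated images.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2048, 64))
fake = rng.normal(0.1, 1.1, size=(2048, 64))

fid_same = frechet_distance(*activation_stats(real), *activation_stats(real))
fid_diff = frechet_distance(*activation_stats(real), *activation_stats(fake))
```

Comparing a distribution with itself yields an FID near zero, while any shift in mean or covariance increases the score, which is why the leaderboard below ranks lower FID as better.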
| Model | FID | Paper Title | Repository |
|---|---|---|---|
| LFM | 5.26 | Flow Matching in Latent Space | - |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder | - |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | - |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models | - |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset | - |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | - |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators | - |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models | - |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | - |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis | - |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models | - |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | - |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching | - |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation | - |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space | - |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator | - |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization | - |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models | - |
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions | - |