Image Generation on CelebA-HQ 256x256
Evaluation metric: FID
Evaluation results: performance of each model on this benchmark.
| Model | FID | Paper Title |
|---|---|---|
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset |
| LFM | 5.26 | Flow Matching in Latent Space |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization |
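FID (Fréchet Inception Distance) compares the distribution of Inception-v3 features extracted from generated samples against those extracted from the real CelebA-HQ images; lower values mean the generated images are statistically closer to the real data. As a rough illustration only, and not the evaluation pipeline used by any of the papers above, here is a minimal sketch using the `FrechetInceptionDistance` metric from `torchmetrics` (assumed installed with its image extras); the random tensors are placeholders for real and generated image batches.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID is computed from Inception-v3 feature statistics of real vs. generated images:
# FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 * (Sigma_r @ Sigma_g)^(1/2))
fid = FrechetInceptionDistance(feature=2048)  # 2048-dim pool3 features (standard choice)

# Placeholder batches: in practice these would be real CelebA-HQ 256x256 images and
# samples drawn from the model under evaluation (uint8, NCHW, values in [0, 255]).
real_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)

fid.update(real_images, real=True)   # accumulate statistics of the real set
fid.update(fake_images, real=False)  # accumulate statistics of the generated set
print(f"FID: {fid.compute():.2f}")   # lower is better
```

Note that the scores in the table come from each paper's own protocol (sample count, reference statistics, preprocessing), so values computed with a different setup are not directly comparable.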