Image Generation on CelebA-HQ 256x256
Evaluation Metric: FID (Fréchet Inception Distance; lower is better)

Evaluation Results
Performance of each of the 19 evaluated models on this benchmark.
| Model Name | FID | Paper Title | Repository |
|---|---|---|---|
| LFM | 5.26 | Flow Matching in Latent Space | |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder | - |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models | |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset | |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators | |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models | |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis | |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models | |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching | - |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation | |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space | |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator | - |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization | |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models | - |
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions | |
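For reference, FID compares the Inception-v3 feature statistics of generated and real image sets: FID = ‖μ_r − μ_g‖² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^(1/2)), where (μ, Σ) are the mean and covariance of the features of each set. The sketch below is a minimal NumPy/SciPy illustration of that formula, assuming the Inception pool3 activations for both image sets have already been extracted; the function name `compute_fid` is our own, and the scores in the table above come from the respective papers, whose evaluation pipelines may differ in sample counts and implementation details.

```python
import numpy as np
from scipy import linalg

def compute_fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet Inception Distance between two feature sets.

    Each array has shape (N, D), where rows are Inception-v3 pool3
    activations (D = 2048) for real and generated images respectively.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)

    # Squared L2 distance between the feature means.
    mean_term = np.sum((mu_r - mu_g) ** 2)

    # Matrix square root of the covariance product; tiny imaginary
    # components from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    trace_term = np.trace(sigma_r + sigma_g - 2.0 * covmean)
    return float(mean_term + trace_term)
```

Note that FID estimates are biased at small sample sizes, so published numbers are typically computed over large sample sets (often 50k images); small gaps between nearby entries can be sensitive to the evaluation setup.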