Image Generation on ImageNet 32x32
Evaluation metric: bpd (bits per dimension); lower is better.
Evaluation results: the performance of each model on this benchmark is listed in the table below, followed by a short sketch of how bpd is derived from a model's negative log-likelihood.
| Model Name | bpd | Paper Title |
| --- | --- | --- |
| NVAE w/ flow | 3.92 | NVAE: A Deep Hierarchical Variational Autoencoder |
| Glow (Kingma and Dhariwal, 2018) | 4.09 | Glow: Generative Flow with Invertible 1x1 Convolutions |
| MintNet | 4.06 | MintNet: Building Invertible Neural Networks with Masked Convolutions |
| Residual Flow | 4.01 | Residual Flows for Invertible Generative Modeling |
| VDM | 3.72 | Variational Diffusion Models |
| SPN (Menick and Kalchbrenner, 2019) | 3.85 | Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling |
| StyleGAN-XL | - | StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets |
| δ-VAE | 3.77 | Preventing Posterior Collapse with delta-VAEs |
| PaGoDA | - | PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher |
| Very Deep VAE | 3.8 | Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images |
| PixelRNN | 3.86 | Pixel Recurrent Neural Networks |
| Hourglass | 3.74 | Hierarchical Transformers Are More Efficient Language Models |
| DDPM++ (VP, NLL) + ST | 3.85 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation |
| i-DODE | 3.43 | Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs |
| MRCNF | 3.77 | Multi-Resolution Continuous Normalizing Flows |
| Flow++ | 3.86 | Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design |
| BIVA (Maaloe et al., 2019) | 3.96 | BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling |
| Reflected Diffusion | 3.74 | Reflected Diffusion Models |
| NDM | 3.55 | Neural Diffusion Models |
| DDPM | 3.89 | Denoising Diffusion Probabilistic Models |
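For context, bpd is the average negative log-likelihood a model assigns to a test image, converted from nats to bits and normalized by the number of dimensions (3 × 32 × 32 = 3072 for ImageNet 32x32); lower is better, so i-DODE's 3.43 bpd is the strongest likelihood result in the table. The snippet below is a minimal sketch of that conversion only; the function name and the example NLL value are illustrative and not taken from any of the listed papers' code.

```python
import math

def bits_per_dim(nll_nats: float, image_shape=(3, 32, 32)) -> float:
    """Convert a negative log-likelihood in nats (summed over every pixel and
    channel of one image) to bits per dimension (bpd), the metric reported in
    the table above. Lower is better."""
    num_dims = math.prod(image_shape)  # 3 * 32 * 32 = 3072 for ImageNet 32x32
    return nll_nats / (num_dims * math.log(2))

# Illustrative example: an average NLL of about 7923.7 nats per 32x32 RGB
# image corresponds to roughly 3.72 bpd, i.e. the value reported for VDM.
print(round(bits_per_dim(7923.7), 2))  # -> 3.72
```

Note that likelihood-based models (VAEs, flows, autoregressive models, diffusion models) report bpd directly, whereas GAN-style models such as StyleGAN-XL and distilled generators such as PaGoDA do not define a tractable likelihood, which is why their bpd entries are blank.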