Image Generation on ImageNet 64x64
Evaluation Metric
Bits per dim
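Bits per dim (bpd) is the average number of bits a likelihood-based model needs to encode each pixel channel of an image; lower is better. It is the model's negative log-likelihood converted from nats to bits and divided by the data dimensionality, which for this benchmark is 64 × 64 × 3 = 12288. Below is a minimal sketch of that conversion, assuming the per-image NLL is available in nats; the function name and example NLL value are illustrative, not from the benchmark itself.

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int = 64 * 64 * 3) -> float:
    """Convert a per-image negative log-likelihood (in nats) to bits per dimension.

    For ImageNet 64x64 RGB images, num_dims = 64 * 64 * 3 = 12288.
    Dividing by ln(2) changes the unit from nats to bits; dividing by
    num_dims normalizes to a per-dimension value.
    """
    return nll_nats / (num_dims * math.log(2))

# Illustrative example: an NLL of ~28,000 nats per image corresponds to
# 28000 / (12288 * ln 2) ≈ 3.29 bits/dim, in the range of the table below.
print(f"{bits_per_dim(28000.0):.3f} bits/dim")
```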
Evaluation Results
Performance results of each model on this benchmark
| Model Name | Bits per dim | Paper Title |
|---|---|---|
| DenseFlow-74-10 | 3.35 (different downsampling) | Densely connected normalizing flows |
| 2-rectified flow++ (NFE=1) | - | Improving the Training of Rectified Flows |
| Performer (6 layers) | 3.719 | Rethinking Attention with Performers |
| GDD-I | - | Diffusion Models Are Innate One-Step Generators |
| Sparse Transformer 59M (strided) | 3.44 | Generating Long Sequences with Sparse Transformers |
| CTM (NFE 1) | - | Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion |
| CD (Diffusion + Distillation, NFE=2) | - | Consistency Models |
| CT (Direct Generation, NFE=1) | - | Consistency Models |
| MaCow (Unf) | 3.75 | MaCow: Masked Convolutional Generative Flow |
| TCM | - | Truncated Consistency Models |
| Very Deep VAE | 3.52 | Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images |
| CDM | - | Cascaded Diffusion Models for High Fidelity Image Generation |
| Combiner-Axial | 3.42 | Combiner: Full Attention Transformer with Sparse Computation Cost |
| GLIDE + CLS-FREE | - | Composing Ensembles of Pre-trained Models via Iterative Consensus |
| SiD | - | Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation |
| RIN | - | Scalable Adaptive Computation for Iterative Generation |
| Logsparse (6 layers) | 4.351 | Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting |
| Efficient-VDVAE | 3.30 (different downsampling) | Efficient-VDVAE: Less is more |
| MRCNF | 3.44 | Multi-Resolution Continuous Normalizing Flows |
| Gated PixelCNN (van den Oord et al., [2016c]) | 3.57 | Conditional Image Generation with PixelCNN Decoders |