Image Generation on ImageNet 64x64
Evaluation Metric: Bits per dim
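Bits per dim is the model's test-set negative log-likelihood expressed in bits and normalized by the number of image dimensions, which for this benchmark is 64 × 64 × 3 = 12,288. Below is a minimal sketch of the standard conversion from a per-image NLL reported in nats; the 29,300-nat input is purely illustrative, not a figure from any paper in the table.

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int = 64 * 64 * 3) -> float:
    """Convert a per-image negative log-likelihood in nats to bits/dim.

    For ImageNet 64x64 RGB images, num_dims = 64 * 64 * 3 = 12288.
    Dividing by log(2) converts nats to bits; dividing by num_dims
    normalizes per dimension, so lower values mean a better density model.
    """
    return nll_nats / (num_dims * math.log(2))

# An illustrative per-image NLL of ~29,300 nats corresponds to ~3.44
# bits/dim, in the range of the likelihood-based models in the table.
print(f"{bits_per_dim(29_300):.2f}")  # -> 3.44
```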
Evaluation Results
Performance of each model on this benchmark, in bits per dimension (lower is better). "NFE" in a model name is the number of function evaluations used to generate a sample; entries marked "different downsampling" were evaluated on a non-standard downsampled version of ImageNet 64x64 and are not directly comparable.
| Model | Bits per dim | Paper Title |
|---|---|---|
| DenseFlow-74-10 | 3.35 (different downsampling) | Densely connected normalizing flows |
| 2-rectified flow++ (NFE=1) | - | Improving the Training of Rectified Flows |
| Performer (6 layers) | 3.719 | Rethinking Attention with Performers |
| GDD-I | - | Diffusion Models Are Innate One-Step Generators |
| Sparse Transformer 59M (strided) | 3.44 | Generating Long Sequences with Sparse Transformers |
| CTM (NFE 1) | - | Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion |
| CD (Diffusion + Distillation, NFE=2) | - | Consistency Models |
| CT (Direct Generation, NFE=1) | - | Consistency Models |
| MaCow (Unf) | 3.75 | MaCow: Masked Convolutional Generative Flow |
| TCM | - | Truncated Consistency Models |
| Very Deep VAE | 3.52 | Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images |
| CDM | - | Cascaded Diffusion Models for High Fidelity Image Generation |
| Combiner-Axial | 3.42 | Combiner: Full Attention Transformer with Sparse Computation Cost |
| GLIDE + CLS-FREE | - | Composing Ensembles of Pre-trained Models via Iterative Consensus |
| SiD | - | Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation |
| RIN | - | Scalable Adaptive Computation for Iterative Generation |
| LogSparse (6 layers) | 4.351 | Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting |
| Efficient-VDVAE | 3.30 (different downsampling) | Efficient-VDVAE: Less is more |
| MRCNF | 3.44 | Multi-Resolution Continuous Normalizing Flows |
| Gated PixelCNN (van den Oord et al., 2016c) | 3.57 | Conditional Image Generation with PixelCNN Decoders |