Image Generation on ImageNet 64x64
Evaluation Metric
Bits per dim
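Bits per dim (bpd) is the model's negative log-likelihood averaged over all pixel dimensions and expressed in base 2, so lower is better. A minimal sketch of the conversion (the function name and example NLL value are illustrative, not taken from any listed paper):

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert a per-image negative log-likelihood (in nats) to bits per
    dimension: average over dimensions and change log base from e to 2."""
    return nll_nats / (num_dims * math.log(2))

# An ImageNet 64x64 RGB image has 64 * 64 * 3 = 12288 dimensions, so a
# hypothetical total NLL of 28672 nats works out to roughly 3.37 bits/dim.
print(bits_per_dim(28672.0, 64 * 64 * 3))  # ~3.366
```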
Evaluation Results
Performance of each model on this benchmark.
Comparison Table
Model Name | Bits per dim |
---|---|
densely-connected-normalizing-flows | 3.35 (different downsampling) |
improving-the-training-of-rectified-flows | - |
rethinking-attention-with-performers | 3.719 |
diffusion-models-are-innate-one-step | - |
190410509 | 3.44 |
consistency-trajectory-models-learning | - |
consistency-models | - |
macow-masked-convolutional-generative-flow | 3.75 |
truncated-consistency-models | - |
very-deep-vaes-generalize-autoregressive-1 | 3.52 |
cascaded-diffusion-models-for-high-fidelity | - |
combiner-full-attention-transformer-with | 3.42 |
composing-ensembles-of-pre-trained-models-via | - |
score-identity-distillation-exponentially | - |
scalable-adaptive-computation-for-iterative | - |
enhancing-the-locality-and-breaking-the | 4.351 |
efficient-vdvae-less-is-more | 3.30 (different downsampling) |
multi-resolution-continuous-normalizing-flows | 3.44 |
conditional-image-generation-with-pixelcnn | 3.57 |
generating-high-fidelity-images-with-subscale | 3.52 |
mali-a-memory-efficient-and-reverse-accurate-1 | 3.71 |
pagoda-progressive-growing-of-a-one-step | - |
flow-matching-for-generative-modeling | 3.31 |
neural-diffusion-models | 3.35 |
direct-discriminative-optimization-your-1 | - |
reformer-the-efficient-transformer-1 | 3.740 |
disco-diff-enhancing-continuous-diffusion | - |
constant-acceleration-flow-1 | - |
consistency-models-made-easy | - |
self-improving-diffusion-models-with | - |
variational-diffusion-models | 3.40 |
axial-attention-in-multidimensional-1 | 4.032 |
learning-stackable-and-skippable-lego-bricks | - |
pixelcnn-models-with-auxiliary-variables-for | 3.57 |
normalizing-flows-are-capable-generative | 2.99 |
efficient-content-based-sparse-attention-with-1 | 3.43 |
residual-flows-for-invertible-generative | 3.757 |
glow-generative-flow-with-invertible-1x1 | 3.81 |
neural-flow-diffusion-models-learnable | 3.2 |
rethinking-attention-with-performers | 3.636 |
stable-consistency-tuning-understanding-and | - |
hierarchical-transformers-are-more-efficient | 3.44 |
improved-denoising-diffusion-probabilistic-1 | 3.53 |
generative-modeling-with-bayesian-sample | 3.22 |
macow-masked-convolutional-generative-flow | 3.69 |
reformer-the-efficient-transformer-1 | 3.710 |
partition-guided-gans | - |
diffusion-models-beat-gans-on-image-synthesis | - |
combiner-full-attention-transformer-with | 3.504 |
flow-improving-flow-based-generative-models | 3.69 |
stylegan-xl-scaling-stylegan-to-large-diverse | - |
clr-gan-improving-gans-stability-and-quality | - |
parallel-multiscale-autoregressive-density | 3.7 |
adversarial-score-identity-distillation | - |