HyperAI

Image Generation on ImageNet 64x64

Metrics

Bits per dim
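Bits per dim is the model's negative log-likelihood averaged over the dimensions of the image (for 64x64 RGB, 64 × 64 × 3 = 12288 dimensions), expressed in base-2; lower is better. A minimal sketch of the conversion from a log-likelihood in nats (the function name and signature are illustrative, not from any specific library):

```python
import math

# A 64x64 RGB image has 64 * 64 * 3 = 12288 dimensions.
NUM_DIMS = 64 * 64 * 3

def bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert a total negative log-likelihood in nats to bits per dimension.

    Dividing by ln(2) changes the base from e to 2; dividing by the
    number of dimensions normalizes per pixel channel.
    """
    return nll_nats / (num_dims * math.log(2))
```

For example, a model whose total NLL on one image equals 12288 × ln 2 nats scores exactly 1.0 bits per dim.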

Results

Performance results of various models on this benchmark

Comparison table
| Model name | Bits per dim |
|---|---|
| densely-connected-normalizing-flows | 3.35 (different downsampling) |
| improving-the-training-of-rectified-flows- | — |
| rethinking-attention-with-performers | 3.719 |
| diffusion-models-are-innate-one-step- | — |
| 190410509 | 3.44 |
| consistency-trajectory-models-learning- | — |
| consistency-models- | — |
| macow-masked-convolutional-generative-flow | 3.75 |
| truncated-consistency-models- | — |
| very-deep-vaes-generalize-autoregressive-1 | 3.52 |
| cascaded-diffusion-models-for-high-fidelity- | — |
| combiner-full-attention-transformer-with | 3.42 |
| composing-ensembles-of-pre-trained-models-via- | — |
| score-identity-distillation-exponentially- | — |
| scalable-adaptive-computation-for-iterative- | — |
| enhancing-the-locality-and-breaking-the | 4.351 |
| efficient-vdvae-less-is-more | 3.30 (different downsampling) |
| multi-resolution-continuous-normalizing-flows | 3.44 |
| conditional-image-generation-with-pixelcnn | 3.57 |
| generating-high-fidelity-images-with-subscale | 3.52 |
| mali-a-memory-efficient-and-reverse-accurate-1 | 3.71 |
| pagoda-progressive-growing-of-a-one-step- | — |
| flow-matching-for-generative-modeling | 3.31 |
| neural-diffusion-models | 3.35 |
| direct-discriminative-optimization-your-1- | — |
| reformer-the-efficient-transformer-1 | 3.740 |
| disco-diff-enhancing-continuous-diffusion- | — |
| constant-acceleration-flow-1- | — |
| consistency-models-made-easy- | — |
| self-improving-diffusion-models-with- | — |
| variational-diffusion-models | 3.40 |
| axial-attention-in-multidimensional-1 | 4.032 |
| learning-stackable-and-skippable-lego-bricks- | — |
| pixelcnn-models-with-auxiliary-variables-for | 3.57 |
| normalizing-flows-are-capable-generative | 2.99 |
| efficient-content-based-sparse-attention-with-1 | 3.43 |
| residual-flows-for-invertible-generative | 3.757 |
| glow-generative-flow-with-invertible-1x1 | 3.81 |
| neural-flow-diffusion-models-learnable | 3.2 |
| rethinking-attention-with-performers | 3.636 |
| stable-consistency-tuning-understanding-and- | — |
| hierarchical-transformers-are-more-efficient | 3.44 |
| improved-denoising-diffusion-probabilistic-1 | 3.53 |
| generative-modeling-with-bayesian-sample | 3.22 |
| macow-masked-convolutional-generative-flow | 3.69 |
| reformer-the-efficient-transformer-1 | 3.710 |
| partition-guided-gans- | — |
| diffusion-models-beat-gans-on-image-synthesis- | — |
| combiner-full-attention-transformer-with | 3.504 |
| flow-improving-flow-based-generative-models | 3.69 |
| stylegan-xl-scaling-stylegan-to-large-diverse- | — |
| clr-gan-improving-gans-stability-and-quality- | — |
| parallel-multiscale-autoregressive-density | 3.7 |
| adversarial-score-identity-distillation- | — |