Image Reconstruction on ImageNet
Metrics
FID (Fréchet Inception Distance; lower is better)
LPIPS (Learned Perceptual Image Patch Similarity; lower is better)
PSNR (Peak Signal-to-Noise Ratio, in dB; higher is better)
SSIM (Structural Similarity Index Measure; higher is better)
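For reference, the sketch below shows one way the per-image metrics (PSNR, SSIM, LPIPS) could be computed between a reference image and its reconstruction. It is a minimal illustration assuming RGB uint8 arrays and the scikit-image and lpips packages; the papers listed in the results table use their own evaluation code, so numbers obtained this way are not guaranteed to match the table.

```python
# Minimal sketch (assumption): per-image reconstruction metrics with
# scikit-image and the `lpips` package. Inputs are RGB uint8 arrays of
# identical shape; this is illustrative, not the papers' evaluation code.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Instantiate the LPIPS network once and reuse it across images.
_lpips_net = lpips.LPIPS(net="alex")

def reconstruction_metrics(reference: np.ndarray, reconstruction: np.ndarray) -> dict:
    """Return PSNR, SSIM and LPIPS between one reference image and its reconstruction."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=255)
    ssim = structural_similarity(reference, reconstruction, data_range=255, channel_axis=-1)

    # LPIPS expects float tensors in [-1, 1] with shape (N, 3, H, W).
    def to_tensor(img: np.ndarray) -> torch.Tensor:
        return torch.from_numpy(img).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    with torch.no_grad():
        lpips_val = _lpips_net(to_tensor(reference), to_tensor(reconstruction)).item()

    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lpips_val}
```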
Results
Performance results of various models on this benchmark.

| Model Name | FID ↓ | LPIPS ↓ | PSNR ↑ | SSIM ↑ | Paper Title | Repository |
|---|---|---|---|---|---|---|
| Taming-VQGAN (16x16) | 3.64 | 0.177 | 19.93 | 0.542 | Taming Transformers for High-Resolution Image Synthesis | - |
| Open-Magvit2 (16x16) | 1.17 | - | 21.90 | - | Open-MAGVIT2: An Open-Source Project Toward Democratizing Auto-regressive Visual Generation | - |
| TiTok-S-128 | 1.71 | - | - | - | An Image is Worth 32 Tokens for Reconstruction and Generation | - |
| VQGAN-LC (16x16) | 2.62 | 0.120 | 23.80 | 0.589 | Scaling the Codebook Size of VQGAN to 100,000 with a Utilization Rate of 99% | - |
| OptVQ (16x16x8) | 0.91 | 0.066 | 27.57 | 0.729 | Preventing Local Pitfalls in Vector Quantization via Optimal Transport | - |
| MaskBit (16x16) | 1.66 | - | - | - | MaskBit: Embedding-free Image Generation via Bit Tokens | - |
| IBQ (16x16) | 1.00 | 0.203 | - | - | Scalable Image Tokenization with Index Backpropagation Quantization | - |
| ViT-VQGAN (16x16) | 1.28 | - | - | - | Vector-quantized Image Modeling with Improved VQGAN | - |
| MaskGIT-VQGAN (16x16) | 2.28 | - | - | - | MaskGIT: Masked Generative Image Transformer | - |
| RQ-VAE (8x8x16) | 1.83 | - | - | - | Autoregressive Image Generation using Residual Quantization | - |
| OptVQ (16x16x4) | 1.00 | 0.076 | 26.59 | 0.717 | Preventing Local Pitfalls in Vector Quantization via Optimal Transport | - |
| Mo-VQGAN (16x16x4) | 1.12 | 0.113 | 22.42 | 0.673 | MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation | - |
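The FID column here is a reconstruction FID (often written rFID): the Fréchet distance between Inception features of the original validation images and those of their reconstructions. A minimal sketch of that computation is below, assuming a hypothetical tokenizer with `encode`/`decode` methods, a standard ImageNet validation dataloader, and torchmetrics' FrechetInceptionDistance; the individual papers ship their own evaluation scripts, and FID values can differ slightly across implementations.

```python
# Minimal sketch (assumption): reconstruction FID over ImageNet validation
# images using torchmetrics. `tokenizer.encode`/`tokenizer.decode` is a
# hypothetical interface standing in for any of the tokenizers in the table.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

@torch.no_grad()
def reconstruction_fid(tokenizer, val_loader, device: str = "cuda") -> float:
    """Accumulate Inception statistics over (original, reconstruction) pairs and return FID."""
    fid = FrechetInceptionDistance(feature=2048, normalize=True).to(device)
    for images, _ in val_loader:                      # images: floats in [0, 1], shape (N, 3, H, W)
        images = images.to(device)
        recons = tokenizer.decode(tokenizer.encode(images))  # hypothetical API
        fid.update(images, real=True)
        fid.update(recons.clamp(0, 1), real=False)
    return fid.compute().item()
```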