GPU Leaderboard
Compare performance metrics for AI workloads across different NVIDIA GPUs
GPU Comparison
Metric | NVIDIA Tesla V100 PCIe 16 GB | NVIDIA Tesla V100 SXM2 16 GB |
---|---|---|
Architecture | NVIDIA Volta | NVIDIA Volta |
Release Date | 2017.06 | 2019.11 |
Memory | 16 GB HBM2 | 16 GB HBM2 |
FP8 Performance | Not supported | Not supported |
FP16 Performance | 28.26 TFLOPS | 31.33 TFLOPS |
FP32 Performance | 14.13 TFLOPS | 15.67 TFLOPS |
FP64 Performance | 7.07 TFLOPS | 7.83 TFLOPS |
CUDA Cores | 5120 | 5120 |
Power Consumption | 250 W | 300 W |
Performance Summary
FP8 Performance: not supported on either card, so no comparison is available
FP16 Performance: the PCIe card is 9.80% slower (NVIDIA Tesla V100 SXM2 16 GB advantage)
FP32 Performance: the PCIe card is 9.83% slower (NVIDIA Tesla V100 SXM2 16 GB advantage)
FP64 Performance: the PCIe card is 9.71% slower (NVIDIA Tesla V100 SXM2 16 GB advantage)
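The percentage figures above are relative differences measured against the faster card, i.e. (SXM2 − PCIe) / SXM2. A minimal Python sketch of the calculation, using the metric values from the comparison table (the function name is illustrative):

```python
def relative_slowdown(slower_tflops: float, faster_tflops: float) -> float:
    """Percent by which the slower card trails the faster one."""
    return (faster_tflops - slower_tflops) / faster_tflops * 100

# Tesla V100 PCIe 16 GB vs. SXM2 16 GB, values from the comparison table above
pairs = {
    "FP16": (28.26, 31.33),
    "FP32": (14.13, 15.67),
    "FP64": (7.07, 7.83),
}
for metric, (pcie, sxm2) in pairs.items():
    print(f"{metric}: {relative_slowdown(pcie, sxm2):.2f}% slower (SXM2 advantage)")
# FP16: 9.80% slower (SXM2 advantage)
# FP32: 9.83% slower (SXM2 advantage)
# FP64: 9.71% slower (SXM2 advantage)
```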
GPU Benchmark Comparison
GPU Model | Release Date | Architecture | CUDA Cores | Memory | TGP | FP8 | FP16 | FP32 | FP64 |
---|---|---|---|---|---|---|---|---|---|
NVIDIA Tesla V100 PCIe 16 GB | 2017.06 | NVIDIA Volta | 5120 | 16 GB HBM2 | 250 W | Not supported | 28.26 TFLOPS | 14.13 TFLOPS | 7.07 TFLOPS |
NVIDIA Tesla V100 SXM2 16 GB | 2019.11 | NVIDIA Volta | 5120 | 16 GB HBM2 | 300 W | Not supported | 31.33 TFLOPS | 15.67 TFLOPS | 7.83 TFLOPS |
NVIDIA Tesla V100 PCIe 32 GB | 2018.03 | NVIDIA Volta | 5120 | 32 GB HBM2 | 250 W | Not supported | 28.26 TFLOPS | 14.13 TFLOPS | 7.07 TFLOPS |
NVIDIA Tesla V100 SXM2 32 GB | 2018.03 | NVIDIA Volta | 5120 | 32 GB HBM2 | 300 W | Not supported | 31.33 TFLOPS | 15.67 TFLOPS | 7.83 TFLOPS |
NVIDIA GeForce RTX 3090 | 2020.09 | NVIDIA Ampere | 10496 | 24 GB GDDR6X | 350 W | Not supported | 35.58 TFLOPS | 35.58 TFLOPS | 0.56 TFLOPS |
NVIDIA GeForce RTX 4090 | 2022.10 | NVIDIA Ada Lovelace | 16384 | 24 GB GDDR6X | 450 W | Not supported | 82.58 TFLOPS | 82.58 TFLOPS | 1.29 TFLOPS |
NVIDIA GeForce RTX 5090 | 2025.01 | NVIDIA Blackwell | 21760 | 32 GB GDDR7 | 575 W | Not supported | 104.8 TFLOPS | 104.8 TFLOPS | 1.64 TFLOPS |
NVIDIA A40 PCIe | 2020.10 | NVIDIA Ampere | 10752 | 48 GB GDDR6 | 300 W | Not supported | 37.42 TFLOPS | 37.42 TFLOPS | 0.58 TFLOPS |
NVIDIA RTX A6000 | 2021.03 | NVIDIA Ampere | 10752 | 48 GB GDDR6 | 300 W | Not supported | 38.71 TFLOPS | 38.71 TFLOPS | 0.60 TFLOPS |
NVIDIA A100 PCIe 40 GB | 2020.06 | NVIDIA Ampere | 6912 | 40 GB HBM2e | 250 W | Not supported | 77.97 TFLOPS | 19.49 TFLOPS | 9.75 TFLOPS |
NVIDIA A100 SXM4 40 GB | 2020.05 | NVIDIA Ampere | 6912 | 40 GB HBM2e | 400 W | Not supported | 77.97 TFLOPS | 19.49 TFLOPS | 9.75 TFLOPS |
NVIDIA A100 PCIe 80 GB | 2021.06 | NVIDIA Ampere | 6912 | 80 GB HBM2e | 300 W | Not supported | 77.97 TFLOPS | 19.49 TFLOPS | 9.75 TFLOPS |
NVIDIA A100 SXM4 80 GB | 2020.11 | NVIDIA Ampere | 6912 | 80 GB HBM2e | 400 W | Not supported | 77.97 TFLOPS | 19.49 TFLOPS | 9.75 TFLOPS |
NVIDIA L40 | 2022.10 | NVIDIA Ada Lovelace | 14080 | 48 GB GDDR6 | 300 W | Not supported | 90.52 TFLOPS | 90.52 TFLOPS | 1.41 TFLOPS |
NVIDIA L40S | 2023.08 | NVIDIA Ada Lovelace | 14080 | 48 GB GDDR6 | 350 W | Not supported | 91.61 TFLOPS | 91.61 TFLOPS | 1.43 TFLOPS |
NVIDIA H100 PCIe 80 GB | 2023.03 | NVIDIA Hopper | 16896 | 80 GB HBM3 | 350 W | 3026 TFLOPS | 204.9 TFLOPS | 51.22 TFLOPS | 25.61 TFLOPS |
NVIDIA H100 SXM5 80 GB | 2023.03 | NVIDIA Hopper | 16896 | 80 GB HBM3 | 700 W | 3958 TFLOPS | 267.6 TFLOPS | 66.91 TFLOPS | 33.45 TFLOPS |
NVIDIA H100 PCIe 96 GB | 2023.03 | NVIDIA Hopper | 16896 | 96 GB HBM3 | 350 W | 3026 TFLOPS | 248.3 TFLOPS | 62.08 TFLOPS | 31.04 TFLOPS |
NVIDIA H100 SXM5 96 GB | 2023.03 | NVIDIA Hopper | 16896 | 96 GB HBM3 | 700 W | 3026 TFLOPS | 248.3 TFLOPS | 66.91 TFLOPS | 31.04 TFLOPS |
NVIDIA H200 NVL | 2024.11 | NVIDIA Hopper | 16896 | 141 GB HBM3e | 600 W | 3026 TFLOPS | 241.3 TFLOPS | 60.32 TFLOPS | 30.16 TFLOPS |
NVIDIA H200 SXM 141 GB | 2024.11 | NVIDIA Hopper | 16896 | 141 GB HBM3e | 700 W | 3958 TFLOPS | 267.6 TFLOPS | 66.91 TFLOPS | 33.45 TFLOPS |
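Because the benchmark table is plain pipe-delimited markdown, it can also be parsed programmatically for ad-hoc comparisons between any two entries. A minimal sketch, assuming the rows are kept as a string in the same format as above (only a two-row excerpt is embedded to keep it self-contained; `parse_rows` is a hypothetical helper, not part of any library):

```python
# Parse a pipe-delimited markdown table into a list of row dicts.
TABLE = """\
GPU Model | Release Date | Architecture | CUDA Cores | Memory | TGP | FP8 | FP16 | FP32 | FP64 |
---|---|---|---|---|---|---|---|---|---|
NVIDIA Tesla V100 PCIe 16 GB | 2017.06 | NVIDIA Volta | 5120 | 16 GB HBM2 | 250 W | Not supported | 28.26 TFLOPS | 14.13 TFLOPS | 7.07 TFLOPS |
NVIDIA H100 SXM5 80 GB | 2023.03 | NVIDIA Hopper | 16896 | 80 GB HBM3 | 700 W | 3958 TFLOPS | 267.6 TFLOPS | 66.91 TFLOPS | 33.45 TFLOPS |
"""

def parse_rows(table: str) -> list[dict[str, str]]:
    lines = [line.rstrip(" |") for line in table.strip().splitlines()]
    header = [cell.strip() for cell in lines[0].split("|")]
    rows = []
    for line in lines[2:]:  # skip the --- separator row
        cells = [cell.strip() for cell in line.split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

rows = parse_rows(TABLE)
v100, h100 = rows
# FP16 throughput ratio: 267.6 / 28.26 is roughly 9.5x
ratio = float(h100["FP16"].split()[0]) / float(v100["FP16"].split()[0])
print(f"H100 SXM5 FP16 is {ratio:.1f}x the V100 PCIe")
```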