Video Quality Assessment on MSU Video Quality
Evaluation Metrics
KLCC (Kendall correlation coefficient)
PLCC (Pearson linear correlation coefficient)
SRCC (Spearman rank correlation coefficient)
Type (FR: full-reference, NR: no-reference)
Evaluation Results
Performance of each model on this benchmark
Comparison Table
Model | KLCC | PLCC | SRCC | Type |
---|---|---|---|---|
blindly-assess-quality-of-in-the-wild-videos | 0.7640 | 0.9270 | 0.9131 | NR |
disentangling-aesthetic-and-technical-effects | 0.7216 | 0.9099 | 0.8871 | NR |
nima-neural-image-assessment | 0.6745 | 0.8784 | 0.8494 | NR |
perceptual-quality-assessment-of-smartphone | 0.7186 | 0.8814 | 0.8822 | NR |
ugc-vqa-benchmarking-blind-video-quality | 0.5414 | 0.7717 | 0.7286 | NR |
fast-vqa-efficient-end-to-end-video-quality | 0.5645 | 0.8087 | 0.7508 | NR |
fast-vqa-efficient-end-to-end-video-quality | 0.6498 | 0.8613 | 0.8308 | NR |
barriers-towards-no-reference-metrics | 0.4215 | 0.6713 | 0.5985 | NR |
unified-quality-assessment-of-in-the-wild | 0.7883 | 0.9431 | 0.9289 | NR |
deep-learning-based-full-reference-and-no | 0.6942 | 0.8851 | 0.8673 | NR |
from-patches-to-pictures-paq-2-piq-mapping | 0.7079 | 0.8549 | 0.8705 | NR |
quality-assessment-of-in-the-wild-videos | 0.7483 | 0.9180 | 0.9049 | NR |
Model 13 | 0.3775 | 0.2898 | 0.5066 | NR |
koniq-10k-an-ecologically-valid-database-for | 0.6608 | 0.8464 | 0.8360 | NR |
musiq-multi-scale-image-quality-transformer | 0.7433 | 0.9068 | 0.9004 | NR |
perceptual-quality-assessment-of-smartphone | 0.7148 | 0.8824 | 0.8794 | NR |
unique-unsupervised-image-quality-estimation | 0.7648 | 0.9238 | 0.9148 | NR |
blind-image-quality-assessment-using-a-deep | 0.7750 | 0.9222 | 0.9220 | NR |
norm-in-norm-loss-with-faster-convergence-and | 0.7589 | 0.9106 | 0.9104 | NR |
deep-learning-based-full-reference-and-no | 0.7037 | 0.8933 | 0.8742 | NR |
perceptual-quality-assessment-of-smartphone | 0.7106 | 0.8855 | 0.8799 | NR |