Video Quality Assessment on the MSU SR-QA Dataset

Performance results of various models on this benchmark. Each model is scored by three correlation coefficients between its predictions and subjective quality scores: KLCC (Kendall), PLCC (Pearson), and SROCC (Spearman). The Type column indicates whether a metric is full-reference (FR) or no-reference (NR).

Comparison Table
Model Name | KLCC | PLCC | SROCC | Type |
---|---|---|---|---|
Model 1 | 0.16365 | 0.20138 | 0.21450 | FR |
exploring-clip-for-assessing-the-look-and | 0.69774 | 0.71808 | 0.56875 | NR |
musiq-multi-scale-image-quality-transformer | 0.51897 | 0.59151 | 0.64589 | NR |
fsim-a-feature-similarity-index-for-image | 0.26942 | 0.35083 | 0.34996 | FR |
Model 5 | 0.12067 | 0.09428 | 0.16441 | FR |
pieapp-perceptual-image-error-assessment | 0.61945 | 0.75743 | 0.75215 | FR |
norm-in-norm-loss-with-faster-convergence-and | 0.52172 | 0.62204 | 0.64382 | NR |
multiscale-structural-similarity-for-image | 0.07821 | 0.16035 | 0.11017 | FR |
topiq-a-top-down-approach-from-semantics-to | 0.40663 | 0.51061 | 0.51687 | NR |
blind-image-quality-assessment-using-a-deep | 0.55139 | 0.63971 | 0.68621 | NR |
q-align-teaching-lmms-for-visual-scoring-via | 0.61677 | 0.74116 | 0.75088 | NR |
the-unreasonable-effectiveness-of-deep | 0.43158 | 0.52385 | 0.54461 | FR |
Model 13 | 0.09998 | 0.13840 | 0.12914 | FR |
topiq-a-top-down-approach-from-semantics-to | 0.42811 | 0.57564 | 0.55568 | FR |
the-unreasonable-effectiveness-of-deep | 0.41471 | 0.52820 | 0.52868 | FR |
erqa-edge-restoration-quality-assessment-for | 0.47785 | 0.60188 | 0.59345 | FR |
Model 17 | 0.47674 | 0.62311 | 0.60468 | FR |
Model 18 | 0.32283 | 0.40073 | 0.43219 | FR |
image-quality-assessment-unifying-structure | 0.42320 | 0.55042 | 0.53346 | FR |
q-align-teaching-lmms-for-visual-scoring-via | 0.42211 | 0.50055 | 0.51521 | NR |
Model 21 | 0.34004 | 0.41892 | 0.44064 | NR |
blindly-assess-image-quality-in-the-wild | 0.48466 | 0.55211 | 0.59883 | NR |
Model 23 | 0.32331 | 0.39744 | 0.43296 | FR |
Model 24 | 0.13551 | 0.19672 | 0.17889 | FR |
unified-quality-assessment-of-in-the-wild | 0.48406 | 0.61821 | 0.60193 | NR |
quality-assessment-of-in-the-wild-videos | 0.43634 | 0.54407 | 0.53652 | NR |
musiq-multi-scale-image-quality-transformer | 0.44669 | 0.52404 | 0.56152 | NR |
Model 28 | 0.24254 | 0.33169 | 0.33167 | NR |
musiq-multi-scale-image-quality-transformer | 0.52673 | 0.60216 | 0.64927 | NR |
multiscale-structural-similarity-for-image | 0.18174 | 0.21800 | 0.24422 | FR |
topiq-a-top-down-approach-from-semantics-to | 0.53140 | 0.60905 | 0.64923 | NR |
exploring-clip-for-assessing-the-look-and | 0.38794 | 0.50379 | 0.49881 | NR |
no-reference-image-quality-assessment-via-1 | 0.39398 | 0.50005 | 0.48882 | NR |
topiq-a-top-down-approach-from-semantics-to | 0.28473 | 0.34000 | 0.36204 | NR |
topiq-a-top-down-approach-from-semantics-to | 0.48428 | 0.58949 | 0.59564 | NR |
multiscale-structural-similarity-for-image | 0.16578 | 0.30014 | 0.21604 | FR |
the-2018-pirm-challenge-on-perceptual-image | 0.39101 | 0.53178 | 0.52319 | NR |
musiq-multi-scale-image-quality-transformer | 0.55312 | 0.66531 | 0.67746 | NR |
from-patches-to-pictures-paq-2-piq-mapping | 0.57753 | 0.70988 | 0.71167 | NR |
vila-learning-image-aesthetics-from-user | 0.26180 | 0.28846 | 0.33728 | NR |
exploring-clip-for-assessing-the-look-and | 0.49417 | 0.58944 | 0.60808 | NR |
no-reference-image-quality-assessment-via-1 | 0.49004 | 0.56226 | 0.62578 | NR |
no-reference-image-quality-assessment-in-the | 0.24803 | 0.31143 | 0.32327 | NR |
image-quality-assessment-from-error | 0.17175 | 0.20670 | 0.22468 | FR |
Model 45 | 0.08263 | 0.10931 | 0.10733 | FR |
topiq-a-top-down-approach-from-semantics-to | 0.46217 | 0.57955 | 0.57341 | FR |
maniqa-multi-dimension-attention-network-for | 0.54744 | 0.62733 | 0.66613 | NR |
exploring-clip-for-assessing-the-look-and | 0.52628 | 0.65154 | 0.65713 | NR |
Model 49 | 0.11040 | 0.14638 | 0.14277 | FR |
Model 50 | 0.26485 | 0.36944 | 0.34862 | FR |
topiq-a-top-down-approach-from-semantics-to | 0.50670 | 0.57674 | 0.62715 | NR |
multiscale-structural-similarity-for-image | 0.17468 | 0.20935 | 0.23108 | FR |
shift-tolerant-perceptual-similarity-metric-1 | 0.42897 | 0.54740 | 0.53473 | FR |
learning-a-no-reference-quality-metric-for | 0.52301 | 0.65357 | 0.67362 | NR |
q-align-teaching-lmms-for-visual-scoring-via | 0.58634 | 0.71121 | 0.71812 | NR |
shift-tolerant-perceptual-similarity-metric-1 | 0.45898 | 0.56431 | 0.57336 | FR |
nima-neural-image-assessment | 0.20377 | 0.26550 | 0.25887 | NR |
no-reference-image-quality-assessment-via-1 | 0.48901 | 0.56277 | 0.62496 | NR |
topiq-a-top-down-approach-from-semantics-to | 0.26774 | 0.33940 | 0.34092 | NR |
locally-adaptive-structure-and-texture | 0.41261 | 0.53289 | 0.51717 | FR |
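For reference, the three correlation measures reported in the table can be sketched in pure Python. This is a minimal illustration of the definitions (no tie handling in the rank functions); the `pred` and `mos` arrays below are hypothetical example scores, not values from the benchmark.

```python
def _ranks(xs):
    """Assign ranks 1..n by value (no tie handling, for illustration only)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def plcc(x, y):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    return plcc(_ranks(x), _ranks(y))

def klcc(x, y):
    """Kendall rank correlation (tau-a): normalized concordant minus
    discordant pair count over all n*(n-1)/2 pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (x[i] - x[j]) * (y[i] - y[j])
            s += 1 if d > 0 else (-1 if d < 0 else 0)
    return 2.0 * s / (n * (n - 1))

# Hypothetical metric outputs vs. subjective (MOS) scores:
pred = [0.61, 0.45, 0.82, 0.30, 0.75]
mos = [3.2, 2.8, 4.1, 1.9, 3.9]
```

Because rank-based measures ignore the scale of predictions, SROCC and KLCC reward monotonic agreement, while PLCC additionally depends on how linear the relationship is; this is why the three columns in the table can rank models differently.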