Out-of-Distribution Detection on CIFAR-10 vs
Metrics
AUROC (area under the ROC curve; higher is better)
FPR95 (false positive rate at 95% true positive rate; lower is better)
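As a minimal sketch of how these two metrics are computed from raw detector scores (the function names, label convention, and the 5%-quantile threshold rule are illustrative assumptions, not taken from this benchmark page):

```python
# Sketch: AUROC and FPR95 from OOD-detection scores.
# Convention assumed here: higher score = more in-distribution (ID);
# labels: 1 = ID sample, 0 = OOD sample.

def auroc(scores, labels):
    # Rank-based AUROC (Mann-Whitney U): the probability that a random
    # ID sample scores higher than a random OOD sample (ties count 0.5).
    id_scores = [s for s, l in zip(scores, labels) if l == 1]
    ood_scores = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(
        1.0 if i > o else 0.5 if i == o else 0.0
        for i in id_scores for o in ood_scores
    )
    return wins / (len(id_scores) * len(ood_scores))

def fpr_at_95_tpr(scores, labels):
    # FPR95: fraction of OOD samples still accepted when the threshold
    # is set so that at least 95% of ID samples are accepted.
    id_scores = sorted(s for s, l in zip(scores, labels) if l == 1)
    ood_scores = [s for s, l in zip(scores, labels) if l == 0]
    idx = int(0.05 * len(id_scores))      # 5%-quantile ID score
    threshold = id_scores[idx]            # >=95% of ID scores lie above
    return sum(s >= threshold for s in ood_scores) / len(ood_scores)
```

With perfectly separated scores, `auroc` returns 1.0 and `fpr_at_95_tpr` returns 0.0; a completely uninformative detector gives AUROC around 0.5.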
Results
Performance results of various models on this benchmark.
Comparison table
Model name | AUROC | FPR95 |
---|---|---|
boosting-out-of-distribution-detection-with-1 | 97.12 | 18.29 |
exploring-the-limits-of-out-of-distribution | 98.52 | - |
exploring-the-limits-of-out-of-distribution | 98.42 | - |
deep-hybrid-models-for-out-of-distribution | 100 | - |
using-self-supervised-learning-can-improve | 90.9 | - |
exploring-the-limits-of-out-of-distribution | 97.85 | - |
forte-finding-outliers-with-representation | 97.63 ± 0.15 | - |
learn-what-you-can-t-learn-regularized-1 | 95.1 | - |
a-baseline-for-detecting-misclassified-and | 87.9 | - |
simultaneous-classification-and-novelty | 94.9 | - |
hybrid-models-for-open-set-recognition | 95.1 | - |
out-of-distribution-detection-using-outlier | 91.95 | - |
detecting-out-of-distribution-examples-with | 79.7 | - |
deep-anomaly-detection-with-outlier-exposure | 93.3 | - |