Image Retrieval on Flickr30K-CN
Metrics
R@1
R@10
R@5
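R@K (Recall at K) is the percentage of text queries for which the ground-truth image appears among the top K retrieved candidates. The sketch below shows how such a metric is typically computed from a text-to-image similarity matrix; the function name and the one-ground-truth-image-per-query assumption are illustrative only and are not taken from any of the listed papers' code (Flickr30K-CN actually pairs each image with several captions).

```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """Recall@K for text-to-image retrieval.

    similarity: (num_texts, num_images) matrix of text-image scores.
    Row i is assumed to have its ground-truth image at column i
    (one matching image per text query -- an illustrative simplification).
    """
    # Indices of the K highest-scoring images for each text query.
    top_k = np.argsort(-similarity, axis=1)[:, :k]
    # A query is a hit if its ground-truth column index is among the top K.
    ground_truth = np.arange(similarity.shape[0])[:, None]
    hits = (top_k == ground_truth).any(axis=1)
    return float(hits.mean()) * 100.0

# Example with random scores for 1,000 text queries over 1,000 images.
sim = np.random.rand(1000, 1000)
for k in (1, 5, 10):
    print(f"R@{k}: {recall_at_k(sim, k):.1f}")
```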
Results
Performance results of various models on this benchmark
| Model name | R@1 | R@10 | R@5 | Paper Title | Repository |
|---|---|---|---|---|---|
| R2D2 (ViT-L/14) | 84.4 | 98.4 | 96.7 | CCMB: A Large-scale Chinese Cross-modal Benchmark | – |
| InternVL-G-FT | 85.9 | 97.1 | 98.7 | InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | – |
| CN-CLIP (RN50) | 66.7 | 94.1 | 89.4 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | – |
| CN-CLIP (ViT-L/14@336px) | 84.4 | 98.7 | 97.1 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | – |
| CN-CLIP (ViT-H/14) | 83.8 | 98.6 | 96.9 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | – |
| CN-CLIP (ViT-B/16) | 79.1 | 97.4 | 94.8 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | – |
| Wukong (ViT-B/32) | 67.6 | 94.2 | 89.6 | Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark | – |
| R2D2 (ViT-B) | 78.3 | 97.0 | 94.6 | CCMB: A Large-scale Chinese Cross-modal Benchmark | – |
| Wukong (ViT-L/14) | 77.4 | 97.0 | 94.5 | Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark | – |
| InternVL-C-FT | 85.2 | 97.0 | 98.5 | InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | – |
| CN-CLIP (ViT-L/14) | 82.7 | 98.6 | 96.7 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese | – |