HyperAI초신경
Image Retrieval on Flickr30K-CN
Evaluation Metrics
R@1, R@5, R@10
Evaluation Results
Performance of each model on this benchmark:
| Model | R@1 | R@5 | R@10 | Paper Title |
|---|---|---|---|---|
| R2D2 (ViT-L/14) | 84.4 | 96.7 | 98.4 | CCMB: A Large-scale Chinese Cross-modal Benchmark |
| InternVL-G-FT | 85.9 | 97.1 | 98.7 | InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks |
| CN-CLIP (RN50) | 66.7 | 89.4 | 94.1 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese |
| CN-CLIP (ViT-L/14@336px) | 84.4 | 97.1 | 98.7 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese |
| CN-CLIP (ViT-H/14) | 83.8 | 96.9 | 98.6 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese |
| CN-CLIP (ViT-B/16) | 79.1 | 94.8 | 97.4 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese |
| Wukong (ViT-B/32) | 67.6 | 89.6 | 94.2 | Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark |
| R2D2 (ViT-B) | 78.3 | 94.6 | 97.0 | CCMB: A Large-scale Chinese Cross-modal Benchmark |
| Wukong (ViT-L/14) | 77.4 | 94.5 | 97.0 | Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark |
| InternVL-C-FT | 85.2 | 97.0 | 98.5 | InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks |
| CN-CLIP (ViT-L/14) | 82.7 | 96.7 | 98.6 | Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese |
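The R@K (Recall at K) metric reported above is the fraction of text queries whose ground-truth image appears among the top-K retrieved gallery images. A minimal sketch of the computation (the score matrix, function name, and toy values below are illustrative, not taken from any of the listed papers):

```python
def recall_at_k(scores, gt_indices, k):
    """Fraction of queries whose ground-truth gallery item is ranked in the top k.

    scores: list of rows, one per query; scores[q][g] is the similarity
            between query q and gallery item g (higher = better match).
    gt_indices: ground-truth gallery index for each query.
    """
    hits = 0
    for row, gt in zip(scores, gt_indices):
        # Gallery indices ranked by descending similarity score
        ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
        if gt in ranked[:k]:
            hits += 1
    return hits / len(scores)

# Toy example: 3 text queries against a 4-image gallery
sim = [
    [0.9, 0.1, 0.2, 0.3],  # query 0: image 0 ranked first (correct)
    [0.2, 0.3, 0.8, 0.1],  # query 1: image 2 ranked first (correct)
    [0.5, 0.6, 0.1, 0.4],  # query 2: image 1 first, correct image 0 second
]
gt = [0, 2, 0]             # ground-truth image index for each query

print(recall_at_k(sim, gt, 1))  # query 2 misses at k=1
print(recall_at_k(sim, gt, 5))  # all queries recovered within the top 5
```

By construction R@1 ≤ R@5 ≤ R@10 for any fixed ranking, which is why the three columns in the table increase monotonically across each row.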