HyperAI
Few-Shot Learning
Few Shot Learning On MedConceptsQA
Evaluation Metric
Accuracy

Evaluation Results
Performance results of each model on this benchmark.

| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| gpt-4-0125-preview | 61.911 | GPT-4 Technical Report |
| gpt-3.5-turbo | 41.476 | Language Models are Few-Shot Learners |
| meta-llama/Meta-Llama-3-8B-Instruct | 25.653 | LLaMA: Open and Efficient Foundation Language Models |
| johnsnowlabs/JSL-MedMNX-7B | 25.627 | MedConceptsQA: Open Source Medical Concepts QA Benchmark |
| yikuan8/Clinical-Longformer | 25.547 | Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences |
| dmis-lab/biobert-v1.1 | 25.458 | BioBERT: a pre-trained biomedical language representation model for biomedical text mining |
| epfl-llm/meditron-70b | 25.262 | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models |
| HuggingFaceH4/zephyr-7b-beta | 25.058 | Zephyr: Direct Distillation of LM Alignment |
| BioMistral/BioMistral-7B-DARE | 25.058 | BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains |
| dmis-lab/meerkat-7b-v1.0 | 24.942 | Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks |
| PharMolix/BioMedGPT-LM-7B | 24.924 | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine |
| epfl-llm/meditron-7b | 23.787 | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models |
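The Accuracy scores above are percentages of exactly-matched multiple-choice answers. A minimal sketch of how such a score could be computed from model predictions (the function and the sample data are illustrative, not taken from the benchmark's official evaluation code):

```python
def accuracy(predictions, gold):
    """Percentage of predictions that exactly match the gold answers."""
    if not gold or len(predictions) != len(gold):
        raise ValueError("predictions and gold must be equal-length, non-empty lists")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Illustrative 4-option multiple-choice answers (A-D)
preds = ["A", "C", "B", "D"]
gold = ["A", "B", "B", "D"]
print(f"{accuracy(preds, gold):.3f}")  # prints 75.000
```

Note that with four answer options, a model guessing uniformly at random scores about 25%, which is roughly where most of the open models in the table sit.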