Few-Shot Learning on MedConceptsQA
Metrics: Accuracy

Results
Performance results of various models on this benchmark.
| Model Name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| gpt-4-0125-preview | 61.911 | GPT-4 Technical Report |
| gpt-3.5-turbo | 41.476 | Language Models are Few-Shot Learners |
| meta-llama/Meta-Llama-3-8B-Instruct | 25.653 | LLaMA: Open and Efficient Foundation Language Models |
| johnsnowlabs/JSL-MedMNX-7B | 25.627 | MedConceptsQA: Open Source Medical Concepts QA Benchmark |
| yikuan8/Clinical-Longformer | 25.547 | Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences |
| dmis-lab/biobert-v1.1 | 25.458 | BioBERT: a pre-trained biomedical language representation model for biomedical text mining |
| epfl-llm/meditron-70b | 25.262 | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models |
| HuggingFaceH4/zephyr-7b-beta | 25.058 | Zephyr: Direct Distillation of LM Alignment |
| BioMistral/BioMistral-7B-DARE | 25.058 | BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains |
| dmis-lab/meerkat-7b-v1.0 | 24.942 | Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks |
| PharMolix/BioMedGPT-LM-7B | 24.924 | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine |
| epfl-llm/meditron-7b | 23.787 | MEDITRON-70B: Scaling Medical Pretraining for Large Language Models |
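
MedConceptsQA is a multiple-choice benchmark, so the Accuracy metric above is the percentage of questions for which the model's selected answer option matches the gold label; random guessing over four options yields roughly 25%, which is why most of the open domain-specific models cluster near that value. The sketch below illustrates how such a few-shot evaluation loop is typically structured. It is a minimal illustration, not the benchmark's official harness: the example fields, prompt format, and the `generate_answer` callable are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MCQExample:
    question: str
    options: List[str]   # e.g. ["A. ...", "B. ...", "C. ...", "D. ..."]
    answer: str          # gold option letter, e.g. "B"

def build_few_shot_prompt(shots: List[MCQExample], query: MCQExample) -> str:
    """Concatenate a handful of solved examples followed by the unanswered query."""
    parts = []
    for ex in shots:
        parts.append(
            f"Question: {ex.question}\n" + "\n".join(ex.options) + f"\nAnswer: {ex.answer}"
        )
    parts.append(
        f"Question: {query.question}\n" + "\n".join(query.options) + "\nAnswer:"
    )
    return "\n\n".join(parts)

def few_shot_accuracy(
    dataset: List[MCQExample],
    shots: List[MCQExample],
    generate_answer: Callable[[str], str],  # wraps the model under test (assumed interface)
) -> float:
    """Accuracy = exact match between the predicted option letter and the gold label, in percent."""
    correct = 0
    for example in dataset:
        prompt = build_few_shot_prompt(shots, example)
        predicted = generate_answer(prompt).strip()[:1].upper()  # first character as option letter
        correct += int(predicted == example.answer.upper())
    return 100.0 * correct / len(dataset)
```

Under this scoring scheme, a model that ignores the prompt and always emits the same letter would land near 25% on a balanced four-option benchmark, consistent with the clustering seen in the table.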