Common Sense Reasoning On ARC Easy
Metrics
Accuracy

Results
Performance results of various models on this benchmark (top 20 of 47 entries shown); a sketch of how ARC-Easy accuracy is typically computed follows the table.
| Model name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| ST-MoE-32B 269B (fine-tuned) | 95.2 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| LLaMA 3 8B+MoSLoRA (fine-tuned) | 90.5 | Mixture-of-Subspaces in Low-Rank Adaptation |
| PaLM 2-L (1-shot) | 89.7 | PaLM 2 Technical Report |
| PaLM 2-M (1-shot) | 88.0 | PaLM 2 Technical Report |
| LLaMA-3 8B + MixLoRA | 86.5 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts |
| Camelidae-8×34B | 86.2 | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks |
| PaLM 2-S (1-shot) | 85.6 | PaLM 2 Technical Report |
| LLaMA 65B + CFG (0-shot) | 84.2 | Stay on topic with Classifier-Free Guidance |
| GAL 120B (0-shot) | 83.8 | Galactica: A Large Language Model for Science |
| LLaMA-2 13B + MixLoRA | 83.5 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts |
| LLaMA 30B + CFG (0-shot) | 83.2 | Stay on topic with Classifier-Free Guidance |
| Mixtral 8x7B (0-shot) | 83.1 | Mixtral of Experts |
| FLAN 137B (few-shot, k=14) | 80.7 | Finetuned Language Models Are Zero-Shot Learners |
| Mistral 7B (0-shot) | 80.5 | Mixtral of Experts |
| LLaMA 33B (0-shot) | 80.0 | LLaMA: Open and Efficient Foundation Language Models |
| Mistral 7B (0-shot) | 80.0 | Mistral 7B |
| FLAN 137B (0-shot) | 79.6 | Finetuned Language Models Are Zero-Shot Learners |
| LLaMA 13B + CFG (0-shot) | 79.1 | Stay on topic with Classifier-Free Guidance |
| LLaMA 65B (0-shot) | 78.9 | LLaMA: Open and Efficient Foundation Language Models |
| LLaMA-2 7B + MixLoRA | 77.7 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts |
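
The entries above use different evaluation setups (fine-tuned, 0-shot, 1-shot, few-shot), so scores are not strictly comparable. As a rough illustration of how ARC-Easy accuracy is commonly computed, below is a minimal zero-shot scoring sketch using the Hugging Face `datasets` and `transformers` libraries: each answer choice is scored by the model's summed token log-likelihood, and the highest-scoring choice counts as the prediction. The model name (`gpt2`) and the prompt template are placeholder assumptions, not taken from any paper in the table.

```python
# Minimal sketch of the common log-likelihood protocol for ARC-Easy accuracy.
# ASSUMPTIONS: "gpt2" is a placeholder model and the prompt template is
# illustrative; no paper in the table above specifies this exact setup.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def choice_loglikelihood(question: str, choice: str) -> float:
    """Summed log-probability of the answer tokens, conditioned on the question."""
    prompt = f"Question: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    # Re-tokenizing the concatenation assumes a stable prompt/answer token
    # boundary, a simplification that usually holds for BPE tokenizers.
    full_ids = tokenizer(prompt + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    answer_tokens = full_ids[0, prompt_len:]
    return log_probs[prompt_len - 1:].gather(1, answer_tokens[:, None]).sum().item()

data = load_dataset("allenai/ai2_arc", "ARC-Easy", split="validation")
correct = 0
for ex in data:
    scores = [choice_loglikelihood(ex["question"], t) for t in ex["choices"]["text"]]
    pred = ex["choices"]["label"][scores.index(max(scores))]
    correct += int(pred == ex["answerKey"])
print(f"accuracy = {100 * correct / len(data):.1f}%")  # same unit as the table
```

Variants of this protocol (length-normalized scores, scoring answer letters instead of full answer text, added few-shot examples) can noticeably shift the resulting accuracy, which is one reason the same model can appear with different scores in different papers, as with the two Mistral 7B entries above.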