HyperAI

Natural Language Understanding on LexGLUE

Metrics

CaseHOLD
ECtHR Task A
ECtHR Task B
EUR-LEX
LEDGAR
SCOTUS
UNFAIR-ToS
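The paired scores in the results below follow the LexGLUE convention of reporting micro-F1 (μ-F1) and macro-F1 (m-F1) for each task. As a minimal sketch of how such a pair is computed, assuming toy multi-label data (not actual benchmark output) and scikit-learn:

```python
# Sketch of the μ-F1 / m-F1 metric pair used on this leaderboard.
# The labels below are made-up toy data, not LexGLUE results.
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label matrix for a task like ECtHR or UNFAIR-ToS:
# rows = documents, columns = candidate labels (1 = label applies).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])

micro_f1 = f1_score(y_true, y_pred, average="micro")  # pools all label decisions
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-label F1

print(f"{100 * micro_f1:.1f} / {100 * macro_f1:.1f}")  # → 80.0 / 77.8
```

Micro-F1 favours frequent labels because every individual label decision counts equally, while macro-F1 weights rare labels as heavily as common ones, which is why m-F1 is consistently the lower number in each cell.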

Results

Performance results of various models on this benchmark. Each cell reports μ-F1 / m-F1; CaseHOLD, a multiple-choice task, reports a single F1 score.

| Model Name | CaseHOLD | ECtHR Task A | ECtHR Task B | EUR-LEX | LEDGAR | SCOTUS | UNFAIR-ToS | Paper Title | Repository |
|---|---|---|---|---|---|---|---|---|---|
| CaseLaw-BERT | 75.6 | 71.2 / 64.2 | 88.0 / 77.5 | 71.0 / 55.9 | 88.0 / 82.3 | 76.4 / 66.2 | 88.3 / 81.0 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| RoBERTa | 71.7 | 69.5 / 60.7 | 87.2 / 77.3 | 71.8 / 57.5 | 87.9 / 82.1 | 70.8 / 61.2 | 87.7 / 81.5 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| BERT | 70.7 | 71.4 / 64.0 | 87.6 / 77.8 | 71.6 / 55.6 | 87.7 / 82.2 | 70.5 / 60.9 | 87.5 / 81.0 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| Optimised SVM Baseline | - | 66.3 / 55.0 | 76.0 / 65.4 | 65.7 / 49.0 | 88.0 / 82.6 | 74.4 / 64.5 | - | The Unreasonable Effectiveness of the Baseline: Discussing SVMs in Legal Text Classification | - |
| DeBERTa | 72.1 | 69.1 / 61.2 | 87.4 / 77.3 | 72.3 / 57.2 | 87.9 / 82.0 | 70.0 / 60.0 | 87.2 / 78.8 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| Longformer | 72.0 | 69.6 / 62.4 | 88.0 / 77.8 | 71.9 / 56.7 | 87.7 / 82.3 | 72.2 / 62.5 | 87.7 / 80.1 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| Legal-BERT | 75.1 | 71.2 / 64.6 | 88.0 / 77.2 | 72.2 / 56.2 | 88.1 / 82.7 | 76.2 / 65.8 | 88.6 / 82.3 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |
| BigBird | 70.4 | 70.5 / 63.8 | 88.1 / 76.6 | 71.8 / 56.6 | 87.7 / 82.1 | 71.7 / 61.4 | 87.7 / 80.2 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English | |