HyperAI (HyperAI초신경)
Question Answering on bAbI
Evaluation Metric: Mean Error Rate

Evaluation Results

Performance of each model on this benchmark (14 models; "-" marks a value not reported on this page):

| Model | Mean Error Rate | Paper Title | Repository |
|---|---|---|---|
| ReMO | 1.2% | Finding ReMO (Related Memory Object): A Simple Neural Architecture for Text based Reasoning | - |
| DMN+ | - | Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes | - |
| EntNet | 9.7% | Tracking the World State with Recurrent Entity Networks | - |
| RR | 0.46% | Recurrent Relational Networks | - |
| SDNC | 6.4% | Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes | - |
| STM | 0.39% | Self-Attentive Associative Memory | - |
| NUTM | 5.6% | Neural Stored-program Memory | - |
| RUM | - | Rotational Unit of Memory | - |
| GORU | - | Gated Orthogonal Recurrent Units: On Learning to Forget | - |
| End-To-End Memory Networks | 7.5% | End-To-End Memory Networks | - |
| H-Mem | - | H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks | - |
| ours | - | Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity | - |
| LSTM | 28.7% | Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes | - |
| QRN | 0.3% | Query-Reduction Networks for Question Answering | - |