HyperAI
Sequential Image Classification
Sequential Image Classification On Sequential
Metrics: Permuted Accuracy
Results: performance of the various models on this benchmark
| Model Name | Permuted Accuracy | Paper Title | Repository |
|---|---|---|---|
| UnICORNN | 98.4% | UnICORNN: A recurrent model for learning very long time dependencies | - |
| CKCNN (1M) | 98.54% | CKConv: Continuous Kernel Convolution For Sequential Data | - |
| Dilated GRU | 94.6% | Dilated Recurrent Neural Networks | - |
| FlexTCN-6 | - | FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes | - |
| LSSL | 98.76% | Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers | - |
| LEM | 96.6% | Long Expressive Memory for Sequence Modeling | - |
| STAR | - | Gating Revisited: Deep Multi-layer RNNs That Can Be Trained | - |
| GAM-RHN-1 | 96.8% | Recurrent Highway Networks with Grouped Auxiliary Memory | - |
| CKCNN (100k) | 98% | CKConv: Continuous Kernel Convolution For Sequential Data | - |
| BN LSTM | 95.4% | Recurrent Batch Normalization | - |
| Sparse Combo Net | 96.94% | RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks | - |
| Temporal Convolutional Network | 97.2% | An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling | - |
| EGRU | 95.1% | Efficient recurrent architectures through activity sparsity and sparse back-propagation through time | - |
| FlexTCN-4 | 98.72% | FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes | - |
| HiPPO-LegS | 98.3% | HiPPO: Recurrent Memory with Optimal Polynomial Projections | - |
| Dense IndRNN | 97.2% | Deep Independently Recurrent Neural Network (IndRNN) | - |
| coRNN | 97.34% | Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies | - |
| Adaptive-saturated RNN | 96.96% | Adaptive-saturated RNN: Remember more with less instability | - |
| LMU | 97.2% | Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks | - |
| ODE-LSTM | 97.83% | Learning Long-Term Dependencies in Irregularly-Sampled Time Series | - |
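For context on what "Permuted Accuracy" measures: in the permuted sequential MNIST task, each 28×28 image is flattened to a 784-step sequence of single pixels and reordered by one fixed random permutation shared across the whole dataset, which destroys local spatial structure and forces the model to capture long-range dependencies. A minimal sketch of that input construction (the seed and function name are illustrative assumptions, not part of any listed paper):

```python
import numpy as np

# One fixed permutation, shared by the train and test splits.
rng = np.random.default_rng(0)  # seed chosen only for reproducibility here
permutation = rng.permutation(28 * 28)

def to_permuted_sequence(image):
    """Turn a (28, 28) image into a (784, 1) permuted pixel sequence."""
    pixels = image.reshape(-1)            # flatten: (784,)
    return pixels[permutation][:, None]   # (784, 1): one pixel per time step

# Example: a dummy "image" fed through the pipeline.
image = rng.random((28, 28))
seq = to_permuted_sequence(image)
assert seq.shape == (784, 1)
```

A sequence model (any of the RNN/CNN variants in the table) then reads the 784 steps in order and predicts the digit class; accuracy on this permuted variant is the number reported in the second column.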