Continual Learning
Continual Learning, also known as Incremental Learning or Lifelong Learning, refers to training methods in which a model learns a sequence of tasks while retaining the knowledge acquired from earlier ones. In the typical protocol, data from old tasks is no longer accessible while training on a new task, and the model is evaluated with task identifiers (task-ids) indicating which task each test example belongs to. The central difficulty is catastrophic forgetting: naively fine-tuning on each new task tends to degrade performance on the earlier ones. Continual Learning aims to improve a model's adaptability in dynamic environments and is especially valuable in scenarios where the data distribution changes over time.
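To make the protocol concrete, the sketch below shows a naive sequential fine-tuning baseline in PyTorch, assuming the task-incremental setup described above: a shared backbone, one head per task, no access to old-task data during training, and task-ids supplied at evaluation. The names (MultiHeadNet, train_sequentially, evaluate) are illustrative and do not come from any of the methods listed below.

```python
# Minimal sketch of the task-incremental protocol described above. This is an
# illustrative naive fine-tuning baseline, not any specific published method;
# all class and function names here are hypothetical.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared backbone with one output head per task; the task-id picks the head."""
    def __init__(self, in_dim, hidden_dim, classes_per_task, num_tasks):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, classes_per_task) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

def train_sequentially(model, task_loaders, epochs_per_task=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for task_id, loader in enumerate(task_loaders):  # tasks arrive one at a time
        for _ in range(epochs_per_task):
            for x, y in loader:  # only the current task's data is available
                opt.zero_grad()
                loss_fn(model(x, task_id), y).backward()
                opt.step()
        # A finished task's data is discarded and never revisited; without a
        # forgetting-prevention mechanism, this loop suffers catastrophic
        # forgetting on earlier tasks.

@torch.no_grad()
def evaluate(model, task_loaders):
    """Per-task accuracy, with the task-id supplied at test time."""
    accuracies = []
    for task_id, loader in enumerate(task_loaders):
        correct = total = 0
        for x, y in loader:
            correct += (model(x, task_id).argmax(dim=1) == y).sum().item()
            total += y.numel()
        accuracies.append(correct / total)
    return accuracies
```

In the harder class-incremental variant, no task-id is given at evaluation time and a single head must cover all classes seen so far.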
Benchmark datasets and leaderboards for this task are listed below; where the source listed a best-performing model for a leaderboard, it appears after the colon.

20Newsgroup (10 tasks)
5-dataset - 1 epoch
5-Datasets
ASC (19 tasks): CTR
CIFAR-100 AlexNet - 300 Epochs
CIFAR-100 ResNet-18 - 300 Epochs: IBM
Cifar100 (10 tasks): RMN (Resnet)
Cifar100 (20 tasks): Model Zoo-Continual
Cifar100 (20 tasks) - 1 epoch
Coarse-CIFAR100: Model Zoo-Continual
CUB-200-2011 (20 tasks) - 1 epoch
CUBS (Fine-grained 6 Tasks): CondConvContinual
DSC (10 tasks): CTR
F-CelebA (10 tasks): CAT (CNN backbone)
Flowers (Fine-grained 6 Tasks): CondConvContinual
ImageNet-50 (5 tasks): RMN
ImageNet (Fine-grained 6 Tasks): CondConvContinual
mini-Imagenet (20 tasks) - 1 epoch: TAG-RMSProp
miniImagenet
MiniImageNet ResNet-18 - 300 Epochs
MLT17
Permuted MNIST: RMN
Rotated MNIST: Model Zoo-Continual
Sketch (Fine-grained 6 Tasks)
Split CIFAR-10 (5 tasks): H$^{2}$
split CIFAR-100
Split MNIST (5 tasks): H$^{2}$
Stanford Cars (Fine-grained 6 Tasks): CPG
Tiny-ImageNet (10 tasks): ALTA-ViTB/16
TinyImageNet ResNet-18 - 300 Epochs
visual domain decathlon (10 tasks): NetTailor
Wikiart (Fine-grained 6 Tasks)