Continual Learning
Continual Learning, also known as Incremental Learning or Lifelong Learning, trains a model on a sequence of tasks without forgetting the knowledge acquired from earlier ones. Data from old tasks is no longer available while training on a new task, and evaluation is performed with the task identity (task-id) provided. Continual learning aims to improve a model's ability to adapt in dynamic environments, which makes it especially valuable in settings where the data distribution keeps changing.
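The protocol above (train on tasks sequentially, then evaluate each task at the end) is commonly summarized by two metrics: average final accuracy and forgetting (best past accuracy on a task minus its final accuracy). A minimal sketch, with an illustrative hand-filled accuracy matrix in place of a real learner:

```python
# Sketch of task-incremental evaluation: acc[i][j] is the accuracy on
# task j measured after training on task i (only i >= j is meaningful).
# Metric definitions follow common usage; the numbers below are made up.

def evaluate_protocol(acc):
    """Return (average final accuracy, average forgetting)."""
    T = len(acc)
    # Average accuracy over all tasks after the final task is learned.
    avg_acc = sum(acc[T - 1][j] for j in range(T)) / T
    # Forgetting: best accuracy ever reached on task j minus its final
    # accuracy, averaged over all tasks except the last.
    forgetting = sum(
        max(acc[i][j] for i in range(j, T - 1)) - acc[T - 1][j]
        for j in range(T - 1)
    ) / (T - 1)
    return avg_acc, forgetting

# Example: 3 sequential tasks; each row is "after training task i".
acc = [
    [0.95, 0.0,  0.0],
    [0.80, 0.93, 0.0],
    [0.70, 0.85, 0.94],
]
avg, f = evaluate_protocol(acc)
print(round(avg, 3), round(f, 3))
```

A method with less catastrophic forgetting keeps the off-diagonal entries of `acc` close to the diagonal ones, driving the forgetting term toward zero.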
Benchmark | Best Model
20Newsgroup (10 tasks) |
5-dataset - 1 epoch |
5-Datasets |
ASC (19 tasks) | CTR
CIFAR-100 AlexNet - 300 Epoch |
CIFAR-100 ResNet-18 - 300 Epochs | IBM
Cifar100 (10 tasks) | RMN (Resnet)
Cifar100 (20 tasks) | Model Zoo-Continual
Cifar100 (20 tasks) - 1 epoch |
Coarse-CIFAR100 | Model Zoo-Continual
CUB-200-2011 (20 tasks) - 1 epoch |
CUBS (Fine-grained 6 Tasks) | CondConvContinual
DSC (10 tasks) | CTR
F-CelebA (10 tasks) | CAT (CNN backbone)
Flowers (Fine-grained 6 Tasks) | CondConvContinual
ImageNet-50 (5 tasks) | RMN
ImageNet (Fine-grained 6 Tasks) | CondConvContinual
mini-Imagenet (20 tasks) - 1 epoch | TAG-RMSProp
miniImagenet |
MiniImageNet ResNet-18 - 300 Epochs | MLT17
Permuted MNIST | RMN
Rotated MNIST | Model Zoo-Continual
Sketch (Fine-grained 6 Tasks) |
Split CIFAR-10 (5 tasks) | H$^{2}$
split CIFAR-100 |
Split MNIST (5 tasks) | H$^{2}$
Stanford Cars (Fine-grained 6 Tasks) | CPG
Tiny-ImageNet (10 tasks) | ALTA-ViTB/16
TinyImageNet ResNet-18 - 300 Epochs |
visual domain decathlon (10 tasks) | NetTailor
Wikiart (Fine-grained 6 Tasks) |