
Revisiting Fine-tuning for Few-shot Learning

Nakamura, Akihiro; Harada, Tatsuya
Abstract

Few-shot learning is the process of learning novel classes using only a few examples, and it remains a challenging task in machine learning. Many sophisticated few-shot learning algorithms have been proposed based on the notion that networks can easily overfit to novel examples if they are simply fine-tuned using only a few examples. In this study, we show that on the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as the state-of-the-art algorithm in the 5-shot task. We then evaluate our method on more practical tasks, namely the high-resolution single-domain and cross-domain tasks. In both tasks, we show that our method achieves higher accuracy than common few-shot learning algorithms. We further analyze the experimental results and show that: 1) the retraining process can be stabilized by employing a low learning rate, 2) using adaptive gradient optimizers during fine-tuning can increase test accuracy, and 3) test accuracy can be improved by updating the entire network when a large domain shift exists between base and novel classes.
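The sketch below is a minimal illustration (not the authors' released code) of the fine-tuning recipe the abstract describes: a backbone pre-trained on base classes is adapted to a handful of novel-class examples using a low learning rate and an adaptive optimizer (Adam), with the option of updating only the classifier head or the entire network. The backbone architecture, feature dimension, step count, and learning rate here are illustrative assumptions.

```python
# Minimal fine-tuning sketch for few-shot adaptation (illustrative only).
import torch
import torch.nn as nn


class SmallBackbone(nn.Module):
    """Stand-in for a feature extractor pre-trained on the base classes."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)


def fine_tune(backbone, support_x, support_y, n_way,
              update_backbone=True, lr=1e-4, steps=100):
    """Fine-tune on the few-shot support set.

    update_backbone=True  -> update the entire network (helpful under a
                             large domain shift, per the paper's finding 3).
    update_backbone=False -> train only the new classifier head.
    """
    classifier = nn.Linear(64, n_way)
    params = list(classifier.parameters())
    if update_backbone:
        params += list(backbone.parameters())
    # A low learning rate stabilizes retraining (finding 1); Adam is the
    # adaptive gradient optimizer (finding 2).
    optimizer = torch.optim.Adam(params, lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = classifier(backbone(support_x))
        loss = criterion(logits, support_y)
        loss.backward()
        optimizer.step()
    return classifier


if __name__ == "__main__":
    # Toy 5-way 1-shot support set; random tensors stand in for real images.
    backbone = SmallBackbone()
    support_x = torch.randn(5, 3, 84, 84)  # 84x84 mimics mini-ImageNet resolution
    support_y = torch.arange(5)
    fine_tune(backbone, support_x, support_y, n_way=5)
```

In practice the backbone weights would be loaded from a model trained on the base classes, and the choice between head-only and full-network updates would follow the domain gap between base and novel classes, as the abstract's third finding suggests.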
