Simple Semantic-Aided Few-Shot Learning

Learning from a limited amount of data, namely Few-Shot Learning, stands out as a challenging computer vision task. Several works exploit semantics and design complicated semantic fusion mechanisms to compensate for rare representative features within restricted data. However, relying on naive semantics such as class names introduces biases due to their brevity, while acquiring extensive semantics from external knowledge takes considerable time and effort. This limitation severely constrains the potential of semantics in Few-Shot Learning. In this paper, we design an automatic method, called Semantic Evolution, to generate high-quality semantics. The incorporation of high-quality semantics alleviates the need for the complex network structures and learning algorithms used in previous works. Hence, we employ a simple two-layer network, termed the Semantic Alignment Network, to transform semantics and visual features into robust class prototypes with rich discriminative features for few-shot classification. Experimental results show that our framework outperforms all previous methods on six benchmarks, demonstrating that a simple network with high-quality semantics can beat intricate multi-modal modules on few-shot classification tasks. Code is available at https://github.com/zhangdoudou123/SemFew.
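
To make the idea of a "simple two-layer network" concrete, below is a minimal sketch of how a semantic alignment module and nearest-prototype classification might look. The abstract does not specify architectural details, so the fusion by concatenation, the feature dimensions, the ReLU activation, and the cosine-similarity classifier are all assumptions for illustration, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAlignmentNet(nn.Module):
    """Hypothetical two-layer fusion network: maps a visual feature and a
    semantic embedding to a class prototype in the visual feature space."""

    def __init__(self, visual_dim=640, semantic_dim=512, hidden_dim=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(visual_dim + semantic_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, visual_dim),
        )

    def forward(self, visual_feat, semantic_feat):
        # Concatenate the two modalities and project back to the visual space.
        return self.net(torch.cat([visual_feat, semantic_feat], dim=-1))


def classify(query_feats, prototypes):
    # Nearest-prototype classification via cosine similarity.
    q = F.normalize(query_feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return (q @ p.t()).argmax(dim=-1)


# Toy 5-way 1-shot episode with random tensors (dimensions are assumptions).
support_feats = torch.randn(5, 640)    # one visual feature per class
semantic_embs = torch.randn(5, 512)    # one semantic embedding per class
query_feats = torch.randn(75, 640)     # 15 queries per class

net = SemanticAlignmentNet()
prototypes = net(support_feats, semantic_embs)  # semantically refined prototypes
predictions = classify(query_feats, prototypes)
```

The point of the sketch is that once the semantics are rich enough, the fusion step can be a plain MLP: queries are assigned to whichever refined prototype they are closest to, with no attention or other multi-modal machinery involved.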