Laplacian Regularized Few-Shot Learning

We propose a transductive Laplacian-regularized inference for few-shot tasks. Given any feature embedding learned from the base classes, we minimize a quadratic binary-assignment function containing two terms: (1) a unary term assigning query samples to the nearest class prototype, and (2) a pairwise Laplacian term encouraging nearby query samples to have consistent label assignments. Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set. We derive a computationally efficient bound optimizer of a relaxation of our function, which computes independent (parallel) updates for each query sample, while guaranteeing convergence. Following simple cross-entropy training on the base classes, and without complex meta-learning strategies, we conducted comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and data sets. Furthermore, our transductive inference is very fast, with computational times close to those of inductive inference, and can be used for large-scale few-shot tasks.
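To make the two-term objective and the bound-optimization updates concrete, the sketch below shows one plausible NumPy implementation of the transductive inference step. It assumes squared Euclidean distances to the class prototypes as the unary term and a precomputed affinity matrix W over the query samples for the Laplacian term; the function name, signature, and defaults are illustrative and not taken from the paper's released code.

```python
import numpy as np

def laplacian_shot(query, prototypes, W, lam=1.0, n_iter=20):
    """Sketch of iterative bound-optimization updates for soft assignments.

    Roughly minimizes a relaxation of
        E(Y) = sum_{q,c} y_qc * d(x_q, m_c)
               + (lam/2) * sum_{q,p} w_qp * ||y_q - y_p||^2,
    i.e., a unary prototype-matching term plus a pairwise Laplacian term.

    query:      (n, d) query features
    prototypes: (k, d) class prototypes computed from the support set
    W:          (n, n) affinity matrix over query samples
    lam:        weight of the pairwise Laplacian term (assumed hyperparameter)
    """
    # Unary term: squared distance of each query sample to each prototype.
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (n, k)

    # Initialize soft assignments from the unary term alone.
    y = np.exp(-d)
    y /= y.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # Bound-optimizer step: each row of y is updated independently,
        # given the previous iterate's assignments of its neighbors.
        a = d - lam * (W @ y)  # unary cost minus Laplacian agreement reward
        y = np.exp(-(a - a.min(axis=1, keepdims=True)))  # stabilized softmax
        y /= y.sum(axis=1, keepdims=True)

    return y.argmax(axis=1)  # hard labels for the query set
```

Because every row of y is refreshed from the previous iterate only, the updates for all query samples can run in parallel, which is consistent with the abstract's claim that the transductive step adds little cost over inductive inference.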