Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection

Recently proposed decoupled training methods have emerged as a dominant paradigm for long-tailed object detection. However, they require an extra fine-tuning stage, and the disjointed optimization of representation and classifier may lead to suboptimal results. Meanwhile, end-to-end training methods, such as equalization loss (EQL), still perform worse than decoupled training methods. In this paper, we reveal that the main issue in long-tailed object detection is the imbalanced gradients between positives and negatives, and we find that EQL does not solve it well. To address this problem, we introduce a new version of equalization loss, called equalization loss v2 (EQL v2), a novel gradient-guided reweighting mechanism that re-balances the training process for each category independently and equally. Extensive experiments are performed on the challenging LVIS benchmark. EQL v2 outperforms the original EQL by about 4 points of overall AP, with 14-18 points of improvement on the rare categories. More importantly, it also surpasses decoupled training methods. Without further tuning on the Open Images dataset, EQL v2 improves over EQL by 7.3 points AP, showing strong generalization ability. Code has been released at https://github.com/tztztztztz/eqlv2
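The gradient-guided reweighting idea described above can be sketched as follows. This is a minimal NumPy illustration, not the released implementation: it assumes per-category accumulators of positive and negative gradient magnitudes, a sigmoid-style mapping of the positive-to-negative gradient ratio, and hypothetical hyperparameter names (`gamma`, `mu`, `alpha`); the exact update rule in the paper and code may differ.

```python
import numpy as np

def ratio_to_weight(g, gamma=12.0, mu=0.8):
    # Squash the pos/neg gradient ratio g into (0, 1); large ratios map near 1.
    return 1.0 / (1.0 + np.exp(-gamma * (g - mu)))

class GradientGuidedReweighter:
    """Tracks accumulated positive/negative gradient magnitudes per category
    and derives loss weights that re-balance each category independently."""

    def __init__(self, num_classes, gamma=12.0, mu=0.8, alpha=4.0, eps=1e-6):
        self.pos_grad = np.zeros(num_classes)
        self.neg_grad = np.zeros(num_classes)
        self.gamma, self.mu, self.alpha, self.eps = gamma, mu, alpha, eps

    def accumulate(self, pos_grad_batch, neg_grad_batch):
        # Add the |gradient| magnitudes collected from the current batch.
        self.pos_grad += np.abs(pos_grad_batch)
        self.neg_grad += np.abs(neg_grad_batch)

    def weights(self):
        # g_j: accumulated positive-to-negative gradient ratio for category j.
        g = self.pos_grad / (self.neg_grad + self.eps)
        q = ratio_to_weight(g, self.gamma, self.mu)
        w_pos = 1.0 + self.alpha * (1.0 - q)  # up-weight positives of suppressed (rare) classes
        w_neg = q                             # down-weight their overwhelming negatives
        return w_pos, w_neg
```

In this sketch, a rare category that has received few positive gradients gets a small ratio `g`, so its negative samples are strongly down-weighted and its positive samples up-weighted, counteracting the gradient imbalance the abstract identifies.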