Expeditious Saliency-guided Mix-up through Random Gradient Thresholding

Mix-up training approaches have proven effective in improving the generalization ability of Deep Neural Networks. Over the years, the research community has extended mix-up methods in two directions, with extensive efforts to improve saliency-guided procedures but minimal focus on the arbitrary path, leaving the randomization domain unexplored. In this paper, inspired by the superior qualities of each direction over the other, we introduce a novel method that lies at the junction of the two routes. By combining the best elements of randomness and saliency utilization, our method balances speed, simplicity, and accuracy. We name our method R-Mix, following the concept of "Random Mix-up". We demonstrate its effectiveness in generalization, weakly supervised object localization, calibration, and robustness to adversarial attacks. Finally, to address the question of whether a better decision protocol exists, we train a Reinforcement Learning agent that decides the mix-up policies based on the classifier's performance, reducing dependency on human-designed objectives and hyperparameter tuning. Extensive experiments further show that the agent is capable of performing at the cutting-edge level, laying the foundation for a fully automatic mix-up. Our code is released at [https://github.com/minhlong94/Random-Mixup].