BiaSwap: Removing dataset bias with bias-tailored swapping augmentation

Deep neural networks often make decisions based on spurious correlations inherent in the dataset, failing to generalize under an unbiased data distribution. Although previous approaches pre-define the type of dataset bias to prevent the network from learning it, recognizing the bias type in real datasets is often prohibitive. This paper proposes a novel bias-tailored augmentation-based approach, BiaSwap, for learning a debiased representation without requiring supervision on the bias type. Assuming that the bias corresponds to easy-to-learn attributes, we sort the training images by how much a biased classifier can exploit them as a shortcut, and divide them into bias-guiding and bias-contrary samples in an unsupervised manner. Afterwards, we integrate the style-transferring module of an image translation model with the class activation maps of the biased classifier, which enables it to primarily transfer the bias attributes learned by the classifier. Therefore, given a pair of bias-guiding and bias-contrary images, BiaSwap generates a bias-swapped image that contains the bias attributes of the bias-contrary image while preserving the bias-irrelevant attributes of the bias-guiding image. Given such augmented images, BiaSwap demonstrates superior debiasing performance against existing baselines on both synthetic and real-world datasets. Even without careful supervision on the bias, BiaSwap achieves remarkable performance on both unbiased and bias-guiding samples, implying the improved generalization capability of the model.
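
To make the unsupervised splitting step described above concrete, the following is a minimal sketch, not the authors' implementation: it ranks training samples by the per-sample loss of a hypothetical biased classifier, treating low-loss (easy, shortcut-friendly) samples as bias-guiding and high-loss samples as bias-contrary. All function names, the `contrary_ratio` parameter, and the loss-based ranking criterion are illustrative assumptions rather than details confirmed by the abstract.

```python
# Hypothetical sketch of the bias-guiding / bias-contrary split.
# Assumption: "how much a biased classifier exploits a sample as a shortcut"
# is approximated by its per-sample cross-entropy loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def split_by_bias(biased_model: nn.Module,
                  images: torch.Tensor,        # (N, C, H, W)
                  labels: torch.Tensor,        # (N,)
                  contrary_ratio: float = 0.2):
    """Return indices of bias-guiding (easy) and bias-contrary (hard) samples."""
    biased_model.eval()
    with torch.no_grad():
        logits = biased_model(images)
        # Low loss -> the shortcut works -> bias-guiding.
        # High loss -> the shortcut fails -> bias-contrary.
        per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

    order = torch.argsort(per_sample_loss)      # ascending: easiest first
    n_contrary = int(len(order) * contrary_ratio)
    bias_guiding = order[: len(order) - n_contrary]
    bias_contrary = order[len(order) - n_contrary:]
    return bias_guiding, bias_contrary


# Usage sketch: the resulting index sets would then be paired so that a
# style-transfer module (guided by the biased classifier's class activation
# maps) swaps bias attributes from a bias-contrary image onto a bias-guiding
# image; that augmentation step is omitted here.
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.randn(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))
    guiding_idx, contrary_idx = split_by_bias(model, x, y)
    print(len(guiding_idx), "bias-guiding /", len(contrary_idx), "bias-contrary")
```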