
Training Debiased Subnetworks with Contrastive Weight Pruning

Park, Geon Yeong; Lee, Sangmin; Lee, Sang Wan; Ye, Jong Chul
Abstract

Neural networks are often biased to spuriously correlated features that provide misleading statistical evidence and do not generalize. This raises an interesting question: "Does an optimal unbiased functional subnetwork exist in a severely biased network? If so, how can such a subnetwork be extracted?" While empirical evidence has accumulated for the existence of such unbiased subnetworks, these observations are mainly based on the guidance of ground-truth unbiased samples. Thus, it remains unexplored how to discover the optimal subnetworks with biased training datasets in practice. To address this, we first present a theoretical insight that highlights potential limitations of existing algorithms in exploring unbiased subnetworks in the presence of strong spurious correlations. We then further elucidate the importance of bias-conflicting samples for structure learning. Motivated by these observations, we propose a Debiased Contrastive Weight Pruning (DCWP) algorithm, which probes unbiased subnetworks without expensive group annotations. Experimental results demonstrate that our approach significantly outperforms state-of-the-art debiasing methods despite a considerable reduction in the number of parameters.
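To make the notion of "extracting a subnetwork by weight pruning" concrete, the sketch below shows the generic step shared by score-based pruning methods: rank each weight by an importance score and zero out all but the top fraction. This is only an illustration under assumed inputs — DCWP itself learns the pruning mask with a contrastive objective driven by bias-conflicting samples, and the function name, score source, and `keep_ratio` parameter here are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch: subnetwork extraction by score-based weight pruning.
# DCWP learns its mask via a contrastive objective on bias-conflicting
# samples; here the per-weight importance scores are simply assumed given.

def extract_subnetwork(weights, scores, keep_ratio):
    """Zero out all but the top `keep_ratio` fraction of weights,
    ranked by an (assumed) importance score per weight."""
    assert len(weights) == len(scores) and 0.0 < keep_ratio <= 1.0
    k = max(1, int(len(weights) * keep_ratio))
    # Indices of the k highest-scoring weights form the subnetwork mask.
    kept = set(sorted(range(len(weights)), key=lambda i: -scores[i])[:k])
    return [w if i in kept else 0.0 for i, w in enumerate(weights)]

# Toy example: keep 50% of four weights according to their scores.
pruned = extract_subnetwork([0.9, -0.2, 0.5, 0.1], [0.1, 0.8, 0.7, 0.2], 0.5)
print(pruned)  # -> [0.0, -0.2, 0.5, 0.0]
```

The key difference in DCWP is *how* the scores (equivalently, the mask) are obtained: rather than magnitude or gradient heuristics, the mask is optimized so that the surviving subnetwork discriminates classes without relying on spuriously correlated features.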