
Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning

Zedong Wang, Siyuan Li, Dan Xu
Abstract

Despite the promise of Multi-Task Learning in leveraging complementary knowledge across tasks, existing multi-task optimization (MTO) techniques remain fixated on resolving conflicts via optimizer-centric loss scaling and gradient manipulation strategies, yet fail to deliver consistent gains. In this paper, we argue that the shared representation space, where task interactions naturally occur, offers rich information and potential for operations complementary to existing optimizers, especially for facilitating inter-task complementarity, which is rarely explored in MTO. This intuition leads to Rep-MTL, which exploits representation-level task saliency to quantify interactions between task-specific optimization and shared representation learning. By steering these saliencies through entropy-based penalization and sample-wise cross-task alignment, Rep-MTL aims to mitigate negative transfer by maintaining the effective training of individual tasks instead of relying on pure conflict-solving, while explicitly promoting complementary information sharing. Experiments are conducted on four challenging MTL benchmarks covering both task-shift and domain-shift scenarios. The results show that Rep-MTL, even paired with the basic equal weighting policy, achieves competitive performance gains with favorable efficiency. Beyond standard performance metrics, Power Law exponent analysis demonstrates Rep-MTL's efficacy in balancing task-specific learning and cross-task sharing. The project page is available at HERE.
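To make the abstract's core ideas concrete, the following is a minimal PyTorch sketch, not the authors' implementation. It assumes a plausible reading of the method: representation-level task saliency is taken to be the gradient of each task loss with respect to the shared representation, the entropy penalty is applied to each sample's normalized saliency profile over feature dimensions, and sample-wise cross-task alignment is a cosine-similarity term between task saliencies. The function name, the saliency definition, and the `lambda_*` weights are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def rep_mtl_regularizer(shared_repr, task_losses,
                        lambda_entropy=0.1, lambda_align=0.1):
    """Hypothetical Rep-MTL-style regularizer (sketch, not the paper's code).

    shared_repr: (B, D) shared representation with requires_grad=True.
    task_losses: list of scalar per-task losses computed from shared_repr.
    """
    saliencies = []
    for loss in task_losses:
        # Representation-level task saliency (assumed definition): gradient of
        # the task loss w.r.t. the shared representation, kept in the graph
        # (create_graph=True) so the penalty itself is differentiable.
        g = torch.autograd.grad(loss, shared_repr,
                                create_graph=True, retain_graph=True)[0]
        saliencies.append(g)

    # Entropy-based penalization: treat each sample's |saliency| profile as a
    # distribution over feature dimensions and reward high entropy, so no task
    # concentrates its influence on a few shared dimensions.
    entropy_pen = 0.0
    for g in saliencies:
        p = F.softmax(g.abs(), dim=-1)                   # (B, D)
        entropy = -(p * (p + 1e-12).log()).sum(dim=-1)   # (B,)
        entropy_pen = entropy_pen - entropy.mean()       # minimize -entropy

    # Sample-wise cross-task alignment: encourage saliencies of different
    # tasks to agree per sample, promoting complementary information sharing
    # rather than conflict in the shared space.
    align_pen, n_pairs = 0.0, 0
    for i in range(len(saliencies)):
        for j in range(i + 1, len(saliencies)):
            cos = F.cosine_similarity(saliencies[i], saliencies[j], dim=-1)
            align_pen = align_pen - cos.mean()           # minimize -similarity
            n_pairs += 1
    if n_pairs > 0:
        align_pen = align_pen / n_pairs

    return lambda_entropy * entropy_pen + lambda_align * align_pen
```

In this reading, the regularizer would simply be added to the (e.g., equally weighted) sum of task losses before the backward pass, which matches the abstract's claim that Rep-MTL operates on the shared representation as a complement to, rather than a replacement for, existing optimizers.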