MaxViT-UNet: Multi-Axis Attention for Medical Image Segmentation

Since their emergence, Convolutional Neural Networks (CNNs) have made significant strides in medical image analysis. However, the local nature of the convolution operator may limit their ability to capture global and long-range interactions. Recently, Transformers have gained popularity in the computer vision community, including in medical image segmentation, due to their ability to process global features effectively. Yet the scalability issues of the self-attention mechanism and the lack of CNN-like inductive biases may have limited their adoption. Hybrid vision transformers (CNN-Transformer), which exploit the advantages of both convolution and self-attention, have therefore gained importance. In this work, we present MaxViT-UNet, a new encoder-decoder based, UNet-type hybrid vision transformer (CNN-Transformer) for medical image segmentation. The proposed Hybrid Decoder harnesses the power of both convolution and self-attention at each decoding stage with nominal memory and computational overhead. The multi-axis self-attention within each decoder stage significantly enhances the discrimination between object and background regions, thereby improving segmentation performance. The Hybrid Decoder also introduces a new block: the fusion process begins by integrating the upsampled lower-level decoder features, obtained through transpose convolution, with the skip-connection features derived from the hybrid encoder; the fused features are then refined through a multi-axis attention mechanism. This decoder block is repeated multiple times to progressively segment the nuclei regions. Experimental results on the MoNuSeg18 and MoNuSAC20 datasets demonstrate the effectiveness of the proposed technique.
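The decoder stage described above (upsample, fuse with the skip connection, refine with attention) can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's implementation: nearest-neighbour upsampling stands in for the transpose convolution, learned Q/K/V projections are replaced by identity maps, and only the local "block" axis of multi-axis attention is shown (the "grid" axis would use a dilated partition instead); the function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def block_attention(x, window=4):
    """Single-head self-attention within non-overlapping windows.

    x: (H, W, C) feature map; H and W must be divisible by `window`.
    This models the local 'block' axis of multi-axis attention only.
    """
    H, W, C = x.shape
    # Partition the map into (num_windows, window*window, C) token groups.
    xw = x.reshape(H // window, window, W // window, window, C)
    xw = xw.transpose(0, 2, 1, 3, 4).reshape(-1, window * window, C)
    # Identity projections stand in for the learned Q, K, V weights.
    q = k = v = xw
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ v
    # Reverse the window partition back to (H, W, C).
    out = out.reshape(H // window, W // window, window, window, C)
    return out.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def hybrid_decoder_block(decoder_feat, skip_feat, window=4):
    """Sketch of one decoder stage: 2x upsample the low-resolution decoder
    features (nearest-neighbour in place of transpose convolution), fuse
    them with the encoder skip connection along the channel axis, then
    refine the fused map with windowed self-attention."""
    up = decoder_feat.repeat(2, axis=0).repeat(2, axis=1)  # 2x spatial upsample
    fused = np.concatenate([up, skip_feat], axis=-1)       # channel-wise fusion
    return block_attention(fused, window)
```

Stacking several such blocks, each doubling the spatial resolution, mirrors the progressive refinement of the nuclei masks described in the abstract.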