Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities

Decoding visual stimuli from neural responses recorded by functional Magnetic Resonance Imaging (fMRI) presents an intriguing intersection between cognitive neuroscience and machine learning, promising advancements in understanding human visual perception and building non-invasive brain-machine interfaces. However, the task is challenging due to the noisy nature of fMRI signals and the intricate pattern of brain visual representations. To mitigate these challenges, we introduce a two-phase fMRI representation learning framework. The first phase pre-trains an fMRI feature learner with a proposed Double-contrastive Mask Auto-encoder to learn denoised representations. The second phase tunes the feature learner to attend to the neural activation patterns most informative for visual reconstruction, with guidance from an image auto-encoder. The optimized fMRI feature learner then conditions a latent diffusion model to reconstruct image stimuli from brain activities. Experimental results demonstrate our model's superiority in generating high-resolution and semantically accurate images, substantially exceeding previous state-of-the-art methods by 39.34% in 50-way-top-1 semantic classification accuracy. Our research invites further exploration of the decoding task's potential and contributes to the development of non-invasive brain-machine interfaces.
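The first-phase idea of combining masked auto-encoding with a contrastive objective can be sketched minimally as below. This is an illustrative toy, not the paper's implementation: the linear encoder, the 75% masking ratio, and the InfoNCE loss over two masked views of the same scan are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_signal(x, mask_ratio=0.75):
    """Randomly zero out a fraction of voxels (a masked-autoencoding view)."""
    mask = rng.random(x.shape) < mask_ratio
    return np.where(mask, 0.0, x)

def encode(x, W):
    """Hypothetical linear encoder standing in for the fMRI feature learner."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit-norm embeddings

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss: two masked views of the same scan are positive pairs."""
    logits = z1 @ z2.T / tau                       # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # pull diagonal pairs together

# Toy batch: 8 "fMRI" vectors of 64 voxels, projected to 16-d embeddings.
x = rng.standard_normal((8, 64))
W = rng.standard_normal((64, 16))
z1, z2 = encode(mask_signal(x), W), encode(mask_signal(x), W)
loss = info_nce(z1, z2)
```

In a real pipeline the encoder would also reconstruct the masked voxels (the auto-encoding term), and this loss would be one of the two contrastive components suggested by the name "Double-contrastive".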