With 30 GB of Data and Nearly 200,000 Pairs of Training Samples, the Fudan University Team Releases UniFMIR: Using AI to Break the Limits of Microscopic Imaging

Fluorescence microscopy is an indispensable research tool in the life sciences. It works by illuminating a specimen with ultraviolet excitation light so that it fluoresces, and then observing the shape and location of the fluorescing structures under the microscope. It can be used to study the absorption and transport of substances within cells, the distribution and localization of chemical substances, and more.
However, high-intensity excitation light can damage cells directly or indirectly through photochemical processes, so long-term live-cell experiments are best performed with as little light exposure as possible. Lower exposure, in turn, weakens the fluorescence signal, lowers the image signal-to-noise ratio (SNR), and makes quantitative image analysis more difficult.
Therefore, fluorescence microscopy-based image restoration (FMIR) has attracted extensive attention in the field of life sciences. It aims to obtain high signal-to-noise ratio images from low signal-to-noise ratio images, which helps to reveal important nanoscale imaging information.
At present, benefiting from the rapid development of artificial intelligence, many deep learning-based FMIR methods have broken through the physical limits of fluorescence microscopy and made significant progress. However, mainstream models still face challenges such as poor generalization and strong dependence on training data.
In this regard, a research team from the School of Computer Science and Technology of Fudan University published a paper titled "Pretraining a foundation model for generalizable fluorescence microscopy-based image restoration" in Nature Methods. The proposed cross-task, multi-dimensional image enhancement foundation model, UniFMIR, not only breaks through the existing limits of fluorescence microscopy imaging but also provides a general solution for fluorescence microscopy image enhancement.
Research highlights:
- The UniFMIR model significantly improves performance on five major tasks: image super-resolution, isotropic reconstruction, 3D denoising, surface projection, and volume reconstruction.
- It breaks through the limits of existing fluorescence microscopy imaging.
- It can be adapted to different tasks, imaging modalities, and biological structures through simple fine-tuning.

Paper address:
https://www.nature.com/articles/s41592-024-02244-3
Dataset: 30 GB, 196,418 pairs of training samples
The researchers collected a large training dataset (about 30 GB) from 14 public datasets, comprising 196,418 pairs of training samples. It covers a wide range of imaging modalities, biological samples, and image restoration tasks. The researchers also grouped the datasets by fluorescence microscopy-based image restoration task and imaging method.

Since these datasets vary greatly in format, domain, and value range, the researchers preprocessed the images for subsequent training and cross-dataset validation. Specifically, the input and ground-truth (GT) images of the existing datasets, stored in different formats (including TIF, npz, PNG, and nii.gz), were written into a single .npz file. In addition, the images were normalized to unify the value distributions of the different datasets, following the data processing method of CARE.
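As a rough illustration (not the authors' code), the sketch below shows how heterogeneous image formats could be read, percentile-normalized in a CARE-like fashion, and bundled into a single .npz file. The file names, percentile values, and array keys are assumptions.

```python
# Illustrative preprocessing sketch: read images stored in different formats,
# normalize intensities by percentiles (CARE-style), and save them to one .npz.
import numpy as np
import tifffile            # reads .tif stacks
import imageio.v3 as iio   # reads .png images
import nibabel as nib      # reads .nii.gz volumes

def percentile_normalize(x, p_low=3.0, p_high=99.9, eps=1e-8):
    """Rescale intensities so that p_low maps to 0 and p_high maps to 1."""
    lo, hi = np.percentile(x, [p_low, p_high])
    return (x.astype(np.float32) - lo) / (hi - lo + eps)

def load_any(path):
    """Read an image or volume regardless of its on-disk format."""
    if path.endswith((".tif", ".tiff")):
        return tifffile.imread(path)
    if path.endswith(".png"):
        return iio.imread(path)
    if path.endswith(".nii.gz"):
        return np.asarray(nib.load(path).dataobj)
    if path.endswith(".npz"):
        return np.load(path)["arr_0"]
    raise ValueError(f"Unsupported format: {path}")

# Hypothetical input/GT file pairs; replace with the real dataset listing.
pairs = [("input_000.tif", "gt_000.tif"), ("input_001.png", "gt_001.png")]
arrays = {}
for i, (x_path, y_path) in enumerate(pairs):
    arrays[f"input_{i}"] = percentile_normalize(load_any(x_path))
    arrays[f"gt_{i}"] = percentile_normalize(load_any(y_path))
np.savez("training_pairs.npz", **arrays)
```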
Model architecture: multi-head and multi-tail structure
The UniFMIR model constructed by the researchers uses a multi-head, multi-tail structure, as shown in the figure below:

Specifically, UniFMIR consists of a multi-head module, a feature enhancement module, and a multi-tail module.
The multi-head and multi-tail modules use separate branches to extract shallow, task-specific features and to produce accurate results for the different image restoration problems.
The feature enhancement module uses an advanced Swin Transformer structure to strengthen the feature representation and reconstruct universal, effective features, thereby achieving high-quality fluorescence microscopy-based image restoration. Each image restoration task has its own head and tail branches, but all tasks share the same feature enhancement module.
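To make this layout concrete, here is a minimal PyTorch sketch of a multi-head / shared-backbone / multi-tail model. It is not the authors' implementation: the shared Swin Transformer feature enhancement module is replaced by a small convolutional stack so the example stays self-contained, and the task names and channel sizes are illustrative assumptions.

```python
# Minimal sketch of a multi-head / shared-backbone / multi-tail layout.
import torch
import torch.nn as nn

TASKS = ["sr", "isotropic", "denoise3d", "projection", "volumetric"]

class UniFMIRSketch(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        # One shallow feature-extraction head per task.
        self.heads = nn.ModuleDict({
            t: nn.Conv2d(1, feat_ch, 3, padding=1) for t in TASKS
        })
        # Shared feature enhancement module (stand-in for the Swin Transformer).
        self.backbone = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.GELU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.GELU(),
        )
        # One reconstruction tail per task.
        self.tails = nn.ModuleDict({
            t: nn.Conv2d(feat_ch, 1, 3, padding=1) for t in TASKS
        })

    def forward(self, x, task):
        feat = self.heads[task](x)          # task-specific shallow features
        feat = feat + self.backbone(feat)   # shared enhancement (residual)
        return self.tails[task](feat)       # task-specific reconstruction

model = UniFMIRSketch()
out = model(torch.randn(1, 1, 64, 64), task="denoise3d")
print(out.shape)  # torch.Size([1, 1, 64, 64])
```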
The UniFMIR model is implemented in PyTorch and optimized with adaptive moment estimation (Adam). The initial learning rate was set to 5 × 10⁻⁵ and halved every 200 epochs. All experiments were performed on a machine equipped with an Nvidia GeForce RTX 3090 GPU (24 GB of memory).
In the pre-training phase, the researchers fed all of the training data into the model, using the data for each task to optimize the corresponding head and tail branches, while the shared feature enhancement branch in the middle was optimized with all of the training data.
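A minimal training-loop sketch of this setup is shown below, reusing the UniFMIRSketch class from the sketch above. It assumes a data loader that yields (input, target, task) triples, and uses Adam with an initial learning rate of 5 × 10⁻⁵ and a StepLR schedule that halves the rate every 200 epochs; the total epoch count, synthetic loader, and L1 loss are only illustrative.

```python
# Sketch of the stated optimization setup: Adam at lr=5e-5, halved every 200 epochs.
import torch
import torch.nn.functional as F

def make_loader(n_batches=4):
    # Synthetic stand-in for the real loader: yields (input, target, task) triples.
    for _ in range(n_batches):
        x = torch.randn(2, 1, 64, 64)
        yield x, x.clone(), "sr"   # placeholder pairs; real data differs per task

model = UniFMIRSketch()            # from the architecture sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

num_epochs = 400                   # total epoch count is an assumption
for epoch in range(num_epochs):
    for x, y, task in make_loader():
        optimizer.zero_grad()
        loss = F.l1_loss(model(x, task), y)  # loss choice is illustrative
        loss.backward()                      # updates the task branch + shared backbone
        optimizer.step()
    scheduler.step()               # halves the learning rate every 200 epochs
```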
During the fine-tuning stage, the researchers set the batch size/patch size to 4/128, 32/64, 32/64, 4/64, and 1/16 for the image super-resolution, isotropic reconstruction, 3D denoising, surface projection, and volume reconstruction tasks, respectively, to obtain better learning results.
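For reference, the per-task fine-tuning settings quoted above can be written as a simple configuration dictionary (the key names are illustrative, not from the paper's code):

```python
# Per-task fine-tuning settings (batch size / patch size) as stated in the text.
FINETUNE_CONFIG = {
    "sr":         {"batch_size": 4,  "patch_size": 128},
    "isotropic":  {"batch_size": 32, "patch_size": 64},
    "denoise3d":  {"batch_size": 32, "patch_size": 64},
    "projection": {"batch_size": 4,  "patch_size": 64},
    "volumetric": {"batch_size": 1,  "patch_size": 16},
}
```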
By pre-training on this large-scale collection of datasets and then fine-tuning the model parameters with data from the different image enhancement tasks, UniFMIR shows better enhancement performance and generalization than task-specific models.
Research results: Greatly improved performance on five major tasks
The results show that UniFMIR, the AI foundation model for fluorescence microscopy image enhancement, greatly improves performance on five major tasks: image super-resolution, isotropic reconstruction, 3D denoising, surface projection, and volume reconstruction.
- Super Resolution (SR)
The researchers first validated the potential of UniFMIR to tackle SR problems, using images of increasing structural complexity, including clathrin-coated pits (CCPs), the endoplasmic reticulum (ER), microtubules (MTs), and filamentous actin (F-actin), obtained with a multimodal structured illumination microscopy (SIM) system.
UniFMIR successfully inferred SR SIM images from diffraction-limited wide-field (WF) images with high fluorescence levels and revealed clear structural details.
Compared with two deep learning-based fluorescence microscopy SR models (XTC and DFCAN) and a single-image super-resolution model (ENLCN), UniFMIR correctly reconstructs most microtubules without losing or merging them, even when they are densely distributed and close to each other. For other subcellular structures, UniFMIR also recovers hollow, ring-shaped CCPs and interlaced F-actin filaments with high fidelity.

Quantitative evaluation of SR accuracy (n = 100)
The researchers also quantified the achieved SR accuracy using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), normalized root mean square error (NRMSE), resolution estimation based on decorrelation analysis, Fourier ring correlation (FRC), SQUIRREL analysis, and segmentation metrics, as shown in the figure above.
When evaluating the fluorescence intensity and structure of SR SIM images, higher PSNR/SSIM values and lower NRMSE values indicate better SR, and UniFMIR clearly excels in these metrics.
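As an illustration of how such full-reference metrics can be computed, the sketch below evaluates PSNR, SSIM, and NRMSE with scikit-image on placeholder arrays; it is not the evaluation code used in the paper.

```python
# Compute PSNR / SSIM / NRMSE for a restored image against its ground truth.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

gt = np.clip(np.random.rand(256, 256), 0, 1)                      # placeholder ground truth
restored = np.clip(gt + 0.01 * np.random.randn(256, 256), 0, 1)   # placeholder restoration

psnr  = peak_signal_noise_ratio(gt, restored, data_range=1.0)
ssim  = structural_similarity(gt, restored, data_range=1.0)
nrmse = normalized_root_mse(gt, restored)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}, NRMSE={nrmse:.4f}")
```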
- Isotropic reconstruction
Isotropy means that the physical, chemical, and other properties of an object do not change with direction; for example, gases, liquids (except liquid crystals), and amorphous solids are isotropic. In contrast, anisotropy means that some or all of a substance's physical and chemical properties change with direction, so that it behaves differently in different directions.
The researchers applied UniFMIR to anisotropic raw data of mouse liver volumetric imaging to predict isotropic axial slices and compared them with two deep learning-based isotropic reconstruction models (CARE and 3D U-Net models).
The results show that UniFMIR produces more accurate isotropic reconstructions, with pixel intensity distributions closer to those of the ground truth.
- 3D Denoising
The researchers further benchmarked the performance of UniFMIR in the task of live cell image denoising on the Planaria and Tribolium datasets.

Compared with two U-Net-based denoising models, CARE and GVTNets, the UniFMIR model significantly suppressed the noise in low-SNR fluorescence microscopy images acquired at different laser powers/exposure times, and clearly depicted the nuclei-labeled volumes of planarian flatworms (S. mediterranea) and red flour beetles (Tribolium), which is helpful for observing embryonic development.
- Surface projection
To better analyze and study the behavior of developing epithelial cells in Drosophila melanogaster, surface projection is used to project a 3D volume onto a 2D surface image. Current deep learning models (CARE and GVTNets) divide this image restoration problem into two sub-problems, 3D-to-2D surface projection and 2D image denoising, and solve them with two task-specific networks that follow the same U-Net-style encoder-decoder framework.

Quantitative comparison of surface projection accuracy (n = 26)
The researchers further investigated UniFMIR on this more complex, compound fluorescence microscopy image restoration task. Compared with CARE and GVTNets, UniFMIR achieves higher projection reconstruction accuracy in terms of the PSNR/SSIM/NRMSE metrics.
- Volume reconstruction
In the experiments, the researchers also verified UniFMIR's ability to perform volume reconstruction on data provided by VCD-Net. The 3D volume reconstructed from each view can identify the motion trajectory of the imaged object, as shown in the figure below, which helps to reveal the basic mechanisms of many complex living-cell dynamics involving various subcellular structures.

In summary, a fluorescence microscope equipped with UniFMIR may become a "magic weapon" in life science laboratories. Scientists can observe the tiny structures and complex processes inside living cells more clearly, accelerating scientific discoveries and medical innovations in the fields of life sciences, medical research, and disease diagnosis around the world.
At the same time, in fields such as semiconductor manufacturing and new-materials research and development, this achievement can be used to improve the observation and analysis of material microstructures, thereby optimizing manufacturing processes and improving product quality. In the future, scientists in life science laboratories can also continue to strengthen UniFMIR's image reconstruction capabilities by further expanding the amount and richness of the training data.
AI drives a new paradigm in image processing in life sciences
Today, advances in microscopy are creating a large amount of imaging data, and how to efficiently process images is an important part of biomedical research. As artificial intelligence continues to make disruptive breakthroughs in life science research, a new paradigm of AI-driven image processing is here.
In 2020, a bioengineering professor at Rice University in Houston, Texas, in collaboration with the MD Anderson Cancer Center, developed an AI-based computational microscope called DeepDOF, whose depth of field (DOF) is more than five times that of traditional microscopes at the same resolution, greatly reducing the time required for image processing.

In 2021, a research team from Weill Cornell Medical College developed a computational technique that applies localization image reconstruction algorithms to the peak positions in atomic force microscopy (AFM) data, improving the resolution beyond the limit set by the tip radius. The method can be used to analyze single amino acid residues on protein surfaces under natural, dynamic conditions, greatly improving the resolution of AFM. It reveals atomic-level details of proteins and other biological structures under normal physiological conditions, opening a new window onto cell biology, virology, and other microscopic processes.
In April 2024, a paper from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital introduced a new AI tool that can capture uncertainty in medical images. The system, called Tyche (after the Greek goddess of chance), provides multiple plausible segmentations, each highlighting a slightly different region of a medical image. Users can specify how many options Tyche should output and select the one that best suits their purpose.
In summary, AI can be used for enhancement, segmentation, registration, and reconstruction of biomedical images to improve image quality and extract useful information, giving microscopes a pair of "eagle eyes". In the future, with the help of AI, microscopes will see more clearly and process data faster, more automatically, and more accurately, making scientific research more efficient and easier.
References:
1. https://www.nature.com/articles/s41592-024-02244-3
2. https://news.fudan.edu.cn/2024/0413/c5a140009/page.htm
3. https://new.qq.com/rain/a/20240417A06LF900
4. http://www.phirda.com/artilce_28453.html?cId=1
5. https://www.ebiotrade.com/newsf/2024-4/20240412015712482.htm