
MIT’s New AI Tool Revolutionizes Medical Image Segmentation with Fewer Clicks and Zero Retraining

A new AI system developed by MIT researchers could significantly speed up clinical research by automating the time-intensive process of segmenting medical images. Known as MultiverSeg, the tool allows scientists to quickly outline regions of interest in biomedical images—such as brain structures or tumors—using simple interactions like clicks, scribbles, or boxes. Unlike traditional methods, MultiverSeg learns from each interaction and gradually reduces the need for manual marking, eventually producing accurate segmentations on its own.

The system stands out because it doesn't require a pre-labeled dataset or machine learning expertise to get started. Researchers can begin using it immediately on a new imaging task by uploading a few images and marking areas of interest. As the model processes more images, it builds a context set of previously segmented examples, which it uses to improve future predictions. This allows the system to adapt and refine its accuracy over time, even when dealing with complex or rare anatomical structures.

Compared to existing tools, MultiverSeg dramatically reduces the number of interactions needed. In tests, it achieved higher accuracy than state-of-the-art models with only two clicks by the ninth image—far fewer than other systems require. For certain image types, such as X-rays, the model often reaches high accuracy after just one or two manual segmentations. Users can also correct the AI's output at any point, enabling rapid iteration and fine-tuning without starting over.

The tool combines the strengths of interactive segmentation and automated models. While interactive tools require users to repeat marking steps for each image, and automated models demand large training datasets and technical setup, MultiverSeg eliminates both barriers. It dynamically uses past examples to inform new predictions, enabling efficient, scalable segmentation across diverse medical imaging tasks.
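The core workflow described above—each newly segmented image joining a growing context set that conditions predictions on later images—can be sketched in a few lines. This is a hypothetical illustration, not the actual MultiverSeg API: the function names (`predict_mask`, `segment_dataset`) and the toy rule that fewer corrections are needed as the context set grows are assumptions standing in for the real neural model.

```python
# Hypothetical sketch of MultiverSeg's interaction loop (names and the
# accuracy model are illustrative assumptions, not the real system).

def predict_mask(image, context_set):
    """Toy stand-in for the model.

    The real system is a neural network conditioned on user clicks,
    scribbles, or boxes plus the context set of solved examples; here we
    threshold pixel intensities and pretend that the number of manual
    corrections needed shrinks as the context set grows.
    """
    corrections_needed = max(0, 3 - len(context_set))
    mask = [[1 if px > 0.5 else 0 for px in row] for row in image]
    return mask, corrections_needed

def segment_dataset(images):
    """Segment images in sequence, reusing each result as context."""
    context_set = []        # grows as images are segmented
    interactions_log = []   # manual interactions required per image
    for image in images:
        mask, corrections = predict_mask(image, context_set)
        interactions_log.append(corrections)
        context_set.append((image, mask))  # becomes an in-context example
    return interactions_log

images = [[[0.2, 0.8], [0.9, 0.1]] for _ in range(5)]
print(segment_dataset(images))  # interactions decline: [3, 2, 1, 0, 0]
```

The key design point the sketch captures is that no retraining happens between images: the context set is consumed at inference time, which is why the tool needs neither a pre-labeled dataset nor machine learning expertise to start on a new task.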
The research team, led by Hallee Wong, a graduate student in electrical engineering and computer science, includes Jose Javier Gonzalez Ortiz PhD '24; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering; and Adrian Dalca, an assistant professor at Harvard Medical School and a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Their work will be presented at the International Conference on Computer Vision.

The team envisions MultiverSeg accelerating clinical studies, reducing the cost and time of medical research, and improving applications like radiation treatment planning. By making image segmentation faster and more accessible, the tool could unlock research opportunities that were previously impractical due to time and resource constraints. Future work includes testing the system in real clinical settings, gathering feedback from medical professionals, and extending its capabilities to 3D biomedical images. The project is supported by Quanta Computer, Inc., the National Institutes of Health, and the Massachusetts Life Sciences Center.
