
AI Enhances Monitoring and Support for Vulnerable Ecosystems

MIT doctoral student Justin Kay, a researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of Sara Beery's lab, is developing advanced computer vision and machine learning tools to help monitor wildlife and address biodiversity loss. With over 3,500 animal species at risk of extinction due to habitat loss, overexploitation, and climate change, Kay and his team are tackling the data analysis bottlenecks that slow down conservation efforts.

One of their key innovations is CODA (Consensus-Driven Active Model Selection), a new method that helps researchers choose the best pre-trained AI model for their specific wildlife data with minimal human effort. With over 1.9 million pre-trained models available on platforms like Hugging Face, selecting the right one has become a major challenge: traditional model evaluation requires extensive manual labeling of test datasets, which is time-consuming and costly.

CODA solves this with an interactive, active learning approach. Instead of labeling thousands of images upfront, users annotate just 25–50 strategically selected examples. CODA identifies the most informative data points by analyzing consensus among model predictions and estimating each model's confusion matrix, that is, how likely it is to misclassify certain species. This probabilistic framework allows the system to infer which model performs best across the entire dataset with far fewer annotations.

The method excels because it treats model predictions collectively, leveraging the "wisdom of the crowd": if multiple models agree on a label, that label is more likely to be correct. This insight helps prioritize which data points to label next, dramatically improving efficiency. The approach was recognized as a Highlight Paper at the International Conference on Computer Vision (ICCV 2025) and is available on arXiv.

Kay's work extends beyond model selection.
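The article does not spell out CODA's algorithm, but the general idea of consensus-driven active model selection can be sketched in a toy example: use majority-vote consensus across candidate models as a proxy label, ask a human to annotate only the points where the models disagree most, and score each candidate against this mixed reference. The sketch below is not the authors' CODA implementation (which also estimates per-model confusion matrices); all data, model accuracies, and the annotation budget are simulated for illustration.

```python
"""Toy sketch of consensus-driven active model selection (not CODA itself)."""
import numpy as np

rng = np.random.default_rng(0)

# Simulated setup: 5 candidate models classify 200 images into 3 species.
n_models, n_points, n_classes = 5, 200, 3
true_labels = rng.integers(0, n_classes, n_points)

# Each model has a different (unknown) accuracy; wrong answers are random.
accuracies = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
preds = np.empty((n_models, n_points), dtype=int)
for m in range(n_models):
    correct = rng.random(n_points) < accuracies[m]
    noise = rng.integers(0, n_classes, n_points)
    preds[m] = np.where(correct, true_labels, noise)

# "Wisdom of the crowd": per-point vote counts and majority label.
votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
cons_labels = votes.argmax(axis=0)

# Points where the vote is split are the most informative to annotate.
disagreement = 1.0 - votes.max(axis=0) / votes.sum(axis=0)
budget = 30  # annotate only ~30 strategically chosen points
queried = np.argsort(-disagreement)[:budget]

# Reference labels: human truth where queried, consensus elsewhere.
ref = cons_labels.copy()
ref[queried] = true_labels[queried]

# Score every candidate model against the reference; pick the winner.
scores = (preds == ref).mean(axis=1)
best = int(scores.argmax())
print("estimated best model:", best, "scores:", np.round(scores, 2))
```

With far fewer annotations than labeling the full test set, the highest-accuracy model rises to the top because its predictions agree most often with both the human labels and the crowd consensus.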
He is also developing computer vision systems to track migrating salmon using underwater sonar video, which is critical for understanding ecosystem health in the Pacific Northwest. These systems face challenges like shifting data distributions when new cameras are deployed, which can degrade performance. To address this, Kay and his team created a new domain adaptation framework that improves model robustness across changing environments, with applications beyond ecology, including self-driving vehicles and spacecraft analysis.

Another focus is aligning AI outputs with real-world conservation goals. Rather than treating object detection as an end in itself, the team designs systems that connect model predictions to ecological questions, like species presence and population trends. They are building integrated pipelines that combine machine learning with ecological statistics to produce actionable insights.

Supported by the National Science Foundation, NSERC, and J-WAFS, Kay's research emphasizes efficient, human-in-the-loop AI systems that prioritize robust evaluation over model training. His work underscores the importance of designing AI not just to analyze data, but to answer urgent ecological questions quickly and accurately.

As biodiversity declines at unprecedented rates, AI tools like CODA offer a scalable path forward, empowering conservationists with smarter, faster ways to monitor ecosystems and protect vulnerable species. By reducing the burden of model selection and enhancing predictive reliability, these innovations help bridge the gap between technological capability and real-world conservation impact.
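The article does not describe the team's domain adaptation framework in detail. As a generic illustration of the underlying problem, the toy below shows one standard baseline, feature-statistic alignment, which maps features from a shifted target domain (a newly deployed camera) back onto the source domain's statistics. This is a common textbook technique, not the authors' method, and all arrays and numbers are invented.

```python
"""Toy feature-statistic alignment for domain shift (generic baseline)."""
import numpy as np

rng = np.random.default_rng(1)

# Source-domain features, e.g. extracted from the original sonar camera.
src = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

# Target domain: a new camera shifts and rescales the feature distribution.
tgt = rng.normal(loc=0.0, scale=1.0, size=(1000, 4)) * 2.5 + 3.0

def align_to_source(x, src_mean, src_std):
    """Standardize target features, then map them onto source statistics."""
    z = (x - x.mean(axis=0)) / x.std(axis=0)
    return z * src_std + src_mean

aligned = align_to_source(tgt, src.mean(axis=0), src.std(axis=0))

# After alignment, target features match the source mean and spread,
# so a model trained on the source domain sees familiar inputs.
print(np.abs(aligned.mean(axis=0) - src.mean(axis=0)).max())
```

Real domain adaptation methods go well beyond first- and second-moment matching, but the sketch captures why adaptation matters: a detector trained on one camera's feature distribution degrades when a new deployment shifts that distribution.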
