New JAMA Report Outlines Roadmap for Safer, More Effective AI in Health Care with Calls for Stronger Oversight, Real-World Evaluation, and National Data Infrastructure
A new report published on October 13, 2025, in the Journal of the American Medical Association presents a comprehensive roadmap for the responsible integration of artificial intelligence in health care. Co-authored by Michelle Mello, professor of law and health policy at Stanford Law School and Stanford University School of Medicine, the report, titled "AI, Health, and Health Care Today and Tomorrow," calls for urgent changes to ensure that AI adoption enhances patient outcomes, not just operational efficiency.

The report emerged from the 2024 JAMA Summit on Artificial Intelligence, a high-level gathering of more than 60 leaders from medicine, law, policy, and industry. It is part of JAMA's ongoing series, launched in 2023, to foster cross-sector dialogue and drive practical solutions to critical health policy challenges.

Mello, a member of the National Academy of Medicine, highlights a growing gap: AI is being adopted rapidly across health care, but regulatory and evaluation systems have not kept pace. "AI is being adopted at remarkable speed in the health care sector, but our systems for evaluating and regulating it haven't kept pace," she said. "This report identifies concrete steps to make AI's integration more transparent, effective, and fair."

While AI holds immense promise, including reducing administrative burdens, improving diagnostics, personalizing treatment, and expanding access to underserved communities, the authors warn that without proper safeguards these benefits may be limited, inequitable, or even harmful.

The report outlines four key priorities for responsible AI integration:

First, multistakeholder engagement throughout the entire life cycle of AI tools, involving developers, clinicians, patients, health systems, and regulators from design through deployment and monitoring.

Second, the development of robust evaluation tools that measure real-world clinical effectiveness, not just technical performance in controlled settings. The authors stress the need for rapid, scalable methods to assess outcomes across diverse populations and care environments.

Third, the creation of a national data infrastructure to support continuous learning across health systems. Modeled on the FDA's Sentinel Initiative, which monitors medical product safety using distributed health data networks, such a system would enable faster detection of both benefits and unintended harms.

Fourth, stronger regulatory frameworks and incentives to ensure accountability. The report urges the FDA and other federal agencies to expand their oversight, particularly for AI tools that affect patient care. It also calls for funding mechanisms, clearer rules, and aligned incentives to encourage developers and health systems to participate in evaluation and compliance.

Currently, many AI tools in health care fall outside FDA oversight. Clinical tools such as sepsis prediction systems or AI scribes that transcribe conversations and suggest treatments are sometimes regulated, but are often not required to prove real-world effectiveness. Meanwhile, business-focused tools, such as those for prior authorization or scheduling, along with thousands of direct-to-consumer wellness apps, are typically exempt from rigorous review, despite their potential to influence patient access and outcomes.

"Hospitals are adopting AI tools faster than they can realistically evaluate them, and most don't have the infrastructure or resources to run rigorous assessments in-house," Mello said. "Right now, oversight is mostly about process and safety checks, like preventing algorithmic errors or meeting transparency requirements, not about whether these tools actually improve health."

The goal is not to slow innovation, but to ensure that when AI is used in health care, it delivers tangible, measurable, and equitable benefits for patients.
