AI Must Be Developed with Safety as a Priority, Experts Urge
Artificial intelligence tools are increasingly being used to summarize research, draft papers, and generate policy documents. While these capabilities offer valuable assistance, they also pose serious risks. As highlighted in a recent Nature article, AI systems can omit crucial perspectives, fabricate information, and produce false references, commonly referred to as "hallucinations." These shortcomings can undermine the integrity of scientific communication and decision-making.

Given the growing reliance on AI across academia, industry, and governance, it is essential to prioritize safety and accountability in AI development and deployment. This requires more than technical fixes: it demands a proactive commitment to ethical principles, rigorous oversight, and transparency. Researchers and institutions must establish clear guidelines for when and how AI tools can be used in scholarly work. Peer reviewers and editors should be trained to detect AI-generated content and assess its reliability. Journals should consider requiring disclosure of AI use in submissions, much as they do for human co-authors or funding sources.

Moreover, developers of AI systems must be held accountable for the quality and accuracy of their models. This includes investing in better training data, improving fact-checking mechanisms, and enabling users to trace the origins of generated content. Open-source models and independent audits can help build trust and ensure that AI systems are not only powerful but also trustworthy.

Education also plays a critical role. Students and early-career researchers need to be taught not only how to use AI tools but also how to critically evaluate their outputs. The goal should not be to replace human judgment but to enhance it.

In short, the benefits of AI in science and policy must be balanced with a deep commitment to accuracy, fairness, and integrity. Taking the time to ensure AI is safe is not a luxury; it is a necessity for maintaining public trust and advancing knowledge responsibly.
