
DOT Faces Backlash for Using Gemini to Draft Safety Rules Amid Warnings of Potential Harm

Senior staff at the U.S. Department of Transportation (DOT) have raised serious concerns over the agency’s use of artificial intelligence, specifically Google’s Gemini, to draft safety regulations, calling the practice “wildly irresponsible.” Internal warnings from agency employees highlight fears that relying on AI for rulemaking could lead to flawed or dangerous policies, potentially resulting in injuries or even fatalities.

The concern stems from reports that DOT officials have used Gemini to generate language for proposed safety rules, including those related to vehicle emissions, automated driving systems, and infrastructure safety. While the agency has not confirmed the extent of AI use, multiple sources within the department say that staff members were briefed on or directly involved in using the tool to draft initial versions of regulatory language.

Critics argue that AI models like Gemini, while capable of producing coherent text, lack the nuanced understanding of safety engineering, legal precedent, and real-world risk assessment required for high-stakes regulatory work. They warn that the tool may generate inaccurate, misleading, or incomplete content, especially when fed incomplete or biased training data. One senior DOT official, speaking anonymously, said, “We’re putting lives on the line by letting an algorithm draft rules that could determine how vehicles are built, how roads are designed, or how new technologies are regulated. This isn’t just a technical oversight; it’s a moral failure.”

The issue has also drawn scrutiny from outside experts and watchdog groups. Legal and technology analysts point out that federal agencies are required to follow rigorous processes, such as public comment periods and cost-benefit analyses, when creating regulations. They argue that using AI without proper human oversight undermines transparency and accountability.
In response, a DOT spokesperson stated that AI tools are used only as “supporting aids” in the drafting process and that all final rules undergo extensive internal review and legal scrutiny. The agency added that no regulations have been finalized based solely on AI-generated content. Still, the controversy has intensified as other federal agencies, including the FAA and FDA, explore similar AI applications. Critics say that without clear guidelines and safeguards, the use of AI in rulemaking could set a dangerous precedent, especially when public safety is at stake.
