HyperAI


OpenAI Hires Anthropic Safety Researcher Dylan Scand as Head of Preparedness Amid Rising AI Risk Concerns

OpenAI has hired Dylan Scand, a former AI safety researcher at Anthropic, to fill its newly created head of preparedness role, a high-stakes position focused on managing risks from advanced AI systems. The move comes amid growing concerns about the safety and societal impact of rapidly advancing artificial intelligence. Scand will oversee efforts to anticipate and mitigate severe risks associated with increasingly powerful AI models. The role carries a compensation package of up to $555,000 annually, plus equity, making it one of the best-paid safety positions in the industry.

CEO Sam Altman announced the hire in a post on X, expressing strong enthusiasm. “Extremely excited to welcome Dylan to OpenAI,” Altman wrote. “Things are about to move quite fast and we will be working with extremely powerful models soon. Dylan will lead our efforts to prepare for and mitigate these severe risks. He is by far the best candidate I have met, anywhere, for this role.”

In his own statement on X, Scand reflected on his time at Anthropic, saying he was “deeply grateful for my time at Anthropic and the extraordinary people I worked alongside.” He acknowledged the rapid pace of AI development, noting that while the potential benefits are immense, so too are the risks of extreme and potentially irreversible harm.

The role has drawn attention not only for its high pay but also for its intensity. Last month, Altman described the position as “stressful,” warning that candidates would “jump into the deep end almost immediately.” The job posting emphasized the need for deep expertise in machine learning, AI safety, and risk management, along with the ability to lead technical teams, make high-pressure decisions under uncertainty, and align diverse stakeholders around safety priorities.

OpenAI has faced increasing scrutiny over its safety practices. Several key figures, including a former head of its safety team, have left the company in recent years, and the organization has been named in lawsuits alleging that its AI tools contributed to harmful behaviors. In October, OpenAI disclosed that an estimated 560,000 ChatGPT users per week exhibited possible signs of mental health emergencies, prompting the company to begin consulting mental health experts on improving how the system responds to users showing signs of distress or unhealthy dependence.

With Scand now in place, OpenAI is signaling a renewed focus on proactive risk management as it moves toward deploying more advanced AI systems.
