OpenAI Appoints Anthropic Safety Researcher to Lead AI Risk Preparedness
OpenAI has strengthened its safety leadership by appointing Dylan Scand, a former AI safety researcher from rival lab Anthropic, as its new head of preparedness. The role, which comes with a compensation package of up to $555,000 annually plus equity, has drawn significant attention amid growing concerns over the risks posed by increasingly powerful AI systems.

In a post on X, OpenAI CEO Sam Altman announced the hire with enthusiasm, stating he is “extremely excited” to welcome Scand. He emphasized the urgency of the moment, noting that the company is on the cusp of working with extremely advanced models. “Dylan will lead our efforts to prepare for and mitigate these severe risks,” Altman wrote, adding that Scand is, in his view, the best candidate he has encountered for the role.

Scand, who previously contributed to safety research at Anthropic, shared his thoughts on the transition, expressing deep gratitude for his time at the company and the colleagues he worked with. He acknowledged the rapid pace of AI development, highlighting both the immense potential benefits and the serious risk of irreversible harm if the technology is not managed properly.

The position was spotlighted last month due to its high compensation and the critical nature of its responsibilities. Altman himself described the role as “stressful,” warning that candidates would be thrust into high-pressure situations almost immediately. OpenAI’s job description underscores the need for someone capable of leading technical teams, making decisive choices under uncertainty, and aligning diverse stakeholders around safety priorities. Ideal candidates are expected to possess deep expertise in machine learning, AI safety, and emerging risk domains.

The appointment comes amid mounting scrutiny of OpenAI’s safety practices. Over recent years, several key figures from the company’s early safety team have departed, raising questions about internal cohesion and the firm’s long-term commitment to risk mitigation. OpenAI has also faced legal challenges from users alleging that its tools contributed to harmful behaviors. In October, the company revealed that approximately 560,000 ChatGPT users per week exhibited signs of potential mental health emergencies, prompting it to collaborate with mental health experts to improve the system’s response to distress signals and unhealthy user dependencies.

With Scand now at the helm of preparedness, OpenAI signals a renewed focus on proactive risk management as it advances toward increasingly capable AI systems. His background in safety research at a leading competitor lends credibility to the company’s efforts to bolster trust and accountability along its rapid development trajectory.
