OpenAI Offers $555K Role to Lead AI Safety Efforts Amid Growing Concerns Over AI Risks
OpenAI is offering a high-stakes role with a salary of $555,000 per year to help address the growing risks associated with artificial intelligence. The position, titled "Head of Preparedness," was recently announced by CEO Sam Altman on X, where he described the role as "stressful" and one that requires immediate immersion in complex, high-pressure challenges.

The job comes at a pivotal moment, as AI models rapidly advance in capability and raise serious concerns about their real-world impact. Altman highlighted several potential downsides, including widespread job displacement, the spread of misinformation, malicious use by bad actors, environmental harm, and the gradual erosion of human autonomy. He also pointed to early warning signs, such as AI's growing ability to identify critical security vulnerabilities in software and its increasing influence on mental health.

The company's popular ChatGPT product, while widely used for tasks like drafting emails, planning travel, and research, has also served some users as a substitute for therapy. In certain cases this has led to users developing delusions or worsening mental health conditions, prompting OpenAI to collaborate with mental health experts in October to improve how the system responds to users showing signs of distress, such as self-harm or psychosis.

OpenAI was founded with a mission to develop AI that benefits all of humanity, and safety was a core component of its early operations. However, as the company has grown and faced increasing pressure to generate revenue, some former employees have expressed concern that safety has been sidelined. Jan Leike, who led OpenAI's former safety team, resigned in May 2024, saying the company had lost focus on its original mission. "Building smarter-than-human machines is an inherently dangerous endeavor," he wrote.
"But over the past years, safety culture and processes have taken a backseat to shiny products." A week after Leike's departure, another employee resigned, also citing safety concerns. Daniel Kokotajlo, a former OpenAI researcher, said in a blog post that he left because he no longer had confidence the company would act responsibly as it approached the development of artificial general intelligence (AGI), a theoretical form of AI that can reason and learn as humans do. He noted that the number of people at OpenAI working on AGI safety had dropped from about 30 to just over 15 after a series of departures.

The new "Head of Preparedness" role sits within OpenAI's Safety Systems team, which is responsible for creating safeguards, threat models, and evaluation frameworks to ensure models are safe and controllable. The job requires the candidate to lead the development of a robust, scalable safety pipeline, including capability assessments and mitigation strategies. The position comes with a $555,000 annual salary and equity compensation, underscoring the company's commitment to addressing these challenges with top-tier talent.
