OpenAI Sets Sights On New Preparedness Leader Amid Expanding AI Risks
OpenAI is offering more than $555,000 per year for a Head of Preparedness tasked with containing risks from increasingly capable AI systems. Sam Altman has described the job as anxiety-inducing and warned that whoever steps in will be thrown into the deep end almost immediately. The company frames the position as critical at a moment when its technology is advancing at an accelerating pace.
Rapid improvements in AI models are creating new opportunities while also exposing fresh challenges. Mental health concerns grew evident by 2025 as some users turned to chatbots for emotional support in ways that sometimes exacerbated psychological distress. Increasingly sophisticated models have also begun identifying sensitive cybersecurity vulnerabilities, prompting further scrutiny.
Widespread adoption of ChatGPT has familiarized millions with AI-driven assistants for tasks such as research, travel planning and everyday communication. Some users have come to rely on these tools as a substitute for therapy, even though such interactions have occasionally reinforced delusional thinking or harmful behavior.
Safety Culture Concerns and High-Profile Departures
OpenAI announced partnerships with mental health experts in October to refine how ChatGPT responds when users show signs of psychosis or tendencies toward self-harm. The company has long argued that its mission centers on developing AI that benefits humanity, yet former employees contend that the push toward product releases and profitability weakened its internal emphasis on safety.
Jan Leike, former head of the now-disbanded safety team, wrote in May 2024 that the organization had lost its way on responsible development practices. He stressed that building systems smarter than humans is inherently dangerous and said a culture of safety had taken a back seat to ever more impressive product launches. Daniel Kokotajlo resigned shortly afterward, saying he was losing trust in the company's approach to AGI-era responsibilities.
Former Head of Preparedness Aleksander Madry transitioned to a different role in July 2024, leaving the position open at a pivotal time. The new leader will sit within the Safety Systems group, overseeing capability evaluations, threat modeling and risk-mitigation frameworks across a unified safety structure. Compensation stands at $555,000 annually plus equity, underscoring the role's significance inside the company.