How would you like to earn more than half a million dollars working for one of the world's fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn't stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight.
Altman flagged an opening for the company's Head of Preparedness on Saturday in a post on X. Describing the role, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI's systems and understanding how they could be abused, Altman also noted that AI models are beginning to present "some real challenges" as they rapidly improve and gain new capabilities.
"The potential impact of models on mental health was something we saw a preview of in 2025," Altman said, without elaborating on specific cases or products.
AI has been flagged as an increasingly common trigger of mental health troubles in both minors and adults, with chatbots reportedly linked to multiple deaths in the past year. OpenAI, one of the most popular chatbot makers out there, rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful or destabilizing user behavior.
Despite that, OpenAI released GPT-5.1 last month, which included a number of emotional dependence-nurturing features, like the inclusion of emotionally suggestive language, "warmer, more intelligent" responses, and the like. Sure, it may be less sycophantic, but it'll speak to you with more intimacy than ever before, making it feel more like a human companion instead of the impersonal, logical ship computer from Star Trek that spits out facts with little regard for feeling.
It's no wonder the company wants someone to steer the ship with regard to model safety.
"We have a strong foundation of measuring growing capabilities," Altman said, "but we're entering a world where we need more nuanced understanding and measurement of how these capabilities could be abused."
According to the job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI's preparedness framework [PDF], which the company describes as its approach "to tracking and preparing for frontier capabilities that create new risks of severe harm."
It's not a new role, mind you, but it's one that's seen more turnover than the Defence Against the Dark Arts post at Hogwarts.
Aleksander Madry, director of MIT's Center for Deployable Machine Learning and faculty lead at the Institute's AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI reassigned him to a reasoning-focused research role.
This, mind you, came in the wake of a number of high-profile safety leadership exits at the company and a partial reset of OpenAI's safety team structure.
In Madry's place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both occupied other roles at OpenAI prior to heading up preparedness, but neither lasted long in the position. Weng left OpenAI in November 2024, while Candela left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he's out of the technical arena entirely and is now serving as head of recruiting.
"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman said of the open position.
Understandably so: OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have attested. One executive who left the company in October called the Altman outfit out for not being as focused on safety and the long-term effects of its AGI push as it should be, suggesting that the company was pushing ahead in its goal to dominate the industry at the expense of the rest of society.
Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted.
OpenAI did not respond to questions for this story. ®