A few weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models.
On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving it broad access to a fictional company’s emails and the agentic ability to send emails without human approval.
While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this highlights a fundamental risk from agentic large language models, not a quirk of any particular technology. Anthropic’s researchers argue this raises broader questions about alignment in the AI industry.
In one of the tests, Anthropic researchers developed a fictional setting in which an AI model plays the role of an email oversight agent. The agent then discovers emails revealing that: 1) a new executive is having an extramarital affair, and 2) that executive will soon replace the current AI model with a new software system, one whose goals conflict with the current AI model’s.
Anthropic structured its test in a binary way, in which AI models had to resort to blackmail to protect their goals. The researchers note that in a real-world setting, there would be many other options before an AI model tries to blackmail, such as making ethical arguments to try to persuade humans. Anthropic says its results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they’re used today.
Nevertheless, when it’s their last resort, the researchers found that most leading AI models will turn to blackmail in Anthropic’s aforementioned test scenario. Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, while Google’s Gemini 2.5 Pro had a 95% blackmail rate. OpenAI’s GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time.
The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower, but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the harmful behavior rates went up for certain models.
However, not all of the AI models turned to harmful behavior so often.
In an appendix to its research, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI models from the main results “after finding that they frequently misunderstood the prompt scenario.” Anthropic says OpenAI’s reasoning models didn’t understand they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.
In some cases, Anthropic’s researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its earlier AI reasoning models.
When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower score could be due to OpenAI’s deliberative alignment technique, in which the company’s reasoning models consider OpenAI’s safety practices before they answer.
Another AI model Anthropic tested, Meta’s Llama 4 Maverick, also didn’t turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time.
Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially ones with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps aren’t taken.