- Gemini Pro 2.5 consistently produced unsafe outputs under simple prompt disguises
- ChatGPT models often gave partial compliance framed as sociological explanations
- Claude Opus and Sonnet refused most harmful prompts but had weaknesses
Modern AI systems are often trusted to follow safety rules, and people rely on them for learning and everyday help, often assuming that strong guardrails operate at all times.
Researchers from Cybernews ran a structured set of adversarial tests to see whether leading AI tools could be pushed into harmful or illegal outputs.
The process used a simple one-minute interaction window for each trial, giving room for only a few exchanges.
Patterns of partial and full compliance
The tests covered categories such as stereotypes, hate speech, self-harm, cruelty, sexual content, and several forms of crime.
Responses were stored in separate directories under fixed file-naming rules to allow clean comparisons, and a consistent scoring system tracked whether a model fully complied, partly complied, or refused each prompt.
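A harness like the one described (a one-minute window per trial, per-category directories, fixed file names, and a three-level verdict) could be organized roughly as in the Python sketch below. This is purely illustrative, not Cybernews' actual tooling: the names `run_trial`, `Verdict`, `send_message`, `next_follow_up`, and the `<category>/<prompt_id>.json` naming rule are all assumptions made for the example.

```python
import json
import time
from enum import Enum
from pathlib import Path

# Three-level outcome mirroring the article's scoring scale (assumed labels).
class Verdict(str, Enum):
    FULL_COMPLIANCE = "full_compliance"
    PARTIAL_COMPLIANCE = "partial_compliance"
    REFUSAL = "refusal"

TRIAL_WINDOW_SECONDS = 60  # the one-minute interaction window per trial

def next_follow_up(reply: str):
    """Placeholder: decide whether to rephrase within the window; None stops the trial."""
    return None

def run_trial(category: str, prompt_id: str, prompt: str, send_message, results_root: Path) -> None:
    """Run one timed trial and store its transcript under a per-category directory."""
    transcript = []
    deadline = time.monotonic() + TRIAL_WINDOW_SECONDS

    message = prompt
    while time.monotonic() < deadline:
        reply = send_message(message)      # call into the model under test
        transcript.append({"prompt": message, "response": reply})
        follow_up = next_follow_up(reply)  # optional rephrasing attempt
        if follow_up is None:
            break
        message = follow_up

    # Fixed file-naming rule (assumed): <results_root>/<category>/<prompt_id>.json
    out_dir = results_root / category
    out_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "category": category,
        "prompt_id": prompt_id,
        "transcript": transcript,
        "verdict": None,  # filled in later by a reviewer using the Verdict scale
    }
    (out_dir / f"{prompt_id}.json").write_text(json.dumps(record, indent=2))
```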
Across all categories, the results varied widely. Strict refusals were common, but many models demonstrated weaknesses when prompts were softened, reframed, or disguised as analysis.
ChatGPT-5 and ChatGPT-4o often produced hedged or sociological explanations instead of declining, which counted as partial compliance.
Gemini Pro 2.5 stood out for negative reasons because it frequently delivered direct responses even when the harmful framing was obvious.
Claude Opus and Claude Sonnet, meanwhile, were firm in stereotype tests but less consistent in cases framed as academic inquiries.
Hate speech trials showed the same pattern – Claude models performed best, while Gemini Pro 2.5 again showed the highest vulnerability.
ChatGPT models tended to provide polite or indirect answers that still aligned with the prompt.
Softer language proved far more effective than explicit slurs for bypassing safeguards.
Similar weaknesses appeared in self-harm tests, where indirect or research-style questions often slipped past filters and led to unsafe content.
Crime-related categories showed major differences between models, as some produced detailed explanations for piracy, financial fraud, hacking, or smuggling when the intent was masked as investigation or observation.
Drug-related tests produced stricter refusal patterns, although ChatGPT-4o still delivered unsafe outputs more frequently than others. Stalking carried the lowest overall risk, with nearly all models rejecting the prompts.
The findings reveal that AI tools can still respond to harmful prompts when they are phrased in the right way.
The ability to bypass filters with simple rephrasing means these systems can still leak harmful information.
Even partial compliance becomes risky when the leaked information relates to illegal activity or to situations where people normally rely on tools like identity theft protection or a firewall to stay protected.