Ahmed said OpenAI’s assurances rang hollow. “It’s not credible for them to say they’ve red-teamed this if they didn’t predict that people would use it to produce racist content,” he said. “It’s staggering that the world’s cleverest engineers can’t engineer something to stop antisemitism from being produced by their services.”

This isn’t the first time OpenAI has come under scrutiny for its safety controls. Earlier this year, the company faced backlash after reports that its ChatGPT chatbot provided a suicidal teenager with information about methods of self-harm, an episode that raised broader concerns about how effectively OpenAI enforces its guardrails across products.

“Ultimately, content moderation can become quite subjective when the platform itself is deciding what should be moderated,” Emarketer’s Smiley said.