Stories about people forming emotional bonds with AI are appearing more often, but Anthropic just released numbers suggesting the practice is far less common than it might seem. Analyzing 4.5 million conversations with Claude, the company found that only 2.9 percent of users engage with it for emotional support or personal advice.

Anthropic was careful to emphasize that while sentiment often improves over the course of a conversation, Claude is not a digital therapist. It rarely pushes back outside of safety concerns, meaning it will decline to give medical advice and will steer people away from self-harm.
