Microsoft Copilot AI attack only required one click to compromise users – here’s what we know


  • Varonis discovers a new method of prompt injection via malicious URL parameters, called “Reprompt”.
  • Attackers could trick GenAI tools into disclosing sensitive data with a single click
  • Microsoft fixed the flaw, blocking rapid injection attacks via URLs

Security researchers Varonis have discovered Repompt, a new way to perform prompt injection attacks in Microsoft Copilot that does not include sending an email with a hidden prompt or hiding malicious commands in a compromised website.

Similar to other quick injection attacks, this one also only takes a single click.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top