The researcher turns the revealing keys of the safety keys – saying “I abandon”


  • Experts show how some AI models, including GPT-4, can be used with simple user prompts
  • Gardement gaps do not do a great job to detect the trumpeur framing
  • Vulnerability could be used to acquire personal information

A security researcher shared details on how other researchers have deceived Chatgpt to reveal a Windows product key using a prompt that anyone could try.

Marco Figueroa explained how a prompt for “riddle game” with GPT-4 was used to bypass the security railings that aim to prevent AI from sharing such data, ultimately producing at least one key belonging to Wells Fargo Bank.

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top