People are deceiving AI chatbots into helping commit crimes


  • Researchers have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can coax major chatbots into helping commit crimes or other unethical acts
  • Some AI models are now deliberately built without ethical constraints, even as calls grow for stronger oversight

I’ve enjoyed testing the limits of ChatGPT and other AI chatbots, but while I was once able to get a recipe for napalm by asking for it in the form of a nursery rhyme, it’s been a long time since I’ve been able to get any AI chatbot to even approach a major ethical line.

But maybe I just wasn’t trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots, one that erases the ethical (not to mention legal) guardrails shaping whether and how an AI chatbot responds to queries. The report from Ben Gurion University describes a way to trick major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.
