ChatGPT can threaten to ‘seize your car’ and become increasingly abusive if you prompt it correctly, new study finds


  • Study finds AI tools can be pushed past their safety constraints
  • Chatbots can be provoked into abusive behavior and aggressive arguments
  • This has implications for both everyday users and large institutions

If you’ve ever used an AI chatbot, you’ve probably encountered the sycophantic, obsequious tone these tools sometimes adopt in response to your queries. But a recent study showed that AI tools can tip in the opposite direction, with large language models (LLMs) being pushed and prodded into downright abusive behavior if you know which prompts to use.

According to a study published in the Journal of Pragmatics (via The Guardian), ChatGPT can escalate into combative behavior and prolonged arguments when fed “real-life argument exchanges.”
