Researchers poison their own data so that stealing it for AI ruins the results


  • Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect GraphRAG systems.
  • AURA deliberately poisons proprietary knowledge graphs (KGs) so that stolen data produces hallucinations and wrong answers.
  • Correct outputs require a secret key; tests showed approximately 94% effectiveness in degrading the utility of stolen KGs (a simplified sketch of the idea follows below).

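To make the key-gated idea concrete, here is a minimal, hypothetical sketch of how a knowledge graph could be poisoned with decoy facts that only a secret key can filter out. This is not the AURA algorithm from the paper; the HMAC tagging, the `poison`/`retrieve` helpers, and all names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of key-gated knowledge-graph poisoning (NOT the paper's AURA
# method): genuine triples carry a valid keyed tag, decoys carry a random one, and
# only the owner holding the key can tell them apart.
import hmac
import hashlib
import os
import random

SECRET_KEY = b"owner-only-secret"  # assumption: known only to the KG owner


def tag(triple, key):
    """Deterministic authentication tag for a (subject, predicate, object) triple."""
    return hmac.new(key, "|".join(triple).encode(), hashlib.sha256).digest()


def poison(true_triples, decoy_objects, key=SECRET_KEY, seed=0):
    """Interleave genuine triples (valid tags) with decoy triples (random tags)."""
    rng = random.Random(seed)
    store = []
    for s, p, o in true_triples:
        store.append(((s, p, o), tag((s, p, o), key)))   # genuine fact, verifiable
        fake = (s, p, rng.choice(decoy_objects))
        store.append((fake, os.urandom(32)))             # decoy fact, unverifiable
    rng.shuffle(store)
    return store


def retrieve(store, key=None):
    """Without the key, decoys are indistinguishable; with it, they are filtered out."""
    if key is None:
        return [t for t, _ in store]                      # thief sees truth and decoys mixed
    return [t for t, sig in store if hmac.compare_digest(sig, tag(t, key))]


facts = [("Aspirin", "treats", "headache")]
kg = poison(facts, decoy_objects=["insomnia", "bone fracture"])
print(retrieve(kg))              # misleading mixture of real and fake facts
print(retrieve(kg, SECRET_KEY))  # only the genuine triple survives
```

A thief who copies the store gets a graph that looks complete but answers questions wrongly, while the legitimate owner keeps serving correct results with the key.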
Researchers from universities in China and Singapore have found a creative way to make the data used in generative AI worthless to thieves.

Among other things, current large language model (LLM) applications rely on two important sources of knowledge: the model's training data and retrieval-augmented generation (RAG), which feeds external documents or knowledge-graph facts into the model at query time.
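In a GraphRAG setup, that retrieval step simply pulls triples about the queried entity out of a knowledge graph and pastes them into the prompt, so whatever sits in the graph, genuine or poisoned, directly shapes the model's answer. The toy graph and prompt wording below are invented for illustration, not taken from the paper.

```python
# Rough sketch of a GraphRAG retrieval step: facts about the query entity are
# looked up in a knowledge graph and prepended to the prompt as grounding context.
from collections import defaultdict

# toy knowledge graph as an adjacency list of (predicate, object) pairs
kg = defaultdict(list)
for s, p, o in [
    ("Aspirin", "treats", "headache"),
    ("Aspirin", "treats", "insomnia"),   # a poisoned triple a thief cannot spot
    ("Aspirin", "is_a", "NSAID"),
]:
    kg[s].append((p, o))


def build_prompt(question: str, entity: str) -> str:
    """Retrieve the entity's triples and prepend them as context for the LLM."""
    facts = "\n".join(f"- {entity} {p} {o}" for p, o in kg[entity])
    return f"Use only these facts:\n{facts}\n\nQuestion: {question}"


print(build_prompt("What does Aspirin treat?", "Aspirin"))
# A model grounded on this context will dutifully repeat the poisoned 'insomnia' fact.
```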
