- Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect GraphRAG systems.
- AURA deliberately poisons proprietary knowledge graphs so that stolen data produces hallucinations and wrong answers.
- Correct outputs require a secret key; tests showed approximately 94% effectiveness in degrading stolen KG utility.
Researchers from universities in China and Singapore have found a creative way to make data stolen from generative AI systems worthless to whoever takes it.
Modern large language model (LLM) systems draw on two important sources of knowledge: training data and retrieval-augmented generation (RAG).
Training data teaches an LLM how language works and gives it in-depth knowledge up to a certain cutoff point. It does not give the model access to new information, private documents, or rapidly changing facts; once training is complete, that knowledge is frozen.
Replacing outdated knowledge
RAG, on the other hand, exists because many real-world questions depend on current, specific, or proprietary data (such as company policies, recent news, internal reports, or specialized technical documents). Instead of retraining the model every time the data changes, RAG allows the model to retrieve relevant information on demand and then write a response based on it.
In 2024, Microsoft proposed GraphRAG, a version of RAG that organizes retrieved information as a knowledge graph instead of a flat list of documents. This helps the model understand how entities, facts, and relationships connect to each other. As a result, AI can answer more complex questions, track connections between concepts, and reduce contradictions by reasoning about structured relationships rather than isolated texts.
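To make the idea concrete, here is a minimal, hypothetical sketch of what graph-structured retrieval looks like in code. The entities, relations, and the retrieve_context helper are invented for illustration and are not taken from Microsoft's GraphRAG implementation.

```python
# A toy knowledge graph: facts stored as (subject, relation, object) triples,
# with retrieval done by walking connected entities rather than matching
# isolated documents.
from collections import defaultdict

knowledge_graph = defaultdict(list)
triples = [
    ("AcmeCorp", "acquired", "WidgetWorks"),
    ("WidgetWorks", "manufactures", "industrial sensors"),
    ("AcmeCorp", "headquartered_in", "Singapore"),
]
for subject, relation, obj in triples:
    knowledge_graph[subject].append((relation, obj))

def retrieve_context(entity: str, depth: int = 2) -> list[str]:
    """Collect facts reachable from an entity up to a given depth."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, obj in knowledge_graph[node]:
                facts.append(f"{node} {relation} {obj}")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

# The connected facts would then be handed to an LLM as grounding context.
print(retrieve_context("AcmeCorp"))
```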
Since these knowledge graphs are expensive to build and commercially valuable, they make an attractive target for cybercriminals, nation states, and other malicious actors.
In their research paper, titled Making Theft Useless: Adulteration-Based Protection of Proprietary Knowledge Graphs in GraphRAG Systems, authors Weijie Wang, Peizhuo Lv et al. proposed a defense mechanism called Active Utility Reduction via Adulteration (AURA), which deliberately poisons knowledge graphs so that LLMs relying on a stolen copy hallucinate and give wrong answers.
The only way to get correct answers is to hold a secret key. The researchers acknowledge the system is not flawless, but say it proved effective in roughly 94% of cases.
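The paper's exact mechanism isn't detailed here, but the broad idea of key-gated adulteration can be illustrated with a toy sketch. The keyed-decoy scheme, SECRET_KEY, and adulterate helper below are assumptions for illustration only and do not reproduce AURA's actual algorithm.

```python
# Conceptual illustration only (not the authors' published method): the
# proprietary triple store ships with deliberately wrong objects, and a
# secret key lets the legitimate pipeline recover the true facts.
import hashlib
import hmac

SECRET_KEY = b"owner-only-key"          # held only by the legitimate operator
DECOYS = ["Berlin", "Toronto", "Lima"]  # plausible-looking wrong answers

def _tag(subject: str, relation: str) -> str:
    """Deterministic keyed tag for a (subject, relation) pair."""
    msg = f"{subject}|{relation}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def adulterate(subject: str, relation: str, true_obj: str) -> tuple[str, str]:
    """Return (decoy stored in the shipped KG, true fact kept in a keyed vault)."""
    decoy = DECOYS[int(_tag(subject, relation), 16) % len(DECOYS)]
    return decoy, true_obj

# Anyone who copies the knowledge graph without the key retrieves the decoy,
# so a GraphRAG system built on the stolen copy answers incorrectly.
stored_decoy, vault_truth = adulterate("AcmeCorp", "headquartered_in", "Singapore")
print("stolen KG says:", stored_decoy)
print("keyed pipeline restores:", vault_truth)
```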
“By degrading the stolen KG utility, AURA provides a practical solution to protect intellectual property in GraphRAG,” the authors said.
Via The Register




