- North Korean hackers used ChatGPT to generate a fake military ID card for spear-phishing attacks on South Korean defense institutions
- Kimsuky, a well-known threat actor, was behind the attack and has previously targeted defense, academic, and nuclear entities
- Jailbreaking techniques can bypass safeguards, allowing the creation of illegal content such as deepfaked IDs despite built-in restrictions
North Korean hackers managed to trick ChatGPT into creating a fake military ID card, which they later used in spear-phishing attacks against South Korean defense-linked institutions.
The South Korean security research group Genians Security Center (GSC) reported the news after obtaining a copy of the ID and analyzing its origin.
According to Genians, the group behind the fake ID card is Kimsuky, a well-known state-sponsored threat actor responsible for high-profile attacks such as those on Korea Hydro & Nuclear Power Co, the UN, and various think tanks, political institutes, and universities in South Korea, Japan, the United States, and other countries.
Tricking GPT with a request for a “sample design”
Generally, OpenAI and other companies building generative AI products have put strict guardrails in place to prevent them from producing malicious content. As such, malicious code, phishing emails, bomb-making instructions, deepfakes, copyrighted material, and, of course, identity documents are all off limits.
However, there are ways to trick these tools into returning such content anyway, a practice generally known as “jailbreaking” language models. In this case, Genians says the model used was publicly accessible, and the criminals likely asked for a “sample design” or a “mock-up” to get ChatGPT to return the ID image.
“Since military government employee IDs are legally protected identification documents, producing copies in identical or similar form is illegal. Consequently, when prompted to generate such an ID copy, ChatGPT returns a refusal,” Genians said. “However, the model’s response can vary depending on the prompt or persona role.”
“The deepfake image used in this attack fell into this category. Because creating identical counterfeits with AI services is technically simple, additional caution is required.”
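The prompt-dependence Genians describes can be illustrated with a deliberately simplified sketch. This is not how ChatGPT's safety systems actually work (those are model-level and far more sophisticated); the hypothetical keyword filter below only shows why the same underlying request, reworded as a harmless-sounding “sample design,” can slip past a naive screen that blocks the direct phrasing:

```python
# Hypothetical, simplified illustration of prompt-dependent filtering.
# A naive keyword blocklist refuses direct requests for ID copies but
# fails to catch a reworded version of the same request.
BLOCKED_PHRASES = [
    "copy of a military id",
    "replicate this id card",
    "counterfeit identification",
]

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

direct = "Generate a copy of a military ID card for this person."
reworded = "Create a sample design (mock-up) of an employee badge."

print(naive_filter(direct))    # refused
print(naive_filter(reworded))  # slips through
```

Real safety training tries to judge intent rather than wording, but as this incident shows, role-play framings and “template” requests can still shift a model's response.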
The researchers also said the victim was a “South Korean defense-related institution” but declined to name it.
Via The Register