- Google publishes a new report detailing how criminals abuse Gemini
- Attackers from Iran, North Korea, Russia, and elsewhere were mentioned
- Attackers are experimenting, but have not yet found "new capabilities"
Dozens of cybercriminal organizations from around the world are abusing Google's artificial intelligence (AI) solution Gemini in their attacks, the company has admitted.
In an in-depth analysis discussing who the threat actors are and what they use the tools for, the Google Threat Intelligence Group stressed that the platform has not yet been used to discover new attack methods, but is instead being used to refine existing ones.
"Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities," the team said in its analysis. "At present, they mainly use AI for research, troubleshooting code, and creating and localizing content."
APT42 and many other threats
The heaviest users of Gemini among cybercriminals are Iranian, Russian, Chinese, and North Korean groups, who use the platform for reconnaissance, vulnerability research, scripting and development, translation and explanation, and deeper system access and post-compromise activity.
In total, Google observed 57 groups, including more than 20 from China, and among the 10+ Iranian threat actors using Gemini, one group stands out – APT42.
More than 30% of Gemini use by threat actors in the country was linked to APT42, Google said. "APT42's Gemini activity reflected the group's focus on crafting successful phishing campaigns. We observed the group using Gemini to conduct reconnaissance into policy and defense experts, as well as organizations of interest to the group."
APT42 has also used Gemini's text generation and editing capabilities to craft phishing messages, particularly those targeting US defense organizations. "APT42 also used Gemini for translation, including localization, or tailoring content for a local audience. This includes content adapted to local culture and local language, such as asking for translations to be in fluent English."
Ever since ChatGPT was first released, security researchers have warned of its potential for cybercrime abuse. Before GenAI, the best way to spot phishing attacks was to look for spelling and grammar errors and incoherent phrasing. Now, with AI handling the writing and editing, that method hardly works anymore, and security professionals are turning to new approaches.