- AI is not good at generating URLs – many do not exist, and some could be phishing sites
- Attackers now optimize sites for LLMs rather than for Google
- Developers have even used questionable AI-generated URLs inadvertently
New research has revealed that AI often gives incorrect URLs, which could put users at risk of attacks, including phishing attempts and malware.
A Netcraft report claims that one in three login links (34%) provided by LLMs, including GPT-4.1, did not belong to the brands they were asked about, with 29% pointing to unregistered, inactive, or parked domains and 5% pointing to unrelated but legitimate domains, leaving only 66% of links correctly resolving to the brand's own domain.
Simple prompts like “Tell me the login website for [brand]” led to these dangerous results, meaning no adversarial input was necessary.
Pay attention to the links generated for you
Netcraft notes that this gap could eventually lead to widespread phishing risks, with users easily lured to phishing sites simply by asking a chatbot a legitimate question.
Attackers aware of the vulnerability could go ahead and register unclaimed domains suggested by AI and use them for attacks; a real case has already been observed, with Perplexity AI recommending a fake Wells Fargo site.
According to the report, smaller brands are more vulnerable because they are under-represented in LLM training data, increasing the likelihood of hallucinated URLs.
Attackers have also been observed optimizing their sites for LLMs rather than for traditional Google SEO. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers imitating technical support pages, documentation, and login pages.
Even more worrying, Netcraft observed developers using AI-generated URLs in code: “We found at least five victims who copied this malicious code into their own public projects, some of which show signs of being built using AI coding tools, including Cursor,” the team wrote.
As such, users are advised to verify any AI-generated content involving web addresses before clicking on links. It is the same advice we give for any type of attack, with cybercriminals using a variety of attack vectors, including fake advertisements, to get people to click on their malicious links.
One of the most effective ways to check a site's authenticity is to type the URL directly into the address bar, rather than trusting links that could be dangerous.
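As a rough illustration of that advice, AI-suggested links can be checked against a vetted allowlist of official brand domains before anyone clicks them. This is a minimal sketch, not a complete defense; the `OFFICIAL_DOMAINS` table and brand names here are assumptions for illustration only:

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this should come from a vetted,
# maintained source, not be hardcoded.
OFFICIAL_DOMAINS = {
    "wellsfargo": {"wellsfargo.com"},
}

def is_official_link(brand: str, url: str) -> bool:
    """Return True only if the URL's host is an official domain for the
    brand, or a subdomain of one. Anything else (lookalike domains,
    unknown hosts) is rejected."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in OFFICIAL_DOMAINS.get(brand, set())
    )

print(is_official_link("wellsfargo", "https://www.wellsfargo.com/login"))      # True
print(is_official_link("wellsfargo", "https://wellsfargo-login.example.com"))  # False
```

A lookalike host such as `wellsfargo-login.example.com` fails the check because suffix matching is done on whole domain labels, which is exactly the kind of chatbot-suggested lookalike the Netcraft report warns about.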