Summary
Kaspersky’s AI Research Center reports that cybercriminals are using Large Language Models (LLMs) to mass-produce phishing and scam content. Their goal is to generate fake websites, especially ones built to steal from crypto investors and wallet users. But there is a telltale sign that helps distinguish these sites from legitimate ones.
Experts from Kaspersky’s AI Research Center claim to have observed an increase in cybercriminals’ use of Large Language Models to run large-scale scam and phishing campaigns. They say these websites are created in bulk, and each one is specifically designed to lure investors into a scam. But there’s a catch: such websites often contain distinguishable artifacts, such as AI-specific phrases, which make them easier to spot and avoid. Reportedly, most of these phishing websites target users of cryptocurrency exchanges and wallets.
A big giveaway of these AI-generated sites is the presence of phrases such as “As an AI language model…” and refusals to perform certain tasks, like acting as a search engine or logging into sites; these artifacts have shown up on fake crypto sites targeting KuCoin, Gemini, and Exodus users. Another major giveaway is wording like “While I can’t do exactly what you want, I can try something similar,” which reads as obviously machine-generated. According to Vladislav Tushkanov, threat actors can now churn out large numbers of these scam pages quickly with AI, filling entire sites, from visible text to hidden tags, with these tells.
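The detection idea described above can be shown in a few lines of code. Below is a minimal sketch, not Kaspersky’s actual method: the phrase list and the find_ai_tells helper are hypothetical, assembled from the tells quoted in this article, and a real detector would use a far larger corpus of phrases.

```python
import re

# Illustrative list of AI "tells" reported on phishing pages; not exhaustive.
AI_TELL_PHRASES = [
    "as an ai language model",
    "i can try something similar",
    "i cannot log into websites",
]

def find_ai_tells(page_text: str) -> list[str]:
    """Return any known AI-refusal phrases found in the page text."""
    # Lowercase and collapse whitespace so line breaks and extra
    # spacing inside a phrase don't hide a match.
    normalized = re.sub(r"\s+", " ", page_text.lower())
    return [p for p in AI_TELL_PHRASES if p in normalized]

snippet = "Welcome! As an AI language model, I cannot log into websites."
print(find_ai_tells(snippet))
# ['as an ai language model', 'i cannot log into websites']
```

The same scan can be run over hidden tags and metadata, since the article notes those are also filled with generated text.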
But lately, cybercriminals have started inserting non-standard symbols into these phrases to dodge detection. Tushkanov added that AI-powered scams are evolving rapidly; there are even recorded cases of AI writing malware scripts on its own. Catching AI-made mistakes is one line of defense, but advanced security tools remain a must. To stay safe, always double-check links, type site addresses manually, and use modern security software.
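To illustrate the non-standard-symbol evasion and one simple countermeasure: the sketch below is hypothetical and not part of any Kaspersky tool. It assumes the attacker swaps in visually similar Unicode characters (here, fullwidth Latin letters); Unicode NFKC normalization folds many such characters back to their standard forms before matching.

```python
import unicodedata

def normalize_for_matching(text: str) -> str:
    """Fold non-standard symbols toward plain ASCII before matching.

    NFKC normalization maps many visually similar Unicode characters
    (fullwidth letters, ligatures) to their standard forms. Characters
    it can't fold are simply dropped here; a production detector would
    also map true homoglyphs such as Cyrillic lookalikes.
    """
    folded = unicodedata.normalize("NFKC", text)
    return "".join(c for c in folded if c.isascii()).lower()

# Fullwidth "Ａ" and "Ｉ" defeat a naive substring check but not the
# normalized one.
evasive = "Ａs an ＡＩ language model"
print("as an ai language model" in evasive.lower())                   # False
print("as an ai language model" in normalize_for_matching(evasive))   # True
```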