Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API

Yu Xian, founder of blockchain security firm Slowmist, has flagged an emerging threat he calls "AI code poisoning," in which scammers seed AI tools with malicious code and fraudulent links. In one recent case, a trader lost roughly $2,500 after ChatGPT suggested a fake Solana API. The episode is a wake-up call: AI tools can be manipulated, and users need to verify what these assistants recommend before acting on it.

Yu Xian, founder of Slowmist, a leading blockchain security firm known for protecting crypto ecosystems, has warned users about AI-generated malicious code. His track record makes him a credible voice on the subject: his work focuses on uncovering and preventing threats such as hacks, scams, and emerging risks like AI code poisoning in the blockchain space.

The ChatGPT Blunder

Xian pointed to a recent incident involving OpenAI's chatbot ChatGPT, which apparently suggested a fraudulent Solana API website. The incident surfaced this past week, when a trader using the handle "r_cky0" said he lost about $2,500 in digital assets after seeking ChatGPT's recommendations while working with Pump.fun, a Solana-based memecoin generator.

The fraudulent website ChatGPT suggested harvested the user's personal information, including his private keys. He noted that within about 30 minutes, his wallet had been emptied and the funds were tied to scam-linked addresses.
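For readers unfamiliar with how such a theft works in practice, the key point is that legitimate Solana tooling never requires sending a private key to a third-party web service. The sketch below is written against the widely used @solana/web3.js library; the variable names and flow are illustrative and not taken from the incident. It shows the safe pattern, where keys stay local and only public keys or signed transactions go over the network; any AI-suggested snippet that POSTs a secret key to an external API should be treated as a likely scam.

```typescript
// Minimal sketch, assuming Node 18+ and the @solana/web3.js package.
// The RPC endpoint below is Solana's public mainnet cluster, not anything
// related to the fraudulent site described in the article.
import {
  Connection,
  Keypair,
  clusterApiUrl,
  LAMPORTS_PER_SOL,
} from "@solana/web3.js";

async function main() {
  // A legitimate Solana client talks to a public RPC endpoint and only ever
  // sends public keys or signed transactions over the wire.
  const connection = new Connection(clusterApiUrl("mainnet-beta"), "confirmed");

  // The keypair is generated (or loaded) locally; the secret key never needs
  // to leave this process. An "API" whose docs tell you to POST
  // keypair.secretKey to its own server is a red flag, no matter how
  // plausible the documentation looks.
  const keypair = Keypair.generate();

  const balanceLamports = await connection.getBalance(keypair.publicKey);
  console.log(`Address: ${keypair.publicKey.toBase58()}`);
  console.log(`Balance: ${balanceLamports / LAMPORTS_PER_SOL} SOL`);
}

main().catch(console.error);
```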

Such blunders by AI chatbots are becoming more common; this case echoes earlier incidents in which AI-generated output showed clear signs of containing malicious code.


Anmol Khatiwada
