X users manipulated by ChatGPT bots to visit malicious crypto sites

A ChatGPT-powered botnet that is luring users on X (formerly Twitter) into visiting fake crypto news sites is just the “tip of the iceberg” with regard to AI-driven disinformation, researchers have warned.

The botnet — dubbed ‘Fox8’ due to its links to similarly named crypto websites — was discovered in May by researchers at Indiana University Bloomington. It comprises at least 1,140 accounts designed to share a mix of original tweets, retweeted posts, and images taken from sites outside of X.

It also posts crypto, blockchain, and NFT-related content, engages with influencers, and promotes the suspicious websites.

However, according to researchers, this sprawling network of click-harvesting bots may be just the beginning.

Micah Musser, who has studied the potential for AI-driven disinformation, said, “This is the low-hanging fruit.

“It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things.”

However, despite the botnet’s size and apparent sophistication, the technique used to uncover it was surprisingly simple, reflecting what the researchers describe as its “sloppy” methods.

Researcher Kai-Cheng Yang tweeted, “How did we find them? By searching ‘as an AI language model’ on Twitter!”

ChatGPT produces the full phrase “as an AI language model” when it is asked to generate text that goes against its own policies, such as content likely to be used in scams or general disinformation.
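The detection approach described above amounts to scanning post text for ChatGPT’s self-disclosing refusal boilerplate. A minimal sketch of that idea, assuming a simple list of post strings (this is an illustration, not the researchers’ actual pipeline, and any phrase beyond the one quoted in the article is an assumed example):

```python
# Hypothetical sketch: flag posts containing an LLM's self-disclosing
# refusal boilerplate, as the Fox8 researchers did by searching for
# "as an AI language model".
SELF_DISCLOSING_PHRASES = [
    "as an ai language model",  # phrase cited in the article
]

def flag_suspect_posts(posts):
    """Return the posts whose text contains a known refusal phrase."""
    return [
        post for post in posts
        if any(phrase in post.lower() for phrase in SELF_DISCLOSING_PHRASES)
    ]

# Example with made-up post text:
sample = [
    "Big gains ahead, check out this crypto news site!",
    "As an AI language model, I cannot generate that content.",
]
print(flag_suspect_posts(sample))
```

In practice the researchers ran the equivalent query against X’s search rather than local text, but the filtering logic is the same: a bot that pastes model output verbatim leaks the refusal phrase into its own timeline.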

Read more: Binance says ChatGPT was weaponized to falsely claim it has Communist ties

Despite Fox8’s obvious flaws, the researchers who uncovered it have warned that, if configured correctly, such bots can be far harder to spot and put to far more nefarious uses than simply driving traffic to fake news sites.

Indeed, as detailed by Wired, “A correctly configured ChatGPT-based botnet would be difficult to spot, more capable of duping users, and more effective at gaming the algorithms used to prioritize content on social media.”

Professor Filippo Menczer from Indiana University said of the ChatGPT-powered bot, “It tricks both the platform and the users.

“That’s exactly why these bots are behaving the way they do.” He also told Wired that governments looking to leverage disinformation are likely already developing tools like this.
