The Danger of AI-Powered Bot Accounts Spreading Misinformation on Social Media

For many users, scrolling through social media feeds and notifications is like wading through a cesspool of spam. A new study has identified a disturbing network of 1,140 AI-assisted bot accounts spreading misinformation about cryptocurrency and blockchain on X (formerly known as Twitter). These bot accounts are not only difficult to spot but also dangerous to unsuspecting victims.

The researchers, from Indiana University, discovered that these bot accounts, which used ChatGPT for content generation, closely resembled real human accounts: they had profile photos, bios, and descriptions related to crypto and blockchain, posted stolen images as their own, replied to tweets, and retweeted content.

The bots belonged to a malicious social botnet dubbed “fox8” and operated as a network of centrally controlled accounts. They exhibited similar behavioral patterns: following each other, posting the same links and hashtags, and publishing near-identical content. Generative AI bots have evidently become proficient enough at mimicking human behavior to render traditional bot-detection tools inadequate.
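The coordination signals described above (mutual follows, shared links and hashtags, near-identical posts) can be surfaced with simple set-overlap measures. The following minimal Python sketch is not taken from the study; it flags account pairs whose hashtag sets have high Jaccard similarity, and the sample accounts and the 0.5 threshold are illustrative assumptions.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets: |A intersect B| / |A union B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical per-account hashtag sets harvested from recent tweets.
account_hashtags = {
    "user_a": {"#crypto", "#bitcoin", "#defi"},
    "user_b": {"#crypto", "#bitcoin", "#nft"},
    "user_c": {"#cooking", "#travel"},
}

SUSPICION_THRESHOLD = 0.5  # illustrative cutoff, not from the study

for (u1, h1), (u2, h2) in combinations(account_hashtags.items(), 2):
    score = jaccard(h1, h2)
    if score >= SUSPICION_THRESHOLD:
        print(f"{u1} and {u2} share hashtags (Jaccard = {score:.2f})")
```

At botnet scale, the same pairwise comparison over links, hashtags, and retweet targets yields a similarity graph whose dense clusters are candidates for coordinated accounts.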

While popular bot-detection tools like Botometer struggled to differentiate bot-generated from human-generated content, OpenAI’s own AI classifier showed promise by successfully identifying some bot tweets. This points to the need for more advanced technology to combat the ever-evolving tactics of malicious bots.
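For readers who want to experiment, Botometer (developed at Indiana University’s Observatory on Social Media) ships a Python client. The sketch below follows the client’s documented usage pattern; the credentials are placeholders, and the response fields assume the Botometer v4 format.

```python
import botometer  # pip install botometer

# Placeholder credentials: a RapidAPI key plus Twitter app keys are required.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Display scores range from 0 (human-like) to 5 (bot-like) in Botometer v4.
result = bom.check_account("@example_handle")
print(result["display_scores"]["english"]["overall"])
```

As the study suggests, scores like these were tuned on older, cruder bots, so a low score for an LLM-driven account should not be read as an all-clear.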

The study also found numerous self-revealing tweets among the bot accounts, with 81% of them containing the same apologetic phrase: “I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.” These verbatim refusals suggest that the operators were prompting ChatGPT for content that violates OpenAI’s policies, and that the raw refusal messages were posted without any filtering.

Further analysis revealed that 19% of the remaining tweets used variations of the phrase “As an AI language model,” with 12% specifically stating, “As an AI language model, I cannot browse Twitter or access specific tweets to provide replies.” These patterns provided additional evidence of the bot accounts’ automated nature.
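Because these refusal and disclaimer strings follow rigid templates, even naive substring matching can recover them. The following sketch illustrates that idea and is not the study’s actual method; the phrase list is drawn from the quotes above, and the sample tweets are invented.

```python
# Boilerplate phrases quoted in the study's self-revealing tweets.
SELF_REVEALING_PHRASES = [
    "i'm sorry, but i cannot comply with this request",
    "as an ai language model",
    "i cannot browse twitter or access specific tweets",
]

def is_self_revealing(tweet: str) -> bool:
    """True if the tweet contains a known ChatGPT refusal/disclaimer phrase."""
    text = tweet.lower()
    return any(phrase in text for phrase in SELF_REVEALING_PHRASES)

# Illustrative sample; a real run would iterate over harvested tweets.
tweets = [
    "As an AI language model, I cannot browse Twitter or access specific tweets.",
    "Bitcoin is going to the moon! #crypto",
]

flagged = [t for t in tweets if is_self_revealing(t)]
print(f"{len(flagged)} of {len(tweets)} tweets look machine-generated")
```

Of course, this only catches careless operators; a botnet that strips refusal text before posting leaves no such fingerprint.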

Another significant clue was that 3% of the bot-generated tweets contained links to three suspicious websites: cryptnomics.org, fox8.news, and globaleconomics.news. At first glance these appeared to be ordinary news outlets, but closer examination revealed red flags: all three domains were registered around the same time in February 2023, used a similar WordPress theme, and resolved to the same IP address. Moreover, they featured popups urging visitors to install potentially harmful software.
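The shared-infrastructure red flag, several domains resolving to a single IP address, can be checked with Python’s standard library alone. A minimal sketch using the three domains named above (they may no longer resolve, so failures are handled):

```python
import socket
from collections import defaultdict

# The three suspicious domains identified in the study.
domains = ["cryptnomics.org", "fox8.news", "globaleconomics.news"]

ip_to_domains = defaultdict(list)
for domain in domains:
    try:
        ip = socket.gethostbyname(domain)  # forward DNS lookup
        ip_to_domains[ip].append(domain)
    except socket.gaierror:
        print(f"{domain}: no longer resolves")

for ip, hosts in ip_to_domains.items():
    if len(hosts) > 1:
        print(f"Shared infrastructure: {hosts} all resolve to {ip}")
```

Registration dates and theme reuse take a WHOIS lookup and a glance at page source, but the IP overlap alone is a strong hint that one operator controls all three sites.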

Malicious bot accounts can self-propagate on social media through a variety of techniques: posting links that deliver malware or infected content, exploiting a compromised user’s contacts to spread further, stealing session cookies from users’ browsers, and automating follow requests. These tactics make it crucial for users to remain vigilant and cautious while navigating social media platforms.
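On the defensive side, one lightweight precaution is screening outbound links against a list of known-bad domains before following them. The sketch below is purely illustrative: the blocklist reuses the three domains named earlier, and a real deployment would pull from a maintained threat feed.

```python
import re
from urllib.parse import urlparse

# Illustrative blocklist; in practice this would come from a threat feed.
BLOCKLIST = {"cryptnomics.org", "fox8.news", "globaleconomics.news"}

URL_PATTERN = re.compile(r"https?://\S+")

def flag_suspicious_links(text: str) -> list[str]:
    """Return any URLs in the text whose host is on the blocklist."""
    flagged = []
    for url in URL_PATTERN.findall(text):
        host = urlparse(url).hostname or ""
        if host.removeprefix("www.") in BLOCKLIST:
            flagged.append(url)
    return flagged

tweet = "Breaking news! https://fox8.news/markets is reporting a crash."
print(flag_suspicious_links(tweet))
```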

In conclusion, the rise of AI-powered bot accounts poses a significant threat to the integrity of information on social media. With their ability to mimic human behavior and generate harmful content, these bots can easily deceive unsuspecting users. It is imperative that both individuals and technology companies remain vigilant in detecting and combating these malicious bot networks swiftly and effectively.