- Over 420,000 websites targeted by AkiraBot, an AI-powered spam tool using OpenAI’s GPT-4o-mini model to promote shady SEO services.
- Bypasses CAPTCHA protections (hCAPTCHA, reCAPTCHA, Cloudflare Turnstile) and spam filters using proxy networks and human-like traffic patterns.
- OpenAI API keys disabled following the discovery, but the case underscores the rising threat of AI-generated spam and automation in cybercrime.
A new AI-driven spam campaign has been uncovered, revealing the use of an advanced platform called AkiraBot that has targeted over 420,000 websites globally since September 2024. Designed to exploit website forms and chat systems, AkiraBot pushes suspicious SEO services like Akira and ServicewrapGO through mass-generated messages using OpenAI’s language models. The campaign has successfully delivered spam to at least 80,000 websites.
AkiraBot stands out for its use of OpenAI’s GPT-4o-mini model to craft customized messages tailored to each website’s content. These messages are inserted into contact forms, live chat widgets, and comment sections, primarily on small to mid-sized business websites. Unlike traditional spam tools, AkiraBot evades spam filters and CAPTCHAs by combining realistic traffic patterns with proxy services such as SmartProxy.
Initially known as “Shopbot” and seemingly aimed at Shopify sites, the tool has since broadened its scope to platforms like Wix, GoDaddy, Squarespace, and Re:amaze. While the associated SEO campaigns date back to 2023, researchers believe that dynamic AI-generated content only began surfacing in late 2024, indicating an evolution from static spam strategies to AI-powered automation.
The tool includes a graphical interface for selecting target lists and setting the volume of concurrent spam submissions. AkiraBot’s ability to bypass common CAPTCHA systems—including hCAPTCHA, reCAPTCHA, and Cloudflare Turnstile—makes it particularly dangerous. It logs the outcome of each attempt to a CSV file and reports statistics to a Telegram channel via API, tracking CAPTCHA success rates and proxy efficiency.
In response, OpenAI has disabled the API keys used in this operation. However, the case highlights a growing threat: the use of generative AI to scale malicious web campaigns with precision. The AkiraBot discovery comes alongside news of a separate tool, Xanthorox AI, which is being promoted on cybercrime forums for code generation, malware development, and data analysis—signaling a broader shift in how threat actors are integrating AI into their toolkits.