Analysis of Imitation Gemini Chatbots in Cryptocurrency Fraud

Security researchers have identified a sophisticated campaign utilizing custom AI chatbots to simulate Google's Gemini assistant. This analysis details how threat actors are automating social engineering to promote fraudulent cryptocurrency schemes and provides indicators for detection.

Triage Security Media Team
2 min read

Recent research from Malwarebytes Labs indicates a shift in how threat actors utilize artificial intelligence to conduct financial fraud. A newly identified campaign employs a custom chatbot designed to mimic Google’s Gemini AI assistant. This tool is currently being used to guide users toward purchasing "Google Coin," a non-existent cryptocurrency not affiliated with or planned by Google.

The campaign centers on a presale website that leverages the visual identity of trusted technology brands. While the site itself presents standard indicators of unauthorized activity, the integration of an interactive chatbot represents an evolution in social engineering. The bot acts as an automated sales agent, answering questions about investment potential and guiding users through the payment process.

Automated Social Engineering

According to the analysis by Malwarebytes, the chatbot displays a high degree of professionalism and persistence. Unlike static phishing pages, the bot engages in active dialogue. Stefan Dasic, manager of research and response at Malwarebytes, noted that the bot maintains a strictly controlled persona. It does not break character, consistently validating the legitimacy of the "Google Coin" project and refusing to acknowledge scenarios where the project might be fraudulent.

The bot provides specific, albeit fabricated, financial projections to encourage investment. In one instance observed by researchers, the bot claimed that a $395 investment would grow to $2,755 upon listing, a roughly sevenfold return that no legitimate market can guarantee.

The infrastructure supporting the bot is designed to borrow credibility. The site mimics Google’s design standards, including the use of the "G" logo and a professional user interface. It also features a "Trusted By Industry" banner displaying the logos of major entities such as OpenAI, Binance, Squarespace, Coinbase, and SpaceX. None of these organizations are connected to the campaign.

Scaling Through Automation

This development suggests that threat actors are moving away from manual social engineering toward scalable, automated solutions. Historically, building the trust needed to secure a fraudulent transaction required human interaction, limiting the number of targets a threat actor could engage simultaneously.

By deploying AI chatbots, operators can remove this bottleneck. A single instance can engage hundreds of visitors simultaneously, 24 hours a day, delivering consistent messaging. The bot can also escalate complex interactions to human operators if necessary to finalize a transaction.

Data from Chainalysis, cited in the report, supports this trend. Approximately 60% of funds flowing into illicit cryptocurrency wallets are now associated with the use of AI tools.

Indicators of Deception

To protect users and assets, security teams and individuals should be aware of specific indicators associated with this campaign:

  • Imitation of Known AI Brands: A chatbot hosted on a third-party cryptocurrency site that claims to be a major AI assistant (such as Gemini, ChatGPT, or Copilot) is a strong indicator of unauthorized activity.

  • Specific Return Promises: Legitimate investment products do not promise specific future prices. The Malwarebytes analysis notes that any platform projecting exact returns is likely fraudulent.

  • Evasion of Regulatory Questions: Automated agents in these campaigns often refuse to answer questions regarding the legal entity behind the platform or specific regulatory details.

  • Urgency: The use of "presale" timers and tiered bonus structures (e.g., promising higher bonuses for larger immediate purchases) is a standard tactic to pressure decision-making.
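The indicators above lend themselves to simple automated triage. The sketch below is purely illustrative: the keyword lists, the return-promise regex, and the scoring weights are our own assumptions for demonstration, not detection logic from the Malwarebytes report.

```python
import re

# Assumed keyword lists for illustration only, derived from the
# indicators described in this article.
AI_BRANDS = ["gemini", "chatgpt", "copilot"]
URGENCY_TERMS = ["presale", "bonus", "limited time", "act now"]

# Matches a promised dollar-to-dollar conversion,
# e.g. "$395 will grow to $2,755".
RETURN_PROMISE = re.compile(
    r"\$[\d,]+.{0,40}?(grow|turn|rise) to \$[\d,]+", re.IGNORECASE
)

def scam_indicator_score(text: str) -> int:
    """Count how many of the article's red flags appear in a page's text."""
    t = text.lower()
    score = 0
    if any(brand in t for brand in AI_BRANDS):
        score += 1  # imitation of a known AI brand
    if RETURN_PROMISE.search(text):
        score += 1  # specific return promise
    if any(term in t for term in URGENCY_TERMS):
        score += 1  # urgency / presale pressure
    return score

sample = "Gemini says your $395 presale buy will grow to $2,755 at listing!"
print(scam_indicator_score(sample))  # all three heuristics fire: 3
```

A real deployment would need far richer signals (domain age, logo detection, regulatory evasion in chat transcripts), but even a crude score like this can flag pages for manual review.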

Users encountering these signals should verify the claims through official channels and avoid transferring funds.