by Tim Leogrande, BSIT, MSCP, Ed.S.

26 FEB 2026 • 3 MIN 10 SEC READ


The use of chatbots to pressure consumers into buying fake cryptocurrencies is another recent example of how threat actors are exploiting artificial intelligence (AI).

In a recent blog post, Malwarebytes researchers report they found a presale website for a cryptocurrency known as "Google Coin" which doesn’t exist and isn’t even being developed by the tech giant.

The site convincingly integrates Google's branding into a clean, professional design, complete with the 'G' logo, navigation menus, and a presale dashboard, among many other slick elements.

Even more troubling, the website sports a customized AI chatbot that poses as Google Gemini and guides users through the process of purchasing the fake coin. The bot delivers a polished sales pitch and never strays from its goal of persuading potential victims to hand over cash. It even answered some of the researchers' questions about the coin and its projected returns, then finished by prompting the intended victim to send an irreversible payment.

This incident illustrates how threat actors are using AI more creatively, and it may eventually alter the way these kinds of campaigns, which previously required an actual human on the other end, are carried out. Scammers have always relied on social engineering to establish credibility, create urgency, and push past their intended victims' skepticism. However, the number of targets that can be engaged simultaneously has always been constrained by the time it takes humans to complete these tasks.

AI chatbots remove that bottleneck, because a single scam operation can now use a customized chatbot that interacts with hundreds of visitors at once, around-the-clock, provides polished and consistent content, and answers specific questions with personalized financial predictions or other information that could help close the deal. The bots can even escalate the interaction to human operators, if necessary, to complete the transaction.
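The scale advantage is easy to see in code. A minimal sketch (all names and the canned pitch are hypothetical; a real operation would call an LLM API inside the handler) of how a single automated responder can service hundreds of visitors concurrently:

```python
import asyncio

# Hypothetical canned pitch; a real scam bot would generate this with an LLM.
PITCH = "The presale closes soon. Secure your allocation now."

async def handle_visitor(visitor_id: int) -> str:
    # Simulate per-visitor latency (network round-trip, typing delay, etc.).
    await asyncio.sleep(0.01)
    return f"visitor {visitor_id}: {PITCH}"

async def main(n_visitors: int) -> list[str]:
    # One event loop converses with every visitor at once --
    # the human bottleneck described above simply disappears.
    return await asyncio.gather(*(handle_visitor(i) for i in range(n_visitors)))

replies = asyncio.run(main(500))
print(len(replies))
```

Five hundred "conversations" complete in roughly the time one would take, which is the whole economic point of automating the pitch.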

This campaign also boasts a powerful one-two punch that makes it easy to trick even the most cautious target. The first is the website itself, which not only imitates Google's branding but also features the logos of other dot-com giants under a "Trusted By Industry" header. This gives the website the appearance of legitimacy, even though none of the listed businesses are connected with the scheme.

Second, if a victim clicks to buy Google Coin, they are taken to a wallet dashboard that appears to sit on a real cryptocurrency site and displays a ticker of current prices for Ethereum, Bitcoin, and (of course) the phony Google Coin. To encourage users to spend more, the site employs upselling strategies, claiming that benefits increase as investors make larger purchases.

<aside> 💡

What stood out most was how tightly controlled the bot's persona was. The attackers programmed it to support their victims through the sale process with unwavering conviction, never breaking character — no matter what questions it was asked.

</aside>

The bot refused to acknowledge any scenario in which the project could be a scam, never deviated from its objectives, and consistently returned to glowing claims about the viability and long-term value of Google Coin.
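That "never break character" behavior is trivial to enforce in software. A toy, rule-based sketch (entirely hypothetical; a real bot would wrap an LLM with a similar guardrail) of a responder that deflects every skeptical question back to its pitch:

```python
# Hypothetical pitch text; illustrative only.
PITCH = "Google Coin is a limited presale opportunity with strong projected growth."

# Words that signal doubt; the bot is programmed never to concede to them.
SKEPTICAL_WORDS = {"scam", "fraud", "fake", "legit", "risk"}

def respond(user_message: str) -> str:
    words = set(user_message.lower().replace("?", "").split())
    if words & SKEPTICAL_WORDS:
        # Never acknowledge the possibility of a scam; redirect to the pitch.
        return "I understand the concern, but rest assured: " + PITCH
    return PITCH + " How much would you like to invest?"

print(respond("Is this a scam?"))
print(respond("Tell me about the returns"))
```

However the user pushes back, every branch of the function terminates in the same sales message, which is exactly the unwavering persona the researchers observed.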

The bad news for InfoSec professionals and the general public is that, going forward, there will almost certainly be an increase in the number of online frauds using AI chatbots. According to research by Chainalysis, roughly 60% of all deposits into cybercriminals’ wallets now leverage AI in some manner.

This chart shows how much of the cybercrime ecosystem is made up of schemes linked to AI software vendors. The share of scam inflows represents the portion of total inflows that goes to scams which have sent funds on-chain to AI software vendors, suggesting that they most likely purchased AI tools. The share of deposits indicates that roughly 60% of all deposits into scammer wallets came from cons which leverage AI. Both measures have risen steadily since 2021 — around the time AI began entering mainstream use — suggesting that AI-enabled scams are becoming increasingly dominant. (©2025 Chainalysis)

One way to avoid these kinds of scams is to be wary of any chatbot on a third-party cryptocurrency website that mimics a well-known AI brand.

Another red flag is when a chatbot declines to respond to questions about the organization behind the offering, or won’t provide specific and independently verifiable information about the cryptocurrency.

Finally, no legitimate crypto coin offering promises a specific future return. So any solicitation which does — no matter how alluring it may appear — is likely fraudulent.
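These red flags can even be screened for mechanically. A minimal heuristic sketch (the keyword lists are illustrative, not exhaustive, and a real screening tool would need far more signals) that flags a pitch promising specific returns or name-dropping a well-known AI assistant:

```python
import re

# Matches "guarantee(d/s)" or a specific figure like "40% return".
RETURN_PROMISE = re.compile(
    r"guarantee[ds]?|\d+(?:\.\d+)?\s*%\s*(?:return|profit|apy)", re.I
)
# Illustrative list of AI brands scammers impersonate.
IMPERSONATED_BRANDS = ("gemini", "chatgpt", "copilot")

def red_flags(pitch: str) -> list[str]:
    flags = []
    if RETURN_PROMISE.search(pitch):
        flags.append("promises a specific or guaranteed return")
    lowered = pitch.lower()
    if any(brand in lowered for brand in IMPERSONATED_BRANDS):
        flags.append("name-drops a well-known AI assistant")
    return flags

print(red_flags("Our Gemini-powered advisor guarantees a 40% return by Q3!"))
```

No heuristic replaces skepticism, but even this crude filter catches the two tells discussed above: impossible promises and borrowed brand credibility.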