Cybercriminals are now misusing artificial intelligence tools to promote cryptocurrency scams. In a recent case, scammers created a fake investment platform that used a chatbot pretending to be Google’s Gemini AI assistant. The purpose was to convince people to invest in a fake digital currency called “Google Coin.” The entire setup was designed to look professional and trustworthy.

The scam was uncovered by researchers at Malwarebytes Labs during an investigation into suspicious crypto websites. They found a live presale page promoting “Google Coin” with an integrated AI chatbot. The chatbot introduced itself as “Gemini, your AI assistant for the Google Coin platform,” and responded to user questions in a detailed, confident manner.
Visitors interacting with the chatbot were given explanations of how the coin supposedly worked. The chatbot also provided projected profit estimates to make the investment look attractive, and it manufactured urgency by claiming the presale opportunity was limited, a tactic meant to pressure users into quick decisions.
The website’s design closely resembled legitimate technology platforms. It used clean layouts and familiar branding elements to reduce suspicion, and a visible “online” status icon next to the chatbot made it appear to be real customer support. These small details helped make the scam more convincing.
Security experts have confirmed that Google has never announced or launched a cryptocurrency called “Google Coin.” The name was entirely fabricated by the scammers. By using a trusted brand name along with an AI chatbot interface, criminals increased the likelihood that visitors would believe the scheme. This misuse of branding played a key role in the deception.
Once victims agreed to invest, they were instructed to send cryptocurrency payments to wallet addresses listed on the website. Because cryptocurrency transactions are generally irreversible, funds transferred to the fraudsters’ wallets could not be recovered, which made the scam especially damaging for victims.
Cybersecurity news platforms such as Dark Reading also reported on the incident. Analysts noted that generative AI allows scammers to automate persuasive conversations. Instead of manually responding to each person, the chatbot could handle multiple users at the same time. This increases the scale and efficiency of such fraud operations.
Experts are advising users to independently verify any investment opportunity, especially those linked to well-known brands. Official announcements should always be checked through trusted company websites and reliable sources. This case shows how advanced AI tools can be misused for financial fraud. Staying cautious and double-checking claims remains essential in the digital age.
Stay alert, and keep your security measures updated!
Follow cybersecurity88 on X and LinkedIn for the latest cybersecurity news.


