Cybercriminals have found a new way to launch phishing attacks: abusing v0, a generative AI tool created by the web development platform Vercel. The tool was designed to help developers build websites from plain-language prompts. But attackers are now using it to spin up fake login pages that trick users into handing over their passwords.
According to a recent report by Okta's threat intelligence team, attackers have begun using v0 to generate realistic-looking login pages for popular platforms, including Okta itself. In one case, the attackers simply prompted the tool to "build a copy of the website login.okta.com," and within seconds the AI produced a fake but functional version of the login page. Because this method requires no advanced skills, it dramatically lowers the barrier to entry for phishing.
The phishing websites are convincing enough to fool users who miss small differences in URLs or page design. And because generation is so fast (under 30 seconds per page), attackers can produce many distinct fake pages in a very short time. Okta has not confirmed whether any user credentials have actually been stolen through these AI-generated pages, but the risk is very real.
Even more concerning is that clones of the v0 AI tool are now being shared on platforms like GitHub. That means even if Vercel takes down the original tool, attackers can still use these copied versions to continue making phishing sites. The spread of these clones shows how hard it is to contain this kind of threat once the technology is out there.
Vercel has responded to the situation by removing the known phishing pages from its platform and is now working with Okta to improve how abuse is reported. While this is a good step, the bigger issue remains: AI tools are now making phishing faster, easier, and more scalable than ever before.
Security experts at Okta warn that traditional methods of spotting phishing attacks, such as looking for misspelled URLs or shoddy design, are losing their value. AI-generated pages are clean, accurate, and almost identical to the real thing, making it harder than ever for the average person to tell they are being tricked.
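One reason visual inspection fails is that a lookalike domain can be pixel-for-pixel identical to the real one. The sketch below (using a hypothetical lookalike domain, not one observed in these attacks) swaps the Latin "o" in "login" for the visually identical Greek omicron; only a comparison of the underlying code points, or the domain's ASCII (punycode) form, exposes the difference:

```python
# Hypothetical illustration: a homoglyph domain that renders identically
# to the real one but is a completely different hostname.
real = "login.okta.com"
lookalike = "l\u03bfgin.okta.com"  # Greek omicron (U+03BF) in place of "o"

# The strings render the same on screen but are not equal.
print(real == lookalike)  # False: different Unicode code points

# Encoding to the IDNA (punycode) wire format reveals the fake:
# the first label becomes an "xn--" ASCII label instead of "login".
print(lookalike.encode("idna"))
```

Browsers perform this punycode conversion internally, which is why checking the address bar's actual hostname (or relying on a password manager that matches exact domains) is more reliable than eyeballing the page.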
Because of this, Okta is strongly recommending that organizations shift toward passwordless authentication: biometrics, hardware security keys, or one-time passcodes instead of standard username-and-password logins. With no reusable password to capture, these methods are much harder to fake or steal, and they make phishing attempts far more difficult to pull off.
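To make the one-time-passcode idea concrete, here is a minimal sketch of the standard TOTP scheme (RFC 6238, built on the HOTP algorithm from RFC 4226) using only Python's standard library. This is an illustration of how such codes are derived, not a drop-in replacement for a vetted authentication library; note also that plain OTPs can still be relayed by a live phishing proxy, which is why origin-bound methods such as security keys are considered the stronger defense.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, at=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t)


# RFC test secret; at Unix time 59 the 30-second counter is 1.
print(totp(b"12345678901234567890", at=59))  # "287082" (RFC test vector)
```

Because each code is valid for only one short window, a stolen password alone is no longer enough; an attacker would have to capture and replay the code within seconds.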
This incident is a clear example of how generative AI can be misused. What was meant to be a helpful tool for developers is now being turned into a weapon for cybercriminals. And since these tools are getting more advanced every day, the risks will only grow unless companies take strong action now.
It’s important for tech companies like Vercel to build better safeguards into their AI products. Tools that generate web content need limits and monitoring to prevent misuse. At the same time, businesses and users must stay alert and rethink how they protect sensitive information.
In the age of AI, phishing isn’t just a shady email with bad grammar anymore. It’s fast, smart, and powered by technology that anyone can access. If we want to stay ahead of these new threats, we have to adapt just as quickly.
Now more than ever, companies need to review how they handle user authentication and take browser-based security seriously. And users must learn to rely on secure login methods rather than assuming that a professional-looking website is always the real one.
Stay alert, and keep your security measures updated!