Cybersecurity researchers have discovered multiple vulnerabilities in ChatGPT that could allow hackers to steal private user data. These flaws were found in features that connect ChatGPT to other apps and services. Experts say the weaknesses expose how integrated AI systems can unintentionally leak sensitive information. The findings have raised serious concerns among privacy and security professionals worldwide.

The most severe vulnerability, named “ShadowLeak,” was uncovered by researchers at Radware. It affected ChatGPT’s “Deep Research” feature, which links with email and browsing tools. Hackers could exploit it by sending a malicious email that automatically triggered data theft. Shockingly, the attack required no user action no clicks, no downloads.

Researchers explained that the data theft occurred entirely on OpenAI’s cloud servers rather than a user’s personal device. Because of this, traditional security systems could not detect or stop the breach. Victims would remain unaware while attackers quietly extracted sensitive content from email accounts and chat sessions. This made ShadowLeak a particularly stealthy and dangerous flaw.

In another independent report, cybersecurity firm Tenable Research identified seven additional ChatGPT vulnerabilities. These included issues like indirect prompt injection, data leaks through persistent memory, and methods to bypass OpenAI’s safety filters. Some of these flaws could even be chained together, creating powerful tools for large-scale data theft.

Experts warn that such exploits are especially dangerous because they often require little or no user interaction. Hidden instructions placed inside documents or web pages could trick ChatGPT into revealing private details. Sensitive business data, client information, or personal conversations could all be exposed without any visible warning signs.

For companies using ChatGPT in office environments or customer-facing systems, the risks are significant. If connected to email or internal databases, a successful exploit could leak confidential corporate files or trade secrets. Cybersecurity teams are urging organisations to review how AI tools are deployed and to disable unnecessary integrations immediately.

OpenAI has confirmed that it patched the ShadowLeak vulnerability soon after its disclosure. The company said it is cooperating with independent researchers to fix the remaining issues. While some flaws have already been resolved, experts caution that others may still exist. OpenAI has promised to improve transparency and strengthen its future security testing.

These incidents highlight a growing challenge in the age of artificial intelligence. As AI models like ChatGPT become more advanced and interconnected, their attack surface also expands. Security experts believe that companies must treat AI systems as high-risk software, requiring constant monitoring and protection. The ChatGPT vulnerabilities serve as a powerful reminder that innovation and security must evolve together.

Stay alert, and keep your security measures updated!

Source: Follow cybersecurity88 on X and LinkedIn for the latest cybersecurity news