AI has revolutionized many sectors, and cybersecurity is no exception. As cyber threats grow more frequent and breaches more costly, AI has made its way into the cybersecurity professional's arsenal. It is now used in ways that were previously unimaginable, from enhancing threat detection to shortening response times after a breach.

At the same time, hackers have adopted AI to launch more complex and targeted attacks. This has taken their cat-and-mouse game to another level, with both sides continuously improving their tactics. In this new era, the question remains: who's winning this battle, AI or hackers? Let's discuss that in this blog.

Rise of AI in Cybersecurity

In 2024, artificial intelligence (AI) played an important role in cybersecurity as a double-edged sword. While AI enables security professionals to fight cybercrime more effectively, it also offers cybercriminals unprecedented new capabilities. The AI security market is estimated to reach USD 133.8 billion by 2030, highlighting the central role AI now plays in modernizing cybersecurity efforts.

Artificial intelligence's primary strength is analyzing large volumes of data quickly and accurately. Machine learning, a subset of AI, can find anomalies and patterns that human analysts would previously have missed. A study by the Ponemon Institute bears this out: 70% of cybersecurity professionals agreed that AI is effective in detecting threats that would have been missed by humans.

While AI is proving its worth in the cybersecurity sector, its adoption within organizations is still in the early stages. According to a recent survey, 53% of security teams reported their organization is still in the early stages of integrating AI-based cybersecurity tools. Moreover, AI tends to be employed in specific areas, such as threat detection and vulnerability management, rather than across all cybersecurity operations.

Last year, Gartner reported that AI is being employed by various security vendors to detect threats faster and improve the efficiency of security operations. The key benefit here is the reduction in false positives, a long-standing challenge for traditional security tools. AI is not only reshaping how cybersecurity professionals defend systems; it is also democratizing hacking, enabling even low-skilled actors to launch highly targeted and effective attacks.

Related Reading: The Impact of AI on Modern Cybersecurity Solution

Hackers and their use of AI

Like any new tool in cybersecurity, AI quickly became a double-edged sword. Cybercriminals and nation-state actors started using AI offensively to gain an edge over cybersecurity professionals. Here's how AI is being leveraged by cybercriminals:

1. Malware Development

Traditional security solutions such as EDR (Endpoint Detection and Response) depend on anomaly detection and threat intelligence to flag activity as a malware intrusion. However, an experimental project last year, dubbed BlackMamba, demonstrated how far AI-generated malware has advanced: it was able to evade top EDRs used by many organizations. The project built a command and control (C2) channel that exploits a large language model to generate polymorphic code at runtime, modifying itself on each execution to avoid detection.

As of now, no AI-generated polymorphic malware has been detected in the wild, but it is only a matter of time before nation-state actors like the Lazarus Group catch up.

In 2024, HP discovered a large-scale email campaign delivering a standard malware payload via an AI-generated dropper; the entire dropper is suspected to have been written with ChatGPT. Since ChatGPT's public launch in late 2022, neural-network code synthesis has been freely available to everyone, including threat actors.

Public GitHub repositories, such as this proof of concept (https://github.com/Azure/counterfit/wiki/Abusing-ML-model-file-formats-to-create-malware-on-AI-systems:-A-proof-of-concept), demonstrate how AI systems can be abused to create malware, lowering the barrier even further.

2. Social Engineering

Social engineering is the art of manipulating individuals by exploiting human error to gain private information, access, and more. Hackers have been using social engineering techniques, such as pretexting, baiting, and impersonation, since long before AI. With the help of AI, they are now using these techniques far more effectively.

AI algorithms are being used to scrape publicly available data from LinkedIn, Instagram, and other sources to craft highly targeted attacks. Some threat actors have also been found using data dumps from underground forums. According to one study, since the introduction of ChatGPT there has been a 1,265% increase in phishing emails and a 967% increase in credential phishing.

Crafting a convincing phishing email used to be challenging for non-native English speakers, such as hacking groups from the Middle East, Russia, and China. With ChatGPT and FraudGPT, a bot made exclusively for hackers and scammers, this task has become much easier. Last year, an IBM report found that the click-through rate for AI-generated phishing emails was 11%.

3. Deepfakes

One of the most disturbing applications of AI by hackers is the creation of realistic audio and video content using deepfakes. For instance, in February of this year, a finance worker at a multinational company was tricked into paying USD 25 million to a fraudster who used a deepfake to pose as the company's CFO (chief financial officer) on a conference call. Similar incidents are happening more frequently: Wiz employees were targeted with a deepfake of the Wiz CEO, and in another instance a Microsoft solutions consultant was deceived by a deepfake Google support call.

Gartner estimated that around 20% of cyberattacks last year included deepfakes as part of their campaigns, and another report found a 3,000% increase in deepfake attempts. The main thing holding hackers back is the computing power these attacks require.

AI in Cybersecurity (Defense)

AI is being integrated into cybersecurity across threat detection, vulnerability management, and incident response. A survey by The Economist Intelligence Unit found that 48.9% of global executives and security experts view AI and machine learning as a tool to combat security threats, highlighting the growing recognition of AI's role in improving cybersecurity posture. Here's how AI is being used by cybersecurity professionals.

1. Anomaly Detection

AI is very effective at detecting anomalies in user behavior and network traffic. Traditional security methods often rely on known signatures of malicious activity, but AI allows security professionals to identify new and previously unseen threats, such as insider threats or advanced persistent threats (APTs) that might go undetected by traditional methods. Pillsbury's report highlighted that 44% of global organizations already leverage AI to detect security intrusions.

Behavioral analysis is another key area where AI helps. By learning the normal behavior of users and devices on the network, AI can detect even small deviations from the regular pattern. For example, if an employee logs in at unusual hours and uploads sensitive data, an AI-powered threat detection system can raise an alert. One such service is AWS GuardDuty, a threat detection service from AWS that analyzes various data sources to detect a breach, including unusual spikes in API usage, unusual network traffic patterns, and unauthorized access.
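To make the idea of behavioral baselining concrete, here is a deliberately minimal sketch that flags logins whose hour deviates sharply from a user's learned pattern. The data, function names, and z-score threshold are all invented for illustration; real products model many more features (device, geolocation, data volume) with trained ML models rather than a simple statistic.

```python
from statistics import mean, stdev

def fit_baseline(login_hours):
    """Learn the 'normal' login-hour distribution for one user."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose z-score exceeds the threshold."""
    mu, sigma = baseline
    return abs(hour - mu) / sigma > threshold

# Hypothetical history: an employee who usually logs in 9-11 am.
history = [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]
baseline = fit_baseline(history)

print(is_anomalous(10, baseline))  # in-pattern login -> False
print(is_anomalous(3, baseline))   # 3 am login -> True, raise an alert
```

The same deviation-from-baseline logic generalizes to upload volumes, API call rates, or network traffic; only the features and the model change.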

2. Real-Time Automation

AI-driven automation systems are rapidly being adopted by organizations because they detect, contain, and mitigate threats faster than humans can. One of their most powerful advantages is the ability to reduce the impact of an attack: when an AI system detects signs of a breach or intrusion, it can raise an alert and automatically trigger predefined response actions, such as isolating the affected systems from the rest of the network or blocking the offending IP address.

A real-world example of this kind of automation is PayPal's strategy for combating fraudulent transactions. PayPal employs machine learning to analyze millions of transactions in real time, scanning for red flags such as unusual spending patterns and mismatched geographical locations; when suspicion is raised, the transaction is blocked. AI-driven automation not only reduces the response time to cyberattacks but also limits the damage caused by a breach.
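The detect-then-act loop described above can be sketched as a toy rule engine. Everything here, the field names, thresholds, and flag list, is an assumption made for illustration; production fraud systems score transactions with trained models, not hand-written rules.

```python
BLOCK_THRESHOLD = 2  # auto-block when two or more red flags fire

def red_flags(tx, profile):
    """Collect simple red flags for one transaction against a user profile."""
    flags = []
    if tx["amount"] > 5 * profile["avg_amount"]:
        flags.append("unusual_spend")
    if tx["country"] != profile["home_country"]:
        flags.append("geo_mismatch")
    if tx["hour"] not in profile["active_hours"]:
        flags.append("odd_hour")
    return flags

def respond(tx, profile):
    """Automated response: block the transaction when enough flags fire."""
    flags = red_flags(tx, profile)
    action = "block" if len(flags) >= BLOCK_THRESHOLD else "allow"
    return action, flags

# Hypothetical user profile and two incoming transactions.
profile = {"avg_amount": 40.0, "home_country": "US",
           "active_hours": set(range(8, 23))}

print(respond({"amount": 35.0, "country": "US", "hour": 12}, profile))
print(respond({"amount": 900.0, "country": "RU", "hour": 3}, profile))
```

The key design point is that the response (block, isolate, alert) is predefined and triggered automatically, which is what collapses response time from hours to milliseconds.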

3. Malware and Phishing Detection

AI-based systems outperform traditional signature-based malware detection, achieving success rates of 80% to 90%. Using machine-learning algorithms, they analyze malware based on patterns and behaviors rather than relying on signatures alone. This approach allows cybersecurity professionals not only to detect malware but also to compare it with known samples, revealing similarities and connections between threat actors.

AI is also very effective in combating phishing attacks. Traditional anti-phishing solutions rely on predefined rules or keywords to filter emails, but this approach is limited now that hackers use generative AI for phishing. By learning from previous campaigns, AI can identify small changes in email addresses or crafted links that traditional anti-phishing solutions might miss.
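One small-change signal is easy to demonstrate: sender domains that sit one character away from a trusted brand. The sketch below uses string similarity to catch such lookalikes; the trusted-domain list and the 0.85 threshold are assumptions for illustration, and real anti-phishing stacks combine this kind of signal with many others (headers, link reputation, content models).

```python
import difflib

# Hypothetical allowlist of brands this filter protects.
TRUSTED = ["paypal.com", "microsoft.com", "google.com"]

def lookalike_of(domain, trusted=TRUSTED, threshold=0.85):
    """Return the trusted domain this sender imitates, or None."""
    if domain in trusted:
        return None  # exact match is the legitimate domain itself
    for good in trusted:
        ratio = difflib.SequenceMatcher(None, domain, good).ratio()
        if ratio >= threshold:
            return good  # close-but-not-equal: likely a lookalike
    return None

print(lookalike_of("paypa1.com"))   # digit-for-letter swap -> flagged
print(lookalike_of("example.org"))  # unrelated domain -> None
```

A learned model extends the same idea beyond edit distance, picking up homoglyphs, added subdomains, and URL patterns seen in earlier campaigns.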

Related Reading: Top 5 LLMs for Cybersecurity Use Cases

Who's Winning the Battle?

The battle between AI and hackers is still raging, and it's hard to declare a winner. AI is a double-edged sword: while it provides security professionals with powerful tools to detect and respond to cyber threats, it also equips hackers with new tools like WormGPT and FraudGPT to launch far more sophisticated attacks.

On the defensive side, AI has improved defenders' ability to detect anomalies and automate repetitive tasks. However, as AI tools become more accessible, hackers are using them to automate cyberattacks, craft entire droppers with generative AI, and run highly sophisticated phishing scams. Attacks carried out with AI are much harder to detect and defend against.

Despite the advantages AI offers to hackers, it has also become an irreplaceable tool for security professionals. A recent study found that AI boosts an ethical hacker's productivity and effectiveness by up to 40%, mainly through the automation of routine tasks like reconnaissance and scanning, freeing them to focus on pentesting and security audits.

More than 64% of ethical hackers incorporate AI into their security workflows, with OpenAI's ChatGPT the most popular (used by 98% of respondents in the survey), followed by Google's Bard and Microsoft Bing at 40%. While threat actors use AI to create chaos, security professionals are using the same technology to stay one step ahead.

As AI's capabilities continue to grow, it's clear that both sides will continue to innovate and adapt. Hackers will keep refining their use of AI to create more sophisticated attacks, while defenders will improve their AI-powered tools to counter those threats. Ultimately, AI may level the playing field between hackers and defenders.

Conclusion

In conclusion, the ongoing battle between AI and hackers is far from over, and as AI matures, it will shape the future of cybersecurity in ways we can't yet predict. But one thing everyone should understand: AI is not just a tool for defenders; it is also an offensive weapon for cybercriminals.