The AI Cybersecurity Arms Race: Do Attackers or Defenders Have the Upper Hand?

Explore the AI arms race between hackers and defenders in cybersecurity: how each side is using AI, who has the edge, and what it means for the future.

We are in an intriguing era for AI in cybersecurity, where ethical defenders and malicious attackers are both leveraging artificial intelligence for opposing ends.

As cybersecurity teams harness AI to protect against increasingly sophisticated cyber threats, cyber criminals simultaneously exploit AI’s powerful capabilities to launch more aggressive and complex attacks, sparking an ongoing arms race between attackers and defenders.

But who actually has the upper hand in this incredibly volatile time?

This blog will explore how both attackers and defenders are using AI tools to enhance their existing strategies, and who actually has the edge in this race to harness increasingly complex technology.

Attackers’ use of AI

Cyber criminals frequently employ AI in their attacks, most notably using generative AI to carry out phishing, vishing, or deepfake scams.

AI technology is also used to expedite and automate cyberattacks themselves, enabling more aggressive attack strategies designed to overwhelm and exhaust victims.

Here are some of the key ways criminals are exploiting AI to bolster their cyberattacks:

AI-Generated Deepfakes, Phishing and Vishing

By now, you’ve seen how convincing generative AI tools can be at creating lifelike videos, audio clips, and text. Most of these tools are publicly accessible, meaning they can be harnessed for malicious intent just as easily as they can be used to generate a video of bunnies jumping on a trampoline.

Cybercriminals weaponise the generative capacity of AI to create convincing ‘deepfakes’ (videos in which a person’s face or body is digitally altered to impersonate someone else, often used to spread false information) to target businesses and manipulate employees into disclosing sensitive data.

Password Cracking

Hackers can also use AI to accelerate password cracking. Using machine learning techniques, cybercriminals can train models on leaked credential datasets to learn common password patterns and predict likely combinations.

This is also why good password hygiene is critical for businesses, and why organisations should enforce stronger password policies that require long, unpredictable passwords containing special characters, numbers, and upper-case letters.
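To make that concrete, below is a minimal Python sketch of an automated password policy check. The length threshold and blocklist entries are illustrative assumptions rather than a recommended standard; real policies should follow current best-practice guidance.

```python
import re

# Illustrative policy values; real thresholds should follow current
# best-practice guidance rather than these example numbers.
MIN_LENGTH = 12
COMMON_PATTERNS = ["password", "123456", "qwerty", "summer"]

def check_password_policy(password: str) -> list[str]:
    """Return a list of policy violations (empty means the password passes)."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[A-Z]", password):
        violations.append("no upper-case letter")
    if not re.search(r"[0-9]", password):
        violations.append("no number")
    if not re.search(r"[^A-Za-z0-9]", password):
        violations.append("no special character")
    lowered = password.lower()
    if any(pattern in lowered for pattern in COMMON_PATTERNS):
        violations.append("contains a common, easily predicted pattern")
    return violations

if __name__ == "__main__":
    for candidate in ["Summer2024!", "x7#Vq!m2Lr$9pZ"]:
        print(candidate, "->", check_password_policy(candidate) or "OK")
```

Notice that ‘Summer2024!’ satisfies every character-class rule yet still fails, because seasonal words are exactly the predictable patterns AI-assisted crackers try first; blocklist-style checks matter as much as complexity rules.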

Data Breaches and Data Mining

Hackers can also use AI to identify common, existing vulnerabilities through automated code scanning and machine learning. Once inside a network, AI can also expedite the harvesting of information from an organisation’s internal infrastructure, potentially causing critical data breaches at unprecedented speed.

Defenders’ use of AI

Naturally, as cybercriminals harness AI to enhance their attacks, cybersecurity teams equally employ AI to strengthen threat detection and response. Combined with human oversight from security professionals, AI can be an excellent addition to a business security strategy, automating repetitive and enumeration-based tasks so that experts can focus on high-level, complex threats.

Current integration of AI models in security processes

Cybersecurity professionals are integrating AI security tools and automation into their defensive strategies to enhance existing security measures. Whether applied to malware detection, penetration testing automation, or intrusion detection, AI is proving to be an essential component, offering continuous and resilient defences against cyberattacks.

Here are some of the main ways security teams are currently implementing AI in the automation arms race:

Proactive Threat Hunting and Anomaly Detection

AI-driven threat intelligence enables proactive, real-time detection of threats before they escalate into full-scale attacks. Rather than reacting to known incidents, advanced algorithms learn normal behavioural patterns from training data and flag system anomalies and other suspicious activity.

Threat intelligence allows security teams to detect anomalies that may indicate insider threats, credential misuse, or early-stage cyber attacks. Unlike traditional reactive approaches, this method continuously learns and adapts to new attack techniques, which matters as AI-driven threats continue to evolve.

Security professionals can then focus on investigating high-priority alerts and more complex tasks, rather than having to constantly manually review logs.

Proactive threat hunting reduces the risk of undetected intrusions or breach attempts, helping organisations achieve network security in the wake of increasingly sophisticated cyber threats.
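As a rough illustration of the behavioural anomaly detection described above, the sketch below trains a simple unsupervised model on synthetic login-activity features using scikit-learn. The features, data, and thresholds are illustrative assumptions; production systems draw on far richer telemetry and purpose-built models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic baseline of "normal" login behaviour:
# [hour of day, logins per hour, failed attempts per hour].
rng = np.random.default_rng(42)
normal_activity = np.column_stack([
    rng.normal(10, 2, 500),   # activity clustered around business hours
    rng.poisson(3, 500),      # a handful of logins per hour
    rng.poisson(0.2, 500),    # failed attempts are rare
])

# Train an unsupervised anomaly detector on the baseline.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# Score new events: a burst of failed logins at 3 a.m. should stand out.
events = np.array([
    [11, 4, 0],    # ordinary mid-morning activity
    [3, 40, 25],   # off-hours burst with many failures
])
for event, label in zip(events, model.predict(events)):
    status = "ANOMALY - flag for review" if label == -1 else "normal"
    print(event, "->", status)
```

The point is the workflow, not the specific model: the detector learns what ‘normal’ looks like and surfaces deviations, so analysts review a short list of flagged events instead of raw logs.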

Continuous Vulnerability Scanning and Risk Prioritisation

Automated vulnerability scanning tools now utilise AI to continuously monitor networks and systems for weaknesses. By constantly evaluating endpoints, servers, and applications, these automated tools are swift at identifying weaknesses such as outdated software, misconfigurations, and unpatched systems.

Machine learning and AI algorithms prioritise vulnerabilities based on exploitability and potential impact, reducing disruption and ensuring that teams focus on the most critical cyber security risks.

This dynamic scanning approach shortens the window between vulnerability discovery and remediation, mitigating the chance of exploitation. Unlike periodic manual scans, continuous monitoring ensures that new vulnerabilities are addressed in real time, supporting compliance and improving the organisation’s overall security posture.
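To show how this kind of prioritisation can work in principle, the sketch below ranks findings by a simple impact-times-likelihood score. The findings, numbers, and weighting are illustrative assumptions; real tools typically combine signals such as CVSS severity and exploit-prediction scores rather than this toy formula.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float          # severity/impact, 0-10
    exploit_prob: float  # estimated likelihood of exploitation, 0-1
    internet_facing: bool

def priority_score(f: Finding) -> float:
    """Toy scoring model: impact x likelihood, boosted if internet-facing."""
    score = f.cvss * f.exploit_prob
    if f.internet_facing:
        score *= 1.5  # illustrative weighting, not a standard
    return score

findings = [
    Finding("Outdated TLS library on web server", 7.5, 0.60, True),
    Finding("Unpatched internal file server", 9.8, 0.05, False),
    Finding("Verbose error messages on login page", 3.1, 0.10, True),
]

# Highest-risk findings first, so remediation effort goes where it matters.
for f in sorted(findings, key=priority_score, reverse=True):
    print(f"{priority_score(f):5.2f}  {f.name}")
```

Notice that the internal server with the highest raw severity ranks below the internet-facing library once likelihood and exposure are weighed in, which is exactly the judgement that severity scores alone miss.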

AI Security Threats

The use of AI is not without threats of its own. While it can be greatly beneficial for automating security processes and identifying potential threats more rapidly than manual testing teams alone, AI-related risks persist when there is too little human intervention, because human intelligence is essential for mediating, evaluating, and optimising AI systems. Without this blend of automated tooling and human oversight, attempts at AI security implementation can actually leave businesses more exposed to potential threats.

It’s important to keep this in mind when looking to implement AI as a component of an existing security strategy. Be sure to invest in manual penetration testing that is supplemented by AI tooling, and prioritise the human aspect of any security team before branching out into AI-powered systems.

Who has the upper hand?

Of course, the question we’re all asking is: Who has the upper hand?

Even with considerations for both defenders and attackers taken into account, the answer is still surprisingly complex and circumstantial.

For example, many would argue that attackers presently hold the upper hand because scripting (for good and malicious intent alike) is now simpler than ever, thanks to AI software like ChatGPT and Claude Code. In that sense, anybody could be a hacker, meaning the pool of attackers is far larger than the pool of defenders. The automation possibilities of AI also mean that attackers can be more relentless, sending potentially tens of thousands of phishing emails at any one time and heightening the chances of a successful breach.

However, it’s not all bad news: AI systems now enable defenders to identify threats more quickly than ever before, drastically shortening the window of opportunity for hackers to breach an organisation’s systems. Defenders can also scale their defences rapidly, fortifying existing security practices quickly and effectively.

Moreover, increased resources, education, and technological familiarity have heightened awareness of cybersecurity, meaning many employees can now identify inauthentic content or suspicious requests and flag them to IT before they become a security breach.

Ultimately, humans are the first and last line of defence: hackers may inundate your team with phishing emails and false requests, but as long as your employees are well educated and equipped to deal with these attempts, your security posture is far more formidable. Provided your security professionals stay informed of current threats and can implement the tooling and education to counter them, defenders largely maintain the upper hand.

How can we expect cybersecurity to develop as AI technology becomes more intelligent?

Cybersecurity professionals continue to rapidly integrate AI into security operations, enhancing capabilities across the board, and will likely keep evolving their AI tooling in line with emerging threats. For example, traditional machine learning algorithms may produce false positives, but deep neural networks can reduce them, saving professionals from wasting time investigating spurious alerts and more accurately flagging genuine security breaches.
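One simple way to reason about false positives, whatever the model family, is to measure how detection and false-alarm counts change as the alert threshold moves. The sketch below does this with purely synthetic detector scores; the distributions and thresholds are illustrative assumptions, not measurements of any real tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector scores: benign events cluster low, attacks high,
# with overlap between the two (the source of false positives).
benign_scores = rng.normal(0.2, 0.1, 10_000)
attack_scores = rng.normal(0.7, 0.15, 100)

for threshold in (0.4, 0.5, 0.6):
    false_positives = int((benign_scores >= threshold).sum())
    detected = int((attack_scores >= threshold).sum())
    print(f"threshold={threshold:.1f}  "
          f"attacks detected={detected}/100  "
          f"false positives={false_positives}")
```

A model that separates the two distributions more cleanly, as better-trained deep networks often can, lets teams pick a threshold that catches attacks without burying analysts in false alarms.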

Despite AI’s growing role, human oversight will remain essential to ensure accuracy and protect sensitive information. Human intelligence provides critical context and judgment that AI systems alone cannot replicate, especially when dealing with sophisticated cyber attacks.

Additionally, regulatory focus on access management and multi-factor authentication (MFA) is expected to increase, further strengthening security measures in the wake of challenging new threats.

How does OnSecurity’s pentesting approach blend human testing with AI automation?

OnSecurity’s pentesting approach blends human expertise with AI automation and threat intelligence tooling to provide a robust defence against evolving threats.

Protect sensitive data from cyber attacks with our platform-based pentesting programme, and empower your existing strategy with comprehensive, CREST-approved pentesting that saves time and money without compromising quality.
