How can generative AI be used in cybersecurity?
Generative AI creates text, images, and code from simple prompts. This technology is changing cybersecurity quickly, giving both attackers and defenders powerful new tools to work with.
Cybercriminals can now craft convincing phishing emails in seconds and generate malicious code automatically. On the flip side, security teams are using the same technology to detect threats faster, automate incident responses, and predict attack patterns before they strike.
This guide covers both sides of AI in cybersecurity. Learn the risks, practical ways to use AI defensively, and what’s coming next for AI-powered security.
Generative AI in cybersecurity: Offensive vs defensive
Generative AI works for both sides in cybersecurity. The same technology that helps security teams can also make attackers more dangerous.
AI helps defenders by:
- Spotting threats faster across large networks
- Writing incident reports and security documentation automatically
- Automating repetitive security tasks
- Analysing attack patterns to predict future threats
However, attackers have also realised its potential. Cybercriminals are using AI to:
- Craft convincing phishing emails without language barriers
- Create malware variants that dodge signature-based detection
- Run large-scale social engineering campaigns
- Build attack tools without deep technical knowledge
The gap between these capabilities is narrowing fast. Security teams need to properly understand how attackers use generative AI to build better defences. Those that don’t will struggle to keep up with growing threats, putting their business at risk.
Let’s take a closer look at how both sides are (or could be) leveraging the technology.
How cybercriminals use generative AI for attacks
Attackers have quickly adopted AI tools to make their operations more effective and more challenging to detect.
AI-powered phishing at scale
Criminals use AI to write convincing phishing emails in any language, tailored to specific companies or roles. These emails often pass basic spam filters because they’re grammatically correct and contextually relevant. Attackers can generate thousands of variations, making detection much harder.
Malware creation and evolution
AI helps criminals write malicious code faster and create variants that dodge signature-based detection. They can modify existing malware automatically or generate new attack scripts without deep programming knowledge. Some tools can even suggest ways to make malware more effective.
Exploiting insecure AI-generated code
Many developers now use AI coding assistants, but AI-written code often contains security flaws. Attackers scan for these vulnerabilities in applications and exploit prompt injection techniques to manipulate AI systems.
Advanced social engineering
Deepfake audio and video tools let criminals impersonate executives or trusted contacts. AI chatbots can maintain convincing conversations with targets over long periods, gathering information for more elaborate social engineering attacks.
Current limitations
However, despite these applications and the huge potential, research shows that truly new, AI-written malware remains rare. Most criminals still rely on existing tools and techniques, using AI mainly to scale their operations rather than create groundbreaking attacks, at least for now.
Defensive cybersecurity use cases for generative AI
Security teams are finding practical ways to use AI that deliver real results without replacing human expertise.
Automated threat detection
AI analyses network traffic patterns and log files to spot unusual behaviour that traditional rules might miss. Automated continuous vulnerability scanning removes the guesswork from detecting vulnerabilities in your internet-facing infrastructure, and certain tools can process millions of events and flag genuine threats while filtering out false positives.
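To make the idea of spotting unusual behaviour concrete, here is a minimal sketch of the statistical core of anomaly detection: flagging hosts whose event volume deviates sharply from the baseline. The field names, thresholds, and data are purely illustrative; real tools combine many such signals with learned models.

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Flag sources whose event volume deviates sharply from the baseline.

    Uses a median-based (MAD) score, which stays robust even when the
    outliers we are hunting for would skew a simple mean/stdev baseline.
    """
    counts = list(event_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)
    if mad == 0:  # all sources identical: nothing stands out
        return []
    return [src for src, n in event_counts.items()
            if 0.6745 * abs(n - med) / mad > threshold]

# Example: one host generating far more log events than its peers
logs = {"10.0.0.1": 120, "10.0.0.2": 130, "10.0.0.3": 125, "10.0.0.4": 5400}
print(flag_anomalies(logs))  # → ['10.0.0.4']
```

In practice an AI-driven platform layers context on top of scores like this, so a noisy-but-benign host is filtered out rather than raised as a false positive.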
AI-augmented penetration testing
AI accelerates security testing by automatically generating test cases, identifying potential attack vectors, and analysing code for vulnerabilities. Expert, CREST-accredited testers can use AI-augmented penetration testing to simulate more attack scenarios in less time, giving organisations a clearer picture of their security gaps without the classic resource constraints.
Incident response acceleration
When breaches happen, AI speeds up the response by summarising attack timelines, correlating events across multiple systems, and suggesting containment steps. Security teams can understand what happened in minutes rather than hours, helping them stop attacks before they spread.
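The "correlating events across multiple systems" step can be sketched in a few lines: group records from different tools by a shared indicator and sort them into one timeline. The event shape and indicator (a source IP) are assumptions for illustration; real platforms correlate on many indicator types at once.

```python
def build_timeline(events):
    """Merge events from multiple systems into chronological timelines,
    grouped by a shared indicator (here, a source IP)."""
    timeline = {}
    for ev in events:
        timeline.setdefault(ev["src_ip"], []).append(ev)
    for evs in timeline.values():
        # ISO 8601 timestamps sort correctly as strings
        evs.sort(key=lambda e: e["time"])
    return timeline

# Events as a firewall, an auth server, and an EDR agent might report them
events = [
    {"time": "2025-01-10T09:17:02", "system": "edr",      "src_ip": "203.0.113.9", "action": "suspicious process"},
    {"time": "2025-01-10T09:02:41", "system": "firewall", "src_ip": "203.0.113.9", "action": "port scan"},
    {"time": "2025-01-10T09:05:10", "system": "auth",     "src_ip": "203.0.113.9", "action": "failed logins"},
]
for ev in build_timeline(events)["203.0.113.9"]:
    print(ev["time"], ev["system"], ev["action"])
```

Once events are stitched together like this, a language model can summarise the timeline into a readable incident narrative and suggest containment steps, which is where the minutes-not-hours speedup comes from.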
Security Operations Centre (SOC) workflow automation
AI handles the repetitive tasks that might burn out analysts. It writes investigation summaries, creates ticket updates, and handles initial triage of security alerts. This frees up skilled staff to focus on complex threats that need human judgment.
Data security and governance
AI tools scan for sensitive information flowing into unsecured systems or public AI platforms. They can classify data automatically and prevent employees from accidentally sharing confidential information with external AI services.
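A minimal sketch of the scanning side of this idea: pattern-match outbound prompts for sensitive data before they reach an external AI service. The regexes below are deliberately simple, illustrative detectors (the `sk_`/`pk_` key format is an assumption, not a specific vendor's scheme); production DLP tools use far richer classifiers.

```python
import re

# Illustrative patterns only; real deployments use far richer detectors
PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return the kinds of sensitive data found in text bound for an
    external AI service, so the prompt can be blocked or redacted first."""
    return sorted(kind for kind, pat in PATTERNS.items() if pat.search(text))

print(scan_prompt("Summarise: contact jane@example.com, key sk_live1234567890abcdef"))
# → ['api_key', 'email']
```

A gateway sitting between staff and public AI platforms can run a check like this on every request, redacting or rejecting prompts that match, which is how accidental sharing of confidential information gets stopped automatically.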
Compliance support
AI streamlines compliance by automatically generating audit trails, tracking data handling practices, and ensuring security measures align with frameworks such as ISO 27001, SOC 2, and PCI DSS.
Pattern recognition and threat intelligence
AI processes vast amounts of threat intelligence data to identify attack patterns and emerging tactics. It can spot connections between seemingly unrelated incidents that human analysts might miss across large, complex networks.
Managing the risks of generative AI in cybersecurity
While generative technology can massively support cybersecurity teams, the AI arms race also means attackers will keep getting better tools.
At the same time, many companies are adopting AI tools quickly without proper risk assessments, creating blind spots that attackers can exploit. It’s crucial to close these governance gaps before they become security problems.
Best practices for safe AI adoption include:
- Setting clear usage policies for your teams about which AI tools they can use and for what purposes
- Vetting AI platforms before deployment, checking their security features, data handling practices, and compliance credentials
- Training staff on secure prompt engineering to avoid accidentally sharing sensitive information
- Auditing AI-generated code, reports, and decisions regularly for security flaws or biased outputs
- Monitoring what data flows into AI systems and ensuring it doesn’t include confidential information
Whenever you incorporate a new AI tool, it’s best to start with pilot programmes in low-risk areas. This enables you to test the tool thoroughly before expanding its use across critical systems. You gain experience with generative AI’s many benefits while still keeping security risks manageable.
What is the future of generative AI in cybersecurity?
The next few years will bring major shifts in how both attackers and defenders use AI:
- More sophisticated deepfake social engineering, as voice and video generation improve
- Fully automated attack chains, from reconnaissance to payload delivery, with minimal human involvement
- AI-accelerated vulnerability discovery that finds security flaws faster than defenders can patch them
- Defensive AI platforms that coordinate security responses across multiple systems without human input
- Real-time autonomous defenders that can contain threats instantly
Regulators are already drafting and rolling out AI compliance requirements (such as the EU AI Act), especially for organisations that handle sensitive data or critical infrastructure. New frameworks will likely mandate AI transparency and accountability in security decisions.
Successfully using generative AI in cybersecurity comes down to balancing innovation with security fundamentals, particularly while the technology is still maturing. Organisations that start building AI capabilities now, with proper safeguards, will have significant advantages over those that wait, but those that rush in without preparation risk creating new attack surfaces.
Stay ahead of AI-powered threats with OnSecurity’s AI-augmented penetration testing. Get an instant quote today and discover how we help your business build stronger defences and safely use the technology in the age of generative AI.