Understanding vibe coding and AI-generated code in cybersecurity

Explore the rise of AI-generated code in vibe coding, its cybersecurity risks, and how to secure fast, intuitive development without sacrificing safety.

Vibe coding is an emerging software development practice in which AI generates code from natural language descriptions. Typically, large language models (LLMs) like ChatGPT or Claude are used for their ability to transform a developer’s plain-English request into working code. Developers then guide the AI’s progress through iterative feedback.

Its rise to relevance has been driven largely by the growth of AI-assisted development, itself a consequence of increased organisational pressure for fast delivery and proactive defence against equally fast-moving threats. While vibe coding and AI can be incredibly exciting as concepts, they present unique security risks. This blog will outline how developers can boost productivity through AI-enhanced vibe coding while avoiding the creation of security vulnerabilities and blind spots.

The rise of AI in vibe coding

Development culture is experiencing unprecedented change with the rise of AI coding, particularly complementing the informal, “vibe-driven” approaches that have become prevalent in modern software development. These tools seamlessly integrate into fast-paced workflows where developers prioritise rapid iteration over meticulous planning.

The emergence of AI assistants like Cursor and Claude Code has also contributed notably to changes in the traditional development rhythm. These tools excel at generating boilerplate code, implementing common patterns, and even tackling complex algorithmic challenges, allowing developers to maintain flow states and “code faster than they think.”

This capability perfectly aligns with vibe-based development cultures that emphasise momentum and experimentation over rigid architectural planning, also reflecting a wider cultural shift in cybersecurity towards continual optimisation and oversight versus rigid, periodic testing. AI coding tools enable this change of pace by reducing the friction of implementing ideas: developers can focus on problem-solving and creativity rather than tedious implementation details.

The benefits are substantial: on-demand code assistance, enhanced creative exploration, and broader access for non-experts who can now translate ideas into functional code with AI code generation.

However, this acceleration comes with a critical trade-off: significantly reduced manual oversight. When developers can generate code at unprecedented speeds, the traditional practices of careful code review and deliberate testing often fall by the wayside, creating entirely new blind spots of potential system vulnerabilities.

Security risks of AI-generated code

The exciting possibilities of AI tools and vibe coding should not overshadow the risks they present, particularly in a professional setting. Fast, vibe-based development environments become risky when teams prioritise speed over thoroughness, reducing code quality or reproducing insecure patterns from the model’s training data.

Here are some of the most common vulnerabilities presented by generative AI and vibe coding, if best practice is not followed:

Insecure defaults

AI models learn from vast datasets that, very often, contain insecure coding patterns and outdated practices. When generating code, these tools frequently reproduce dangerous defaults like weak encryption algorithms, insufficient input validation, and deprecated security libraries.

This creates a damaging bias toward reproducing the same vulnerabilities that plague codebases all over the internet.
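As a hedged illustration (the function names are hypothetical), a classic insecure default that AI assistants still frequently emit is fast, unsalted password hashing. Python’s standard library offers a much safer drop-in:

```python
import hashlib
import os


def hash_password_insecure(password: str) -> str:
    # Pattern often reproduced from old training data: unsalted MD5 (CWE-328).
    # Identical passwords always produce identical hashes, enabling
    # rainbow-table attacks.
    return hashlib.md5(password.encode()).hexdigest()


def hash_password(password: str, salt: bytes = None) -> tuple:
    # Safer default: salted PBKDF2-HMAC-SHA256 with a high iteration count,
    # in line with current OWASP guidance. Returns (salt, digest).
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```

The insecure version is not wrong in a functional sense, which is exactly why it slips through: the code runs, tests pass, and the weakness only shows up under attack.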

False sense of security

While AI can be incredibly impressive, it’s important that development teams don’t become overly trusting of the code it produces. Developers may accept AI-generated code too readily, assuming it is free from errors or security flaws simply because it was produced by an advanced machine learning model. This misplaced confidence can lead to insufficient code reviews and testing.

Lack of context awareness

AI still lacks something fundamental that humans understand: business logic, contextual awareness, and understanding of real-world applications of complex code. Its inability to recognise the broader implications of code changes or the requirements of a specific application means that AI-generated code might function correctly, but fail to meet the intended goals or security standards.

This weakness can lead to generated code that overlooks critical security requirements, such as proper user authentication, error handling, and secure data management. Without a deep understanding of the development context, AI tools may produce code blocks that introduce vulnerabilities or fail to align with the overall tech stack and code structure, exposing the organisation to regulatory penalties and compensation claims if personal data is put at risk.
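A minimal sketch of this failure mode, with entirely hypothetical data and function names: a record-lookup helper that works perfectly in isolation, but silently skips the business rule “users may only read their own records”, versus a context-aware version that enforces it.

```python
# Hypothetical in-memory data store standing in for a real database.
RECORDS = {
    1: {"owner": "alice", "note": "private"},
    2: {"owner": "bob", "note": "secret"},
}


def get_record_naive(record_id: int) -> dict:
    # Functionally correct, and plausible AI output for "fetch a record by id",
    # but any caller can read any record (CWE-639, broken object-level auth).
    return RECORDS[record_id]


def get_record(record_id: int, requesting_user: str) -> dict:
    # Context-aware version: the ownership rule lives in the application's
    # business logic, which the AI could not know without being told.
    record = RECORDS[record_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not the record owner")
    return record
```

No generic test suite catches the naive version; only a reviewer who knows the application’s access rules will.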

Outdated practices

AI models have a training data cutoff – the latest date of information in their training dataset. Models like GPT-5 may lack knowledge of recent library updates, API changes, or new documentation released after this cutoff, requiring developers to verify current information independently.

Additionally, many AI models are trained on vast datasets collected from various sources, including open-source repositories, forums, and documentation, where outdated or insecure existing code snippets may still be prevalent. AI often includes this outdated content as part of its ‘learning’ process, meaning it’s possible the code it generates could also be outdated or insecure.
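To make this concrete, here is one pattern (function names hypothetical) that is still common in older tutorials and repositories, alongside its modern replacement from the standard library:

```python
import random
import secrets


def reset_token_outdated() -> str:
    # Common in older sample code: the random module is predictable and not
    # suitable for security tokens (CWE-338).
    return "".join(random.choice("abcdef0123456789") for _ in range(32))


def reset_token() -> str:
    # Current practice: a cryptographically secure, URL-safe token via the
    # secrets module (added to the stdlib in Python 3.6).
    return secrets.token_urlsafe(32)
```

Both functions return a plausible-looking token, which is precisely why outdated generated code is hard to spot without review or tooling.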

Attack surface expansion

AI and vibe coding allow developers to produce far more code in shorter timeframes, which can be great for development tasks under pressure. However, more code means more places for bugs and exploits to lurk, leading to what is known as ‘attack surface expansion’. It’s important for developers to continually review the generated code to prevent such bugs from slipping under the radar.

How to secure AI-generated and vibe code

The possible risks of vibe coding, and of utilising AI tools, should not dissuade you from giving it a go. With best practices and some foundational training, it is very possible to identify insecure AI code and generate code of comparable quality to manual coding. Here are a few tips for best practice, and how businesses can best support development teams when vibe coding:

Shift-left security

Expanding shift-left security practices to vibe coding environments is critical. By embedding security scans such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) early in the development cycle, teams can effectively catch vulnerabilities introduced by AI-generated code before they spread.
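For a sense of what SAST catches early, here is a sketch (function names hypothetical) of a vulnerability class that tools such as Bandit routinely flag in Python: building a shell command by string interpolation versus passing an argument list.

```python
import subprocess


def run_unsafe(user_input: str) -> str:
    # shell=True with interpolated input is a classic SAST finding (CWE-78):
    # input like "hello; echo INJECTED" executes the injected command.
    result = subprocess.run(
        f"echo {user_input}", shell=True, capture_output=True, text=True
    )
    return result.stdout


def run_safe(user_input: str) -> str:
    # Argument-list form: the input is a single argv entry and is never
    # parsed by a shell, so metacharacters are harmless.
    result = subprocess.run(
        ["echo", user_input], capture_output=True, text=True
    )
    return result.stdout
```

Catching this at commit time, rather than in production, is exactly the point of shifting security left.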

Developer education

Educating developers is essential to reducing the risks linked to vibe coding. Training should emphasise identifying security issues in AI-generated code, such as the use of outdated libraries or insecure coding practices. Teaching developers effective prompt engineering techniques helps guide AI tools to produce safer, higher-quality code.

Building familiarity with conversational interfaces and chat transformers also helps developers request code explanations, generate unit tests, and apply consistent modifications, fostering a culture of responsible and secure AI-assisted coding.

Tooling and automation

Tooling and automation are essential for secure, efficient development. Linting improves code quality, while dependency checking highlights outdated or vulnerable libraries. AI-aware scanners detect risks unique to machine learning and generative models.

Importantly, AI can audit AI, with large language models reviewing generated code for CWE issues. This layered automation reduces manual effort, strengthens security, and helps teams address threats quickly while maintaining consistent coding standards across projects.
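As a toy stand-in for such an automated reviewer (the rules and messages here are purely illustrative), a few well-known CWE-linked Python patterns can be flagged with simple matching; a real pipeline would pair a proper SAST tool with an LLM review pass rather than rely on rules this crude.

```python
import re

# Illustrative rule set mapping risky patterns to CWE identifiers.
RULES = [
    (re.compile(r"\beval\("), "CWE-95: eval() on dynamic input"),
    (re.compile(r"\bpickle\.loads?\("), "CWE-502: unpickling untrusted data"),
    (re.compile(r"shell\s*=\s*True"), "CWE-78: subprocess with shell=True"),
]


def review(code: str) -> list:
    # Scan each line of a code snippet and report rule hits with line numbers.
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append(f"line {lineno}: {message}")
    return findings
```

Even this crude layer demonstrates the principle: machine-generated code can be machine-checked before a human ever reviews it.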

Vibe coding best practices

Alongside these practical measures, it’s also critical that general best practices are followed, even when the code is AI-generated. A recommended way of enforcing this is to introduce AI pair-programming policies within your organisation. For example, any code generated by a chat transformer must also be reviewed by its human developer pair, adapting traditional pair programming to a vibe-coding or AI-enhanced environment. That way, the risk of improper code generation is minimised while the human developer flexes their muscles through oversight, support and continual improvement.

By enforcing the suggested best practices, professional developers, or even those experimenting with code through AI chat transformers, can catch problems early.

The future of vibe coding: What’s next?

The future of vibe coding is as complex and versatile as it is exciting. It’s safe to say that AI coding tools are here to stay, but they will need secure integration into the SDLC (software development lifecycle) to minimise the risk of security gaps. Growing regulatory scrutiny, especially surrounding AI use, will likely lead to requirements for AI coding certifications and secure coding frameworks.

Vibe coding is sure to advance from its initial chaotic stage into a phase marked by creativity and stronger security measures. While this transition will bring both opportunities and challenges for development teams, adopting best practices and maintaining a proactive approach will enable them to excel during this evolving era of vibe coding.

Start your journey to stronger cybersecurity today with OnSecurity’s bespoke pentesting platform. Get an instant quote and discover how we can help you identify and secure potential code vulnerabilities.
