AI coding assistants are among the early success stories of the generative AI revolution in business. Increasingly adopted, programming copilots are making inroads into development processes, enhancing developers’ productivity and helping stand up rudimentary projects quickly.
But they’re also a security issue, and the volume of code they are expected to produce could be a nightmare in the making for security leaders, experts suggest.
Add to the usual discussion of gen AI hallucinations an increased likelihood of exposing secrets such as API keys, passwords, and tokens in AI-generated code.
According to a recent study by GitGuardian, GitHub Copilot-enabled software repositories are more likely to have exposed secrets than standard repos: 6.4% of sampled repositories contained API keys, passwords, or tokens at risk of theft, compared with 4.6% of all repositories.
This translates to a 40% higher incidence of secret leakage, GitGuardian researchers say. They warn that the use of coding assistants may be pushing developers to prioritize productivity over code quality and security, and that code generated by large language models (LLMs) may be inherently less secure than conventionally written software.
Underlying flaws bedevilling AI-powered software development
Security experts quizzed by CSO agree that the use of AI coding assistants is resulting in less secure code and introducing other security risks.
David Benas, associate principal consultant at application security vendor Black Duck, said these security issues are a natural consequence of training AI models on human-generated code.
“The sooner everyone is comfortable treating their code-generating LLMs as they would interns or junior engineers pushing code, the better,” Benas said. “The underlying models behind LLMs are inherently going to be just as flawed as the sum of the human corpus of code, with an extra serving of flaw sprinkled on top due to their tendency to hallucinate, tell lies, misunderstand queries, process flawed queries, etc.”
While AI coding assistants such as GitHub Copilot increase developer speed, they also introduce new security risks, John Smith, EMEA chief technology officer at Veracode, told CSO.
“These tools often lack contextual awareness of security practices and, without proper oversight, can generate insecure code and persistent vulnerabilities,” Smith said. “This becomes a systemic issue as LLM-generated code spreads and creates flaws throughout the supply chain.” A recent Veracode report found that more than 70% of critical security debt now stems from third-party code.
Overlooked security controls
Mark Cherp, a security researcher at CyberArk Labs, said AI coding assistants often fail to adhere to the robust secret management practices typically observed in traditional systems.
“For example, they may insert sensitive information in plain text within source code or configuration files,” Cherp said. “Furthermore, because large portions of code are generated for early-stage products, best practices such as using secrets managers or implementing real-time password and token injection are frequently overlooked.”
Cherp added: “There have already been instances where API keys or public keys from companies such as Anthropic or OpenAI were inadvertently left in the source code or uploaded in open-source projects, making them easily exploitable. Even in closed-source projects, if secrets are hard-coded or stored in plain text within binary files or local JavaScript, the risk remains significant, as the secrets are easy to extract.”
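A minimal sketch of the contrast Cherp describes, with a hypothetical placeholder key and function name chosen for illustration: the first pattern leaves the credential in source control, while the second defers it to the runtime environment, where a secrets manager or CI system can inject it just before execution.

```python
import os

# Anti-pattern often seen in generated scaffolding: the credential lives in
# source control and in every clone of the repository.
OPENAI_API_KEY = "sk-EXAMPLE-not-a-real-key"  # hypothetical placeholder value

# Safer pattern: resolve the secret at runtime so it never enters the repo.
def get_api_key() -> str:
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; inject it at runtime")
    return key
```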
Establishing secure AI-assisted development practices
Chris Wood, principal application security SME at cybersecurity training firm Immersive Labs, described GitGuardian’s warning on the dangers of AI coding assistants as a “wake-up call.”
“While AI offers incredible potential for boosting productivity, it’s crucial to remember that these tools are only as secure as their training data and the developers’ vigilance,” Wood said.
CISOs and security leaders need to formulate comprehensive secrets management strategies as a first step. In addition, enterprises should establish clear policies around the use of AI coding assistants and provide developers with specific training on secure AI-assisted development practices.
“We must equip developers with the knowledge and skills to identify and prevent these types of vulnerabilities, even when AI assists with code creation,” Wood said. “This includes a strong foundation in secure coding principles, understanding common secret leakage patterns, and knowing how to properly manage and store sensitive credentials.”
“By empowering developers with the proper knowledge and fostering a security-first mindset, we can harness the benefits of AI while mitigating the potential for increased security vulnerabilities like secret leakage,” Wood concluded.
Proactive countermeasures
The more LLM-generated code is produced, the more developers will come to trust it, further compounding the problem and creating a vicious cycle that needs to be nipped in the bud.
“Without proper security testing, insecure AI-generated code will become the training data for future LLMs,” Veracode’s Smith warned. “Fundamentally, the way software is built is changing rapidly, and trust in AI should not come at the expense of security.”
The development of AI will continue to outpace security controls unless enterprises take proactive steps to contain the problem rather than relying on reactive fixes.
“CISOs must move fast to embed security guardrails, automating security checks and manual code reviews directly into agentic and developer workflows,” Smith advised. “Auditing third-party libraries ensures AI-generated code does not introduce vulnerabilities from unverified components.”
Automated tools such as secret scanners should be integrated into the CI/CD pipeline, followed by a mandatory human developer review, to screen software developed using AI coding assistants.
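As a rough illustration of what such a pipeline gate might look like, the sketch below checks a set of files against a handful of well-known key formats. The patterns and exit-code convention are illustrative only; a purpose-built scanner with far broader coverage would do this job in practice.

```python
import re
import sys
from pathlib import Path

# Illustrative detectors only: production scanners ship hundreds of patterns
# plus entropy checks for credentials that have no fixed prefix.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub personal access token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic credential assignment": re.compile(
        r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan(paths: list[str]) -> int:
    """Return the number of suspected secrets found across the given files."""
    findings = 0
    for path in paths:
        for lineno, line in enumerate(
            Path(path).read_text(errors="ignore").splitlines(), start=1
        ):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")
                    findings += 1
    return findings

if __name__ == "__main__":
    # In CI this would receive the files changed in a commit or pull request;
    # a non-zero exit code fails the pipeline and forces a human review.
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```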
“All AI-generated code should be continuously monitored and sanitized — with a prompt incident response plan in place to address any discovered vulnerabilities,” CyberArk’s Cherp advised.
Enterprises continue to struggle with credential management
Credential management of API keys, passwords, and tokens is a long-established problem in application security that recent innovations in AI-powered code development are compounding rather than creating.
GitGuardian’s State of Secrets Sprawl 2025 report revealed a 25% increase in leaked secrets year-over-year, with 23.8 million new credentials detected on public GitHub in 2024 alone.
Hardcoded secrets are everywhere, but especially in security blind spots such as collaboration platforms (Slack and Jira) and container environments, where security controls are typically weaker, according to GitGuardian.
Despite GitHub’s Push Protection helping developers detect known secret patterns, generic secrets, including hard-coded passwords, database credentials, and custom authentication tokens, now represent more than half of all detected leaks. That’s because, unlike API keys or OAuth tokens that follow recognizable formats, these credentials lack standardized patterns, making them nearly impossible to detect with conventional tools, GitGuardian warns.
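One heuristic conventional tools lean on to close that gap is entropy scoring: machine-generated keys look close to random, while human-chosen passwords do not. The sketch below, using illustrative example strings, shows the signal and also why it is noisy, since short or dictionary-based credentials score low and slip through.

```python
import math

def shannon_entropy(value: str) -> float:
    """Bits of entropy per character; random keys score higher than words."""
    probabilities = [value.count(ch) / len(value) for ch in set(value)]
    return -sum(p * math.log2(p) for p in probabilities)

# A prefixed, machine-generated token stands out; a human-chosen password does not.
print(shannon_entropy("ghp_x7Kq9TfLm2Rv8Jd4Hs1Wn6Pb3Zc5Ya0Qe"))  # roughly 5 bits/char
print(shannon_entropy("Summer2024!"))                             # roughly 3 bits/char
```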
GitGuardian highlights the 2024 US Treasury Department breach as a warning: “A single leaked API key from BeyondTrust allowed attackers to infiltrate government systems,” according to GitGuardian CEO Eric Fourrier. “This wasn’t a sophisticated attack — it was a simple case of an exposed credential that bypassed millions in security investments.”
Remediation lags
The study also found that 70% of leaked secrets remain active even two years after their first exposure. Delays tend to arise because remediation is complex, according to security experts.
“Leaked API keys, passwords, and tokens are often overlooked because detecting them is only part of the solution; effective remediation is complex and frequently delayed,” said Mayur Upadhyaya, CEO of cybersecurity tools vendor APIContext. “The reliance on static keys, often embedded in code for convenience, continues to be a major weak point.”
Upadhyaya added: “Best practices like rotating keys, implementing short-lived tokens, and enforcing least-privilege access are well understood but hard to sustain at scale.”
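A compact sketch of the short-lived, narrowly scoped tokens Upadhyaya describes, using an HMAC-signed payload with an expiry claim; the field names and 15-minute lifetime are illustrative, and in practice this role is usually played by an identity provider or a cloud provider’s token service.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # held only by the issuer and rotated regularly

def issue_token(subject: str, scope: str, ttl_seconds: int = 900) -> str:
    """Mint a token that names a narrow scope and expires after ttl_seconds."""
    payload = base64.urlsafe_b64encode(json.dumps(
        {"sub": subject, "scope": scope, "exp": int(time.time()) + ttl_seconds}
    ).encode()).decode()
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"

def verify_token(token: str, required_scope: str) -> bool:
    """Reject tokens that are forged, expired, or scoped too broadly."""
    payload, signature = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

# Example: a build agent gets read-only access that lapses on its own.
token = issue_token("build-agent-17", scope="artifacts:read")
assert verify_token(token, required_scope="artifacts:read")
```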
Enterprises should look to erect stronger guardrails, including automated scanning tools, proactive monitoring, and better developer support, to ensure secure practices are followed more consistently, Upadhyaya concluded.