JPMorgan Chase’s CISO Patrick Opet didn’t mince words in his April open letter to software suppliers. He wasn’t just raising concerns—he was sounding an alarm that most security leaders are still ignoring.
The 2025 Verizon Data Breach Investigations Report should make every CISO lose sleep: 30% of breaches now involve third-party components, double last year's share. And here's the kicker: this explosion in supply chain risk is happening just as AI begins writing a substantial share of our code.
What’s happening right now
Google just revealed that AI now generates 30% of their code. One of the world’s most sophisticated tech companies is letting machines write nearly a third of their software. Meanwhile, most security teams are still using tools built for a world where humans wrote everything.
The AI coding market tells the whole story. MarketsandMarkets projects it will explode from $4 billion in 2024 to nearly $13 billion by 2028.
AI coding assistants like GitHub Copilot, CodeGeeX, and Amazon Q Developer fundamentally differ from human developers in critical ways. They lack hands-on development experience, contextual understanding, and human judgment, the qualities essential for distinguishing secure code from vulnerable implementations.
These AI tools also train on vast repositories of historical code, some of which contain known vulnerabilities, deprecated encryption methods, and outdated components. When assistants reproduce those elements in new applications, they introduce software supply chain security risks that traditional security tools weren't designed to detect.
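To make that concrete, here is a minimal, hedged illustration (the function names are hypothetical) of the kind of pattern an assistant can pick up from older public repositories: fast, unsalted MD5 hashing for passwords, shown next to the salted, memory-hard alternative that current guidance favors.

```python
import hashlib
import os

# Pattern still common in older public repositories, and easy for an
# assistant to reproduce verbatim: unsalted MD5 for password storage.
def hash_password_legacy(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()  # fast to brute-force, no salt

# Current guidance: a salted, memory-hard key derivation function such as scrypt.
def hash_password_current(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest
```

Both functions compile and run, which is exactly the problem: nothing about the legacy version looks "broken" to a tool that only checks whether the code works.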
Why traditional security tools fall short
Traditional security tools like Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA) focus primarily on known vulnerability patterns and component versions. These tools cannot effectively evaluate AI-specific threats, such as data poisoning attacks and memetic viruses, which can corrupt machine-learning models and lead to the generation of exploitable code.
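For context, here is a rough sketch of how SCA-style matching works, using a hypothetical advisory list rather than a real feed (real tools consume sources such as the NVD or OSV). The point is that name-and-version matching has no visibility into model-level threats.

```python
# Hypothetical advisory data: component name -> (vulnerable-from, fixed-in, advisory id)
HYPOTHETICAL_ADVISORIES = {
    "example-lib": [("1.0.0", "1.4.2", "EXAMPLE-ADVISORY-0001")],
}

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def check_component(name: str, version: str) -> list[str]:
    """Flag a component whose pinned version falls inside a known-vulnerable range."""
    findings = []
    for low, high, advisory_id in HYPOTHETICAL_ADVISORIES.get(name, []):
        if parse(low) <= parse(version) < parse(high):
            findings.append(advisory_id)
    return findings

print(check_component("example-lib", "1.3.0"))  # ['EXAMPLE-ADVISORY-0001']
# Matching names and versions like this says nothing about whether a model's
# training data was poisoned or whether AI-generated logic is exploitable.
```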
Newer startups in the AI security space share many of the same limitations as legacy solutions when it comes to file size and complexity. Nor can they comprehensively analyze a model for all of its potential risks, such as embedded malware, tampering, and deserialization attacks hidden in model file formats.
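One generic illustration of why serialized model formats are risky (not tied to any specific vendor or tool): pickle-based formats execute code during loading, so merely opening a tampered model file can run attacker-chosen logic.

```python
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle calls the returned callable with these
        # arguments. A real attack would invoke os.system or similar.
        return (print, ("arbitrary code ran during model load",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message: code executed just by loading the blob
```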
Traditional security tools also fall short because they typically analyze code during development rather than examining the final, compiled application. That approach creates blind spots where malicious modifications introduced during the build process or through AI assistance remain undetected.
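A minimal sketch of the alternative, assuming a hypothetical build manifest and a zip-packaged artifact: hash what actually ships and flag anything the source-level scans never saw.

```python
import hashlib
import zipfile

# Hypothetical expected manifest: path inside the built artifact -> sha256 of
# the file the build was supposed to produce (placeholder digests shown here).
EXPECTED_MANIFEST = {
    "app/main.py": "0" * 64,
    "app/requirements.txt": "0" * 64,
}

def audit_artifact(path: str) -> list[str]:
    """Return files in the shipped archive that are missing from the manifest
    or whose contents changed after the source-level scans ran."""
    unexpected = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            digest = hashlib.sha256(archive.read(name)).hexdigest()
            if EXPECTED_MANIFEST.get(name) != digest:
                unexpected.append(name)
    return unexpected

# Usage: audit_artifact("release_build.zip") -> names of unexpected inclusions
```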
What security teams need to do next
As organizations increasingly incorporate AI coding tools, they must evolve their security strategies. AI models can be gigabytes in size and generate complex file types that traditional tools simply can't process. Addressing these emerging risks requires analysis capabilities built for that scale, along with comprehensive software supply chain security measures capable of:
- Verifying the provenance and integrity of AI models used in development (see the sketch after this list)
- Validating the security of components and code suggested by AI assistants
- Examining compiled applications to detect unexpected or unauthorized inclusions
- Monitoring for potential data poisoning that might compromise AI systems
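As a starting point for the first item above, here is a minimal sketch, assuming a hypothetical model file and a digest published by the provider through a trusted channel; real deployments would typically layer cryptographic signatures on top of a bare checksum.

```python
import hashlib

# Hypothetical pinned digest, obtained from a channel you trust
# (signed release notes, an internal registry, a provider manifest).
PINNED_SHA256 = "replace-with-the-provider-published-digest"

def verify_model(path: str, expected_sha256: str = PINNED_SHA256) -> None:
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks: AI models are often multiple gigabytes.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    if sha.hexdigest() != expected_sha256:
        raise RuntimeError(f"{path} failed integrity check; do not load it")

# verify_model("assistant-model.safetensors")  # hypothetical file name
```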
Organizations that adapt their security strategies with comprehensive software supply chain security, capable of analyzing everything from massive AI models to the compiled applications they help create, will thrive. Those that don't will become cautionary tales in next year's breach reports.
Sources: Verizon Data Breach Investigations Report 2025, MarketsandMarkets, JPMorgan Chase CISO Open Letter, The Verge
