Within many organizations, AI has quietly become the fastest developer on the team, generating functions, tests, and even complex integrations faster than anyone can review them.
The productivity increase is undeniable. Microsoft and Google have claimed that over 30% of their enterprise code is now AI-generated, and that figure is rapidly climbing. Yet as teams embrace these tools to accelerate releases, they are discovering a new challenge: the speed of AI-generated development has outpaced security and governance models that were designed for human output.
For DevOps teams bridging code, pipelines, and production, this gap is becoming the new battleground for software security.
The AI Velocity Gap
AI-generated coding is dramatically faster. In some enterprise scenarios, productivity has increased by up to 4X, meaning that developers are checking in four times more code, often without four times the security coverage.
That productivity increase creates a counterintuitive security challenge. Even when AI writes better code than humans, sheer volume overwhelms the quality gains.
Consider the math: a developer who introduces 15 vulnerabilities per thousand lines of code produces 60 vulnerabilities instead of 15 in the same time frame at 4X AI productivity. Even if AI cuts defects per line by 66%, that's still 20 total vulnerabilities instead of 15. At 10X productivity with the same quality improvement, you're at 50 vulnerabilities.
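The arithmetic above reduces to one formula: total vulnerabilities scale with the productivity multiplier times the per-line defect rate. A small sketch (the 66% figure in the text is two-thirds, rounded):

```python
# Velocity-gap math: volume multiplier x defect rate = total vulnerabilities
# produced in the same time frame.
BASELINE_VULNS_PER_KLOC = 15

def total_vulns(productivity_x: float, quality_improvement: float = 0.0) -> float:
    """Vulnerabilities checked in during one baseline time frame.

    productivity_x: how many times more code is produced (4 for 4X).
    quality_improvement: fractional reduction in defects per line (2/3 ~ 66%).
    """
    rate = BASELINE_VULNS_PER_KLOC * (1 - quality_improvement)
    return productivity_x * rate

print(total_vulns(1))                 # 15.0 (human baseline)
print(total_vulns(4))                 # 60.0 (4X volume, same quality)
print(round(total_vulns(4, 2 / 3)))   # 20   (4X volume, 66% better quality)
print(round(total_vulns(10, 2 / 3)))  # 50   (10X volume, 66% better quality)
```

The takeaway is visible in the last two lines: even a large per-line quality gain is swamped once the volume multiplier grows.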
AI doesn't write worse code. Rather, the velocity gap comes from the sheer volume increase outpacing validation capacity designed for human output.
Traditional DevSecOps workflows — build, test, deploy, scan — were designed for linear pipelines. AI-generated code breaks that model by producing output continuously and contextually, across IDEs and CI/CD stages. Security can no longer play catch-up downstream.
But speed isn't the only challenge. AI adoption is happening across multiple vectors simultaneously. Developers use ChatGPT in browsers, AI libraries get embedded in code repositories, and autonomous agents deploy in production environments. Traditional security tools can't see this full picture because AI doesn't respect traditional perimeters. Network monitoring catches browser-based tools but misses code dependencies. Code scanning detects libraries but can't see what employees access through edge devices.
Shadow AI and Ungoverned Adoption
The real security challenge is determining which AI tools employees are using without IT oversight.
When a developer embeds an AI library into a repository, uses Claude for research, or deploys an autonomous agent, they're introducing third-party risk that security teams can't see. These aren't just productivity tools. They're systems with access to proprietary code, customer data, and intellectual property.
Unlike traditional software where procurement processes provide visibility, AI tools proliferate through individual adoption. A finance team member can deploy an AI-powered application to production in minutes, complete with access to payroll data, without security review or approval.
The third-party risk management (TPRM) parallel is clear: organizations wouldn't allow employees to onboard vendors without risk assessment. Yet that's exactly what's happening with AI.
Managing AI as Infrastructure
Leading organizations are treating AI adoption like any other third-party risk: requiring visibility, assessment, and approval workflows before deployment.
This means:
■ Multi-source detection: Aggregating signals from network traffic, endpoints, code repositories, and cloud environments to understand the complete AI footprint.
■ Centralized inventory: Maintaining a system of record for every AI tool and agent in use, with risk profiles and compliance status.
■ Streamlined approvals: Enabling security teams to assess AI requests quickly without becoming organizational bottlenecks.
■ Continuous monitoring: Tracking changes to AI tool security postures and triggering reassessment when risk profiles change.
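The four pillars above fit naturally into a single system of record. A minimal sketch, assuming illustrative names (AITool, Inventory, and the risk-score threshold are hypothetical, not any specific product's API):

```python
# Minimal AI-tool inventory sketch: multi-source detection feeds one record
# system, approvals auto-clear low-risk tools, and continuous monitoring
# flags tools whose risk profile changes.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    NEEDS_REVIEW = "needs_review"

@dataclass
class AITool:
    name: str
    source: str       # detection vector: network, endpoint, repo, or cloud
    risk_score: int   # 0-100, from the vendor risk assessment
    status: Status = Status.PENDING

class Inventory:
    """Centralized system of record for every AI tool and agent in use."""

    def __init__(self, risk_threshold: int = 70):
        self.tools: dict[str, AITool] = {}
        self.risk_threshold = risk_threshold

    def register(self, tool: AITool) -> None:
        # Multi-source detection: whatever the vector, it lands in one place.
        self.tools[tool.name] = tool

    def approve(self, name: str) -> bool:
        # Streamlined approvals: auto-clear low-risk tools, flag the rest.
        tool = self.tools[name]
        if tool.risk_score < self.risk_threshold:
            tool.status = Status.APPROVED
            return True
        tool.status = Status.NEEDS_REVIEW
        return False

    def reassess(self, name: str, new_risk: int) -> None:
        # Continuous monitoring: a changed risk profile triggers re-review.
        tool = self.tools[name]
        tool.risk_score = new_risk
        if new_risk >= self.risk_threshold:
            tool.status = Status.NEEDS_REVIEW

inv = Inventory()
inv.register(AITool("claude", source="network", risk_score=30))
inv.approve("claude")          # low risk: auto-approved
inv.reassess("claude", 85)     # posture change: flagged for re-review
```

The point of the sketch is the workflow shape, not the fields: detection, inventory, approval, and monitoring share one record per tool, which is what makes the governance defensible.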
The key is making AI governance as frictionless as possible while maintaining defensible oversight.
Toward Secure-by-Default AI Development
The long-term goal is to govern AI intelligently, not restrict its use. Organizations that succeed in this area focus on tracking and validating how AI tools are used rather than blocking them. They work to understand how much of their codebase is AI-generated, whether those modules have higher defect or incident rates, and which tools developers rely on, both approved and unsanctioned.
By treating AI-generated code as its own asset class within application security posture management, enterprises gain visibility and control. This will be the next stage of DevSecOps, where AI becomes both a productivity multiplier and a managed risk category.
The Bottom Line
AI disruption is here to stay in software engineering. Developers are already coding with it, often without formal approval. DevOps leaders can't stop this shift, but they can secure it.
Managing security at machine speed requires new thinking about visibility, validation, and governance. Security checks should be built into the same automated workflows that drive CI/CD pipelines, ensuring protection keeps pace with coding acceleration.
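One way to build such a check into the pipeline itself is a gate step that fails the build when a scan of the changeset reports findings above a severity threshold. A hedged sketch, where `gate` and the findings format are illustrative stand-ins for whatever scanner (SAST, SCA, secrets) a team actually runs:

```python
# Pipeline security gate sketch: the exit code decides whether the
# CI/CD stage passes (0) or blocks the merge (1).
import sys

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list[dict], fail_at: str = "high") -> int:
    """Return 0 to pass the pipeline stage, 1 to block it."""
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCK: {f['rule']} ({f['severity']}) in {f['file']}")
    return 1 if blocking else 0

if __name__ == "__main__":
    # In CI the findings would come from the scanner's report; this is a stub.
    findings = [
        {"rule": "hardcoded-secret", "severity": "critical", "file": "app.py"},
        {"rule": "weak-hash", "severity": "medium", "file": "auth.py"},
    ]
    sys.exit(gate(findings))
```

Because the gate runs in the same automated workflow as build and test, protection scales with the same machine speed as the code it checks.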
Organizations that move quickly will establish AI governance frameworks now, before board-level questions about AI exposure become urgent fire drills. The winners will be those who gain comprehensive visibility across all AI adoption vectors (not just code repositories) and implement approval workflows that enable secure innovation rather than blocking it.
AI will remain your fastest developer. The question is whether you'll have visibility into what else it's becoming.
Industry News
JFrog announced its partnership with iZeno Pte Ltd, a Singapore-headquartered enterprise technology solutions provider.
Red Hat announced an expanded collaboration with Google Cloud to help organizations accelerate application modernization and cloud migrations.
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced the contribution of SQLMesh, an open source data transformation framework, to the Foundation by Fivetran.
Check Point® Software Technologies Ltd. released the AI Factory Security Architecture Blueprint — a comprehensive, vendor-tested reference architecture for securing private AI infrastructure from the hardware layer to the application layer.
CMD+CTRL Security won the following awards from Cyber Defense Magazine (CDM), the industry’s leading electronic information security magazine: Most Innovative Cybersecurity Training and Pioneering Secure Coding: Developer Upskilling.
Check Point® Software Technologies Ltd. announced the Check Point AI Defense Plane, a unified AI security control plane designed to help enterprises govern how AI is connected, deployed, and operated across the business.
Oracle announced the latest updates to Oracle AI Agent Studio for Fusion Applications, a complete development platform for building, connecting, and running AI automation and agentic applications.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced that Istio has launched a host of new features designed to meet the rising needs of modern, AI-driven infrastructure while reducing operational complexity.
Chainguard announced Chainguard Repository, a single Chainguard-managed experience for pulling secure-by-default open source containers, dependencies, OS packages, virtual machine images, CI/CD workflows, and agent skills that have built-in, intelligent policies to enforce enterprise security standards.
Backslash Security announced new cross-product support for agentic AI Skills within its platform, enabling organizations to discover, assess, and apply security guardrails to Skills used across AI-native software development environments.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Kyverno, a Kubernetes-native policy engine that enables organizations to define, manage and enforce policy-as-code across cloud native environments.
Zero Networks announced the Kubernetes Access Matrix, a real-time visual map that exposes every allowed and denied rule inside Kubernetes clusters.
Apiiro announced AI Threat Modeling, a new capability within Apiiro Guardian Agent that automatically generates architecture-aware threat models to identify security and compliance risks before code exists.
GitLab released GitLab 18.10, making it easier and more affordable to use agentic AI capabilities across the entire software development lifecycle.