Artificial intelligence tools are becoming essential to software development, and developers find themselves at a crossroads.
On the one hand, they're adopting AI faster than ever, using it to streamline tasks, enhance productivity, and drive innovation.
On the other hand, distrust and frustration with AI's outputs are growing, particularly among developers handling critical tasks.
Recent findings from Stack Overflow's 2025 Developer Survey illustrate this tension. While more than 80% of developers now regularly use AI tools on the job, confidence in the accuracy of the code and information those tools generate is lacking. This discrepancy points to a widening trust gap that is hard to ignore. To keep teams effective and productive in the age of AI, leaders must understand why developer trust in AI is falling and proactively implement strategies to rebuild it.
What's Behind the Lack of Trust in AI?
This erosion of trust stems largely from developers' personal experiences with unreliable AI outputs. While adoption of AI tools has climbed, these tools, which are designed to make developers' workflows easier, have frequently generated solutions that are incomplete, lack nuance, or contain incorrect information. This is the major pain point for developers using AI, with 66% expressing frustration that AI often provides "almost right" answers. This naturally leads to developers' second-biggest frustration with AI: spending more time debugging and fixing generated code than if they had written it themselves from scratch.
Adding to these frustrations, AI struggles with tasks that require deeper context or complexity. As a result, developers remain hesitant to assign AI to critical functions such as project deployment and monitoring (76%) and project planning (69%). These tasks often demand nuance that AI solutions have not yet mastered, so they still require a human to manage them. And while AI agents have been heralded as a boon that would deliver significant productivity gains for exactly this kind of work, they remain more aspirational than practical. Despite a great deal of industry hype, most developers have held back from adopting agents due to concerns over accuracy, security risks, and potential costs. Even among users who report notable personal productivity gains, few see meaningful improvements in broader team collaboration.
The Real Opportunity to Build Trust: Knowledge Sharing and Skill Building
Where AI shines today is in giving developers, especially those newer to the role, access to knowledge and the ability to build skills more rapidly. AI coding tools and LLMs have lowered the barrier to entry for some developers. These tools simplify complex coding concepts and can explain alternative coding suggestions, which promotes learning and helps developers stay current in fast-moving areas of technology.
Leaders who harness AI's strength in education and knowledge sharing can equip their teams for greater long-term success, but they must be intentional about addressing the trust gap. To do this, leaders and practitioners should shift their focus toward building human-centered processes that prioritize transparency, accuracy, and continuous learning. Below are actionable steps that leaders can implement:
■ Create a Culture of Knowledge Generation and Verification: Your first task should be to create an environment where your developer teams routinely share their AI-derived insights, capture lessons learned, and collaborate on outcomes with one another. Create and encourage the use of knowledge repositories. Also, implement structured peer reviews and foster open dialogue to refine AI usage across your DevOps teams. A trusted repository of human knowledge and shared experience will help developers learn to use, and trust, AI in their day-to-day tasks.
■ Balance Human Oversight with Observable AI Automation: Keep people in charge of high-risk software stages such as deployment, monitoring, and strategic planning. Direct AI toward low-risk work (documentation, routine scripting, and simple code generation) while ensuring humans validate outputs to safeguard quality and reliability, and can roll back quickly when needed. Empower engineers to use AI for the work they want done but don't want to do themselves, such as translating code from one programming language to another, automating steps in the build pipeline, or analyzing large amounts of data. Used this way, AI delivers immediate value, builds trust over time, and frees engineers to find new ways to harness it. Additionally, your AI-powered automation must be transparent and secure: prioritize monitoring tools and robust data privacy rules around your AI so your team can quickly identify and resolve issues.
■ Invest in Developer AI Education: Given that developers' top frustration is "almost right" AI outputs, engineering leaders can address this by investing in prompt training. However capable a model is, its response is limited by how it is prompted, which can make or break the result. Training developers to prompt effectively should be mandatory; it will help teams become proficient, confident AI users, reducing errors and building trust.
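The oversight split described above can be made concrete with a simple routing rule: low-risk AI output flows to automated checks, while anything high-risk (or unrecognized) is held for human review. The sketch below is a hypothetical illustration, not a prescribed implementation; the task categories and risk labels are assumptions for the example.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-generated work.
# Low-risk artifacts go straight to automated checks; high-risk or
# unrecognized artifacts are routed to a human reviewer.

RISK_LEVELS = {
    "documentation": "low",
    "routine_script": "low",
    "deployment_config": "high",   # keep humans in charge of deployment
    "project_plan": "high",        # and of strategic planning
}

def route_ai_output(task_type: str) -> str:
    """Return how an AI-generated artifact should be handled."""
    # Default to "high" so anything unfamiliar gets human eyes on it.
    risk = RISK_LEVELS.get(task_type, "high")
    return "auto_check" if risk == "low" else "human_review"

print(route_ai_output("documentation"))      # auto_check
print(route_ai_output("deployment_config"))  # human_review
```

The key design choice is the fail-safe default: an unknown task type is treated as high-risk, so automation can expand task by task only as the team's trust in each category is earned.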
Looking Ahead: Turning Skepticism into Shared Advantage
At the end of the day, trust will decide whether AI becomes a force multiplier or a drag on software development. The developers who pull ahead will be the ones who treat every AI interaction as both output and input: output that accelerates delivery, and input that feeds a team's living knowledge base. Instead of chasing the latest AI tools and feature sets, invest in workflows that capture prompts, decisions, and post-mortems so hard-won lessons compound over time. Pair observability with active peer review to keep humans (and their context) at the center of critical tasks. As your developers refine their questions and prompts, have them share what they learn with others. AI will learn with them, becoming more accurate, closing today's trust gap, and opening tomorrow's runway for innovation.
Industry News
JFrog announced its partnership with iZeno Pte Ltd, a Singapore-headquartered enterprise technology solutions provider.
Red Hat announced an expanded collaboration with Google Cloud to help organizations accelerate application modernization and cloud migrations.
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced that Fivetran has contributed SQLMesh, an open source data transformation framework, to the Foundation.
Check Point® Software Technologies Ltd. released the AI Factory Security Architecture Blueprint — a comprehensive, vendor-tested reference architecture for securing private AI infrastructure from the hardware layer to the application layer.
CMD+CTRL Security won the following awards from Cyber Defense Magazine (CDM), the industry’s leading electronic information security magazine: Most Innovative Cybersecurity Training and Pioneering Secure Coding: Developer Upskilling.
Check Point® Software Technologies Ltd. announced the Check Point AI Defense Plane, a unified AI security control plane designed to help enterprises govern how AI is connected, deployed, and operated across the business.
Oracle announced the latest updates to Oracle AI Agent Studio for Fusion Applications, a complete development platform for building, connecting, and running AI automation and agentic applications.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced that Istio has launched a host of new features designed to meet the rising needs of modern, AI-driven infrastructure while reducing operational complexity.
Chainguard announced Chainguard Repository, a single Chainguard-managed experience for pulling secure-by-default open source containers, dependencies, OS packages, virtual machine images, CI/CD workflows, and agent skills that have built-in, intelligent policies to enforce enterprise security standards.
Backslash Security announced new cross-product support for agentic AI Skills within its platform, enabling organizations to discover, assess, and apply security guardrails to Skills used across AI-native software development environments.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the graduation of Kyverno, a Kubernetes-native policy engine that enables organizations to define, manage and enforce policy-as-code across cloud native environments.
Zero Networks announced the Kubernetes Access Matrix, a real-time visual map that exposes every allowed and denied rule inside Kubernetes clusters.
Apiiro announced AI Threat Modeling, a new capability within Apiiro Guardian Agent that automatically generates architecture-aware threat models to identify security and compliance risks before code exists.
GitLab released GitLab 18.10, making it easier and more affordable to use agentic AI capabilities across the entire software development lifecycle.




