Cloud computing has become the cornerstone for businesses looking to scale infrastructure up or down quickly according to their needs with a lower total cost of ownership. But far from simplifying the IT executive's life, the expanded use of the cloud is introducing a whole new level of complexity.
Research has found that the average enterprise uses 2.6 public clouds and 2.7 private clouds. Meanwhile, the average business is using 110 different cloud applications in 2021, up from 80 in 2020. Digital transformation has exacerbated the problem, with organizations of all sizes now faced with an abundance of technology 'choice' which is actually only serving to hinder cloud transformations.
It gets even more challenging when IT people need to communicate crucial aspects of an organization's cloud infrastructure to non-technical decision makers. And as more and more workloads are migrated to different clouds, the unique requirements of different processes and the effects of configuration "drift" become clearer.
Put simply, cloud infrastructure is becoming harder to control manually.
The question is, how can organizations start to establish visibility into what are becoming more complex, harder to track environments, and how, with cloud configurations changing all the time, can teams develop repeatable processes that will enable them to take back control of their infrastructure and catch the drift before it gets out of hand?
The Rise of Automation and Infrastructure As Code
First, a brief history lesson. Those of a certain vintage might remember the halcyon days when you had to buy and maintain your own servers and machines. We evolved from this era of computing around 2005 with the widespread adoption of something called virtualization, a way of running multiple virtual machines on a single physical server.
Virtualization not only created infrastructure that was more efficient and easier to manage, it also allowed for the development of new technologies, such as cloud computing — something which has revolutionized the way businesses operate. But alongside all the benefits of the cloud — the flexibility, scalability, and cost efficiencies — organizations soon found themselves encountering scaling problems.
This is because provisioning, deploying, and managing applications across the hybrid cloud is costly, time-consuming, and complex. For one thing, manually deploying instances of an application to multiple clouds with different configurations is prone to human error. Scale too quickly and you might end up missing key configurations and have to start all over again. Fail to configure an instance correctly and it could prove too costly to ever fix.
These problems necessitated the development of Infrastructure as Code (or IaC). With IaC, it is possible to provision and manage infrastructure using code instead of manual processes. This brings greater speed, agility, and repeatability to provisioning, enabling IT teams to automate cloud infrastructure deployment and processes further than ever before.
With IaC, you write a script that will automatically handle infrastructure tasks for you, saving not only time but also reducing the potential for human error. Simple, right? Problem solved! Well, yes and no…
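As a concrete sketch of the idea, here is a minimal (illustrative) Terraform configuration; the provider, region, and bucket name are hypothetical and not taken from the article:

```hcl
# Illustrative IaC example: declare an AWS S3 bucket in code.
# `terraform apply` provisions it; re-running the same file is
# idempotent, so the environment can be recreated repeatably.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # illustrative region
}

resource "aws_s3_bucket" "app_data" {
  bucket = "example-app-data" # hypothetical bucket name

  tags = {
    ManagedBy = "terraform"
  }
}
```

Because the desired state lives in a versioned file rather than in someone's head, `terraform plan` can preview every change before `terraform apply` makes it — which is where the time savings and reduced human error come from.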
Managing IaC at Scale
Expectations often have a habit of not aligning with reality. If you've started using IaC to manage your infrastructure, you're already on your way to making cloud provisioning processes more manageable. But there's a second piece to the infrastructure lifecycle: how do you know what resources are not yet managed by IaC in your cloud? And of the managed resources, do they remain the same in the cloud as when you defined them in code?
Changes to cloud workloads happen all the time. Increasing the number of workloads running in the cloud means an increasing number of people and authenticated services interacting with infrastructure, across several cloud environments. As IaC becomes more widely adopted and IaC codebases become larger, it becomes more and more difficult to manually track whether configuration changes are being accounted for, which is why policies are required to keep on top of everything.
Misconfigurations are, of course, rife across cloud computing environments, but most organizations are not prepared to manually address the issue. If there are differences between the IaC configuration and the actual infrastructure, this is when cloud or configuration drift can occur.
Drift is when your IaC state, or configurations in code, differ from your cloud state, or configurations of running resources. For example, suppose you use Terraform — one of the leading IaC tools — to provision a database without encryption turned on, and a site reliability engineer then goes in and adds encryption using the cloud console. The Terraform state file is no longer in sync with the actual cloud infrastructure, and the code, in turn, becomes less useful and harder to manage.
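The database example above can be sketched as follows; the resource type (an RDS instance) and all names are illustrative assumptions, not details from the article:

```hcl
# Hypothetical sketch: a database provisioned WITHOUT encryption,
# as in the drift example above.

variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_db_instance" "orders" {
  identifier        = "orders-db" # hypothetical name
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  storage_encrypted = false # what the code says

  username = "dbadmin"
  password = var.db_password
}

# If an SRE later enables encryption via the cloud console, the running
# resource no longer matches this file or the Terraform state. Running
#
#   terraform plan -refresh-only
#
# surfaces the discrepancy, and `terraform apply -refresh-only` can
# accept the real-world change back into state.
```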
Now imagine this volume of code, at scale, across multiple clouds. Tricky. So what can be done?
Prescriptive vs Declarative: Establishing an IaC Baseline
The issue lies in the approach.
Today, the majority of organizations are operating in what could be described as a prescriptive way, building environments where processes are still manual, even with the introduction of IaC. People are still needed to connect to platforms to patch, enhance, and configure infrastructure. The issue, as we've seen, is managing the delta between the current automation of a platform and the rest of its life cycle.
A shift is needed to catch the drift. A shift towards a declarative approach that removes people from the equation completely so that what is versioned is what is in production. This removes one of the biggest concerns for organizations, which is how to react to problems when they occur and how to reproduce infrastructure. Establishing an IaC baseline of an environment that is deemed to be the known state means that what is versioned becomes the constant source of trust and the desired state of an organization's infrastructure.
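One way to make the versioned code the constant source of trust is to keep state in a shared, locked backend and check it against reality on a schedule. This is a hypothetical sketch; the bucket and table names are made up, while the `-detailed-exitcode` convention is Terraform's documented behavior:

```hcl
# Hypothetical sketch: store Terraform state remotely so the versioned
# code plus shared state form the single source of truth, with locking
# to keep humans and CI from clobbering each other.

terraform {
  backend "s3" {
    bucket         = "example-tf-state" # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "tf-state-lock" # state locking
  }
}

# A scheduled pipeline can then compare code against reality:
#
#   terraform plan -detailed-exitcode
#
# exit code 0 means no drift, 2 means the cloud no longer matches the
# code; alerting on 2 catches drift before it gets out of hand.
```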
Once this baseline has been established, it opens the door to bigger and better automation and an improved all-round experience for developers, who no longer need to take time away from developing to configure infrastructure and troubleshoot issues. Retro-engineering IaC in a way that lets organizations define what infrastructure needs to look like declaratively would enable them to start taking back control of their IaC and create visibility into drift long before it gets out of hand.
What often first started in many organizations as just one cloud instance has now spiralled, bloomed, morphed, and mushroomed into a hydra's head of cloud applications that often lacks coherence or defined procedures. The only two certainties are that cloud applications are here to stay and that companies that fail to manage their cloud portfolio are going to face some difficult times. The 100-plus cloud applications used by businesses today require active and planned management that eliminates human intervention and provides a consistent approach to the processes involved. The good news is that, through IaC, the tools are at hand to do the job.