Ex-Snowflake engineers say there’s a blind spot in data engineering — so they built Tower to fix it
AI coding assistants might have made it easier to generate software, but getting that code to run reliably — packaging it, deploying it, scaling infrastructure, collecting logs, and fixing failures when they occur — is where the real challenge lies.
That gap is what Tower, a fledgling German startup founded by former Snowflake engineers, is trying to address. The company is building a platform designed to run Python data pipelines and AI-driven data applications in production environments – something that co-founder and CEO Serhii Sokolenko calls the “last mile” of modern data engineering.
“Shipping the first version of a data app to production is still very difficult,” Sokolenko tells The New Stack. “Imagine you just finished working with Claude on the code of your data pipeline. The next step is much harder: You need to package the code, deploy it into some cloud environment, provision a bunch of AWS resources, or buy and learn tools like Spark or Kubernetes. You also need to instrument the app so it emits logs and metrics, make sure secrets are not stored in Git, and scale resources up or down depending on how much data is coming in.”
The problem doesn’t end once the code ships, either. Production failures have to be traced back to the code, translated into fixes, and pushed through review before the system can be redeployed.
Tower’s approach is to keep that feedback loop inside the platform itself, integrating with coding assistants so developers can ship Python data applications into a managed runtime, observe how they behave in production, and use those signals to improve the code.
The story so far
Founded out of Berlin in late 2024, Tower is the handiwork of two founders with long résumés in the data infrastructure and cloud spheres.
Sokolenko has held senior product roles at Microsoft, AWS, Google, Snowflake, and Databricks, where he worked on database engines, streaming analytics, and large-scale data processing systems.
Brad Heller, the company’s CTO, has spent much of his career building infrastructure platforms as an engineer and founder. Before Tower, he was a senior engineering manager at Snowflake, and earlier co-founded data visualization startup Reflect Technologies, which was acquired by Puppet in 2018.
The duo met while working at Snowflake, where they collaborated on performance improvements to the company’s data platform. Tower emerged from their shared view that the current data infrastructure landscape leaves many teams choosing between building complex systems themselves or adopting large vendor platforms.
The situation can be summarized simply: operating a modern data stack is not something teams set up once and forget. It requires ongoing maintenance — patching system images, rotating credentials, keeping clusters healthy, and responding when jobs fail in the middle of the night. The spadework involved in keeping those systems running is often underestimated.

Tower Founders: Brad Heller and Serhii Sokolenko
Of course, large commercial data platforms promise to remove that burden. But the trade-off can be costly and lead to contractual lock-in that smaller teams struggle to justify.
“There’s no real middle ground today – you either build and maintain the data infrastructure yourself, or you hand over your budget (and flexibility) to a heavyweight vendor,” Sokolenko wrote in a blog post last year.
Tower’s pitch is a simpler middle path between those two extremes. Instead of forcing teams to assemble their own infrastructure or commit to a large data platform, Tower provides a managed environment where developers can run Python data pipelines and applications directly.
This reflects a broader shift in the data ecosystem: most modern data processing and AI tooling is built as Python libraries. Frameworks such as dbt, Polars, dlt, and LangChain all run within the same runtime and are often combined into a single Python application.
Tower builds on that reality. Developers can run existing Python code largely unchanged — often with nothing more than a configuration file describing the application and its parameters. Behind the scenes, Tower creates a virtual environment for the code, packages it into a container, and runs it inside a Kubernetes cluster.
“But users do not need to think about any of that,” Sokolenko says. “They just start application runs, and we handle the rest.”
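To make that concrete, here is a hypothetical sketch of what such an application descriptor might look like. The file name, field names, and layout are illustrative assumptions for this article, not Tower’s documented configuration format:

```toml
# Illustrative only — field names and layout are assumptions,
# not Tower's documented configuration format.
[app]
name = "orders-pipeline"        # how the app appears in the platform
script = "./pipeline.py"        # entry point to package and run
source = ["./*.py", "./requirements.txt"]  # files to bundle into the container

[[parameters]]
name = "batch_date"             # runtime parameter passed to the app
default = "2025-01-01"
```

The point of a file like this is that everything below it — building the virtual environment, producing the container image, scheduling it onto Kubernetes — is the platform’s job, not the developer’s.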
Tower released an early alpha in January 2025, followed by a beta launch in August of the same year. Along the way, Tower says it has begun attracting early users, including teams at Ford, Pyne, and CosmoLaser.
Today, the startup also announced a $6.4 million early funding round from investors including Speedinvest and DIG Ventures, as it prepares the platform for general availability.
Meet me in the middle
The kind of customer Tower is targeting likely sits somewhere between early-stage experimentation and full-scale data infrastructure. A typical example, Sokolenko says, might be a midsize manufacturer running legacy business software and relying on a small internal data team.
At that stage, companies face a dilemma: modernize their analytics stack themselves — assembling orchestration tools, compute clusters, and storage layers — or adopt a large platform built for much larger engineering organizations.
“This is a real kind of company we work with,” he explained. “At some point, you need to make a serious choice: modernize your analytics so you can compete faster, or risk becoming irrelevant.”
For those teams, the question is whether to build a complex data platform that they may struggle to operate, or adopt something simpler that still allows them to modernize their analytics workflows.
“Maybe you have only one person in the company who is really a data scientist,” Sokolenko continued. “Do you still go and buy a huge data platform with streaming analytics, real-time event processing, and large Spark clusters, just to feel like you are now ‘modern’?”
Tower’s bet is that many organizations sit in that middle ground — beyond ad-hoc scripts, but without the engineering headcount to run large distributed data systems.
“They are past the stage of random scripts, but they also do not have a team of Java Spark engineers from 2015,” Sokolenko says.
Running data workloads
Rather than requiring teams to manage Spark clusters or Kubernetes themselves, Tower lets developers run Python workloads directly. A simple configuration file defines how an application runs, while the platform handles packaging the code, provisioning the runtime environment, and scaling the infrastructure behind the scenes.
The system also supports Apache Iceberg as its storage foundation, allowing customers to keep their data in an open lakehouse format compatible with major analytics engines. That means pipelines executed on Tower can still feed into platforms such as Snowflake or Databricks, rather than locking users into a proprietary storage layer.
It’s worth noting that Tower does overlap somewhat with several existing categories of tooling. Workflow orchestrators such as Apache Airflow and Dagster help coordinate data pipelines across distributed systems. At the other end of the market, platforms such as Databricks and other Spark-based environments provide large-scale infrastructure for analytics workloads.
Tower positions itself somewhere between those two camps. Rather than defining pipelines through external workflow graphs, developers can write orchestration logic directly in Python, with Tower executing those workloads inside its runtime environment.
Put simply, it’s a modern, code-first alternative to legacy orchestrators.
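The “orchestration logic directly in Python” idea can be sketched in a few lines. This is a generic illustration, not Tower’s actual API: the pipeline is just ordinary functions composed in code, rather than a DAG declared in a separate workflow file.

```python
# Generic sketch of code-first orchestration: the pipeline is plain Python
# functions composed directly, not an external workflow graph.
# Function names and data are illustrative, not Tower's API.

def extract() -> list[dict]:
    # Stand-in source; in practice this might call an API or read a table.
    return [{"sku": "A-1", "qty": 3}, {"sku": "B-2", "qty": 5}]

def transform(rows: list[dict]) -> list[dict]:
    # Simple enrichment step: compute a derived column per row.
    return [{**row, "total": row["qty"] * 2} for row in rows]

def load(rows: list[dict]) -> int:
    # Stand-in for writing to storage; returns the number of rows written.
    return len(rows)

def pipeline() -> int:
    # The "orchestration" is just function composition in Python.
    return load(transform(extract()))

if __name__ == "__main__":
    print(pipeline())  # prints 2
```

Because the control flow lives in ordinary Python, branching, retries, or conditional steps are written with normal language constructs instead of a workflow DSL.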
However you look at it, there’s little question that AI is upending how data systems are built — and, in turn, the kinds of operational problems teams face once that code reaches production. Generating a pipeline is easy, but running it reliably is another matter.
“When you run the pipeline in production, and it fails, you have to go back to the AI assistant, fix the issue, get the changes reviewed, and deploy again,” Sokolenko says.
At the same time, he argues, AI-assisted development is expanding the pool of people experimenting with data systems — from product managers to marketers — creating new collaboration dynamics inside engineering teams.
“We are seeing more ‘tech-curious’ non-technical users experimenting with building data systems,” Sokolenko continued. “But this is more complex than generating a Lovable website. This code will run your business.”
That means experienced engineers still play a central role supervising and refining the code produced by AI tools — a workflow Tower is trying to support by feeding runtime logs, metrics, and production signals back into the development process.
Early traction and what comes next
Tower’s early usage numbers suggest the idea is gaining traction. The company says its Python SDK now sees roughly 70,000 downloads per month, while the platform processes more than 200,000 jobs across roughly 30,000 applications.
Many of those users are software companies building their own data-powered products. Tower’s APIs allow them to embed the platform inside their own services, effectively using it as the execution layer for analytics pipelines, jobs, and customer-facing data applications.
Other teams use Tower as their primary data platform, running integration and transformation workflows without assembling a patchwork of orchestration tools, compute clusters, and storage systems. Today, those workloads range from traditional batch pipelines to short-running serverless-style jobs and interactive applications such as notebooks, dashboards, and API endpoints.
With a fresh $6.4 million in the bank, the company is now preparing Tower for general availability while continuing to expand the platform.
“Our goal is to make it very easy for midsize businesses — and for the software vendors building for them — to operate a full data platform on Tower: compute, storage, and orchestration,” Sokolenko says. “And to connect it with your favorite AI agent.”