AI Has Landed. And it might be here "to serve humankind," but not in a good way.
Enterprise AI has officially moved past "Should we?" and landed squarely in "Cool, now who's actually in charge?"
Welcome back to The Point in 2026. I've refactored it a little to be tighter, but still full of analysis to help you make sense of what happened on a Tuesday.
Three signals worth paying attention to:
1. Red Hat & NVIDIA: Making Enterprise AI Less Fragile (Finally)
Red Hat and NVIDIA expanded their partnership around rack-scale AI infrastructure — validated stacks, lifecycle tooling, the works. The pitch: reduce the friction that kills AI projects between proof-of-concept and production.
Stefan's Take: This isn't about faster GPUs. It's about fewer roadblocks to deployment.
At HumanX, we've seen that most enterprise AI failures don't happen because the model underperformed but because the surrounding infrastructure was brittle, undocumented (nearly 100% of the time!), and held together by three contractors and a prayer. Red Hat and NVIDIA are going after that ugly middle layer: OS stability, drivers, orchestration, day-two operations.
To sum it up: they're trying to make AI boring again, so that you trust it the way you trust a light switch. You flip it, the light comes on, and the thing that turns on isn't a blender at your ex-wife's house suddenly spinning up.
If they pull it off, enterprises stop treating AI like an exotic science project and start treating it like real infrastructure: something you can deploy, monitor, secure, and explain to an auditor without sweating through your shirt.
I think we all get that enterprise advantage doesn't come from the fanciest model. It comes from the stack that breaks (or hallucinates) the least at 3am.
Red Hat wants to be the Linux of enterprise AI: predictable, supportable, deeply unglamorous. NVIDIA owns the performance ceiling. Together, they're making a play to be the default. They want to be the thing enterprises standardize on so they can stop arguing about plumbing and start shipping value.
If your AI stack still looks like a Frankenstein of open-source repos, custom scripts, and tribal knowledge, you don't have an AI strategy. You have a future support ticket.
2. SAP's Tabular AI Model: Teaching AI the Language of Business
SAP introduced RPT-1, a transformer built specifically for tabular enterprise data: ledgers, invoices, inventories, supply-chain records. Unlike text-centric LLMs, this thing is trained to understand structured business data natively.
Stefan's Take: Wait, don't stop reading just because I mentioned tabular data and SAP! This is really, really important. As you know, most LLMs treat spreadsheets like an awkward cousin: technically related, but deeply misunderstood. SAP is asking: what if the model spoke the native grammar of enterprise data from day one?
For the last couple years, enterprises have been duct-taping LLMs onto structured data and calling it progress. It sort of worked as long as you didn't expect it to work 100% of the time.
RPT-1 is different because it doesn't interpret business data - it natively understands it. That matters when you're optimizing cash flow, forecasting demand, or explaining variance to a CFO who does not care how epic your prompt was.
This is AI growing up and learning the language of the people who actually run companies.
It will certainly not be as fluid or easy to use as prompt-a-palooza, but structured business data deserves purpose-built intelligence. I think this is one way AI moves from "interesting" to indispensable.
3. WitnessAI Raises $58M to Secure Enterprise AI Agents
WitnessAI raised $58M to build security and monitoring tools for AI agents — systems that don't just assist humans, but act autonomously across data, workflows, and applications.
Stefan's Take: I know, agents were supposed to free us from all boring work in 2025. It's like cold fusion - always ALMOST there.
But just because agents aren't ready doesn't mean enterprises aren't giving machines real access — credentials, APIs, decision rights — and then realizing they don't actually know how to observe or constrain those systems once they're running. WitnessAI is positioning itself as the guardrail layer for a new class of risk (namely, machines that do things, not just suggest them).
The importance of this can't be overstated: most companies are securing AI like it's a fancy calculator, not like it's a junior employee with root access.
Once AI agents can trigger workflows, touch systems of record, or make decisions that affect customers, the risk profile changes entirely. "We trust the model" is a lawsuit waiting to happen.
WitnessAI's raise shows someone in the market finally gets this. The next generation of enterprise AI won't be differentiated by autonomy alone, but by governed autonomy: clear permissions, observability, rollback, and accountability.
If you can't answer "what did the AI do, and why?" in real time, you're not ready for agentic systems.
The Bottom Line
The hype phase of enterprise AI is over.
What's left is the hard, important work: infrastructure that doesn't break, data that AI can actually understand rather than 'reason' its way through, and governance that scales without breaking autonomy.
We'll be diving into so much of this at HumanX 2026 in SF, just 80 days away! Get your passes now before they sell out: http://humanx.co/register
Like this new, more concise newsletter?