🚨 AI4 Vegas attendees, don’t miss this 🚨 On August 13 at 10am PT, TensorWave CEO Darrick Horton 🌊 takes the stage to share how to build scalable AI infrastructure for long-term success. Get a front-row look at what we’re building at TensorWave, the AMD GPU Cloud.
TensorWave
Technology, Information and Internet
Las Vegas, Nevada · 6,747 followers
The AI & HPC Cloud powered by AMD Instinct™ Series GPUs. 🌊
About us
TensorWave is a cutting-edge cloud platform designed specifically for AI workloads. Offering AMD MI300X accelerators and a best-in-class inference engine, TensorWave is a top choice for training, fine-tuning, and inference. Visit tensorwave.com to learn more. Send us a message to try it for free.
- Website
- https://www.tensorwave.com
- Industry
- Technology, Information and Internet
- Company size
- 11-50 employees
- Headquarters
- Las Vegas, Nevada
- Type
- Privately Held
Locations
- Primary
- Las Vegas, Nevada, US
Employees at TensorWave
- Darren Haas: Stealth Company, Voltron Data, Amazon, GE, Apple, Siri founder, Change.org founder, Stanford Research, UC Berkeley Labs, IC Community Advocate and…
- Ryan Anderson: IBM CTO for Palo Alto Networks; IBM Architect in Residence, San Francisco; Cambridge University; VC Investor and Advisor
- David Lam: Deep Technology Investor and Board Member
- Andrew Oliver: Internet infrastructure and technology leader
Updates
-
Heading to AI4 in Las Vegas? Stop by Booth 101 and meet the TensorWave team, home of the AMD GPU Cloud built for serious AI.
-
The MI325X isn't just a spec bump over the MI300X: it's a targeted evolution designed to solve real problems for AI teams at scale.

With 256GB of HBM3E memory and 6 TB/s of memory bandwidth, the MI325X allows large models (70B+ parameters) to run entirely in-memory, no sharding required. That means less complexity in your codebase and fewer GPUs needed to get the job done. The increased bandwidth feeds the compute units more data, reducing bottlenecks in memory-intensive tasks. Whether you're doing large-batch inference or training with large context windows, this speed matters.

Importantly, ROCm support is first-class on both the MI300X and MI325X, so existing MI300X-optimized software runs on the MI325X with full compatibility. There's no proprietary lock-in: you can use standard PyTorch, TensorFlow, and other frameworks on ROCm just as you would with CUDA, making integration into your AI stack straightforward.

For AI teams building frontier workloads (LLMs, agents, real-time apps) the MI325X offers more memory, more bandwidth, and better performance per dollar. Full breakdown here 👇
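The "70B+ parameters in-memory" claim is easy to sanity-check with back-of-envelope arithmetic. This sketch assumes bf16 weights (2 bytes per parameter) and ignores KV cache and activation memory, which add to the real footprint:

```python
GIB = 2**30

def weight_memory_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory footprint of model weights in GiB (bf16 by default)."""
    return n_params * bytes_per_param / GIB

# A 70B-parameter model in bf16:
weights = weight_memory_gib(70e9)
print(f"70B model weights: {weights:.1f} GiB")       # ~130.4 GiB
print(f"Fits in 256 GB of HBM3E: {weights < 256}")   # True
```

Even with generous headroom for KV cache at long context lengths, the weights alone leave well over 100 GiB free on a single 256GB accelerator, which is why no sharding is needed.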
-
TensorWave reposted this
#beyondcuda With platforms like Modular and the Mojo programming language, AI is now portable, performant—and no longer chained to a single vendor. Run the same code across #AMD #Instinct & NVIDIA GPUs with equal ease, higher performance, and far less complexity. Unified, open, and efficient. This is what the post-CUDA era looks like. Shoutout to Modular and TensorWave for FREE access to AMD Instinct GPUs on TensorWave.
🚀 HUGE ANNOUNCEMENT 🚀

Super excited about today's launch with AMD. Check out the video below! ⬇️

💥 We've partnered with AMD to truly unlock portability across compute.
🔥 We've enabled AMD silicon, like the MI325, to have EQUAL PERFORMANCE to NVIDIA H200s, all on a SINGLE platform!
⚡ PLUS we announced:
➡️ Mammoth for massive GenAI scaling 📈
➡️ Mojo in Python 🐍
➡️ FREE compute thanks to TensorWave! 🎉

Huge congrats to team Modular, and the world can join us to build the future of AI!

#startup #ai #genai #llm #launch #modular #amd #tensorwave #developer #engineering https://lnkd.in/gDdQUx7F
The Future of Compute Portability
https://www.youtube.com/
-
This month's company-wide Lunch and Learn was hosted by none other than our CEO Darrick Horton 🌊. Darrick took the company through a deep dive on data center architecture and what sets TensorWave's AMD GPU Cloud apart from the rest. 🎓
-
📈 From Poker Chips to Computer Chips: Nevada's High-Tech Breakthrough

Thrilled to see TensorWave's Piotr Tomasik 🌊, Co-Founder & President, featured in the Las Vegas Review-Journal as part of a compelling story on Nevada's booming tech ecosystem. The rise of Nevada's tech scene continues to show that with the right blend of talent, infrastructure, and investment, even places outside coastal tech hubs can lead the AI revolution. 🎰 TensorWave is rooted in Nevada, powering the future of high-performance AI compute.

📌 Check out the feature here: https://lnkd.in/gyMtGW8Y
-
🚀 Run Multi-Node Training on AMD GPUs, the Easy Way

Training large models across multiple nodes doesn't have to be complex. It can be as simple as running a single command. In our latest guide, we show you how to:
✅ Launch containerized multi-node jobs with Pyxis + Slurm (no custom setup required)
✅ Use AMD MI300X GPUs for distributed RCCL or MPI training
✅ Deploy private Docker containers securely via Enroot & squashfs

Whether you're scaling LLM training or benchmarking parallel performance, we give you bare-metal speed with cloud-like simplicity.

📘 Dive into the guide: https://lnkd.in/g7-g6Nrs
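For a rough idea of the workflow described above, here is a minimal Slurm batch sketch. The container image, mount paths, script name, and node/task counts are illustrative assumptions, not taken from the guide; the Pyxis plugin is what adds the `--container-image` and `--container-mounts` flags to `srun`, with Enroot pulling and unpacking the image behind the scenes:

```shell
#!/bin/bash
#SBATCH --job-name=multinode-train   # illustrative job name
#SBATCH --nodes=2                    # two MI300X nodes (assumption)
#SBATCH --ntasks-per-node=8          # one task per GPU (assumption)

# Pyxis launches each task inside the container; the image name
# and training script below are placeholders for your own.
srun --container-image=rocm/pytorch:latest \
     --container-mounts="$PWD":/workspace \
     python /workspace/train.py
```

Submitted with `sbatch`, this runs the same containerized command on every node, so the distributed backend (RCCL or MPI) handles cross-node communication without any manual per-node setup.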
-
🚨 We're hiring at TensorWave!

If you're passionate about building the future of AI infrastructure, and want to work at the intersection of cutting-edge compute and real-world AI applications, we want to hear from you. We're expanding our team across multiple roles, including:
🔹 Infrastructure Engineering
🔹 DevOps & Reliability
🔹 Product Management
🔹 Developer Relations

Why TensorWave?
⚡ We're breaking boundaries with AMD-powered AI cloud infrastructure.
🌍 We support open-source innovation and accessibility in AI compute.
📈 Backed by $143M in funding to scale the next generation of AI workloads.

Be part of the team that's redefining how AI gets built and deployed.
👉 Explore open roles: https://lnkd.in/gFSK7VB3
-
Today, we’re launching Beyond CUDA, a new monthly interview series hosted by TensorWave co-founder and Chief GPU Officer, Jeff Tatarchuk 🌊. This series is about the next wave in AI: open-source, permissionless, unconstrained. Each month, Jeff sits down with the builders pushing AI forward outside the walled gardens and exploring what they’re building, which frameworks matter most, and how the community can stay free, fast, and future-focused. We’re kicking things off with Gregory Diamos, one of the early CUDA engineers, now working on ScalarLM, an open-source framework rethinking how foundation models are built and trained. Watch the full conversation → https://lnkd.in/gcNUWFpd
-
Explore how VOID RUN, the first hyperreal AI film, was built using AMD MI325X GPUs on TensorWave’s cloud. The future of GenAI is here.
What does it take to build an AI-generated film? Go behind the scenes of VOID RUN with Jeff Tatarchuk 🌊 from TensorWave, Alex Mashrabov from Higgsfield AI, and AMD as we explore how we brought Em to life using custom models and MI325 GPUs. Watch the full video here: https://lnkd.in/gxRxEqAM