Zyphra is building open superintelligence in the heart of San Francisco and looking for more exceptional people to join our fast-growing team. We’re pushing the frontier of AI and hiring across research, engineering, product, and go-to-market. Join us! jobs.ashbyhq.com/zyphra
Zyphra
Technology, Information and Internet
San Francisco, California · 2,654 followers
About us
Zyphra is a full-stack AGI company based in San Francisco, California.
- Website
- https://zyphra.com/
- Industry
- Technology, Information and Internet
- Company size
- 51-200 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
Locations
- Primary: San Francisco, California 94105, US
- London, England, GB
Updates
-
Zyphra reposted this
ZYPHRA TECHNOLOGIES - COULD BE A GAME CHANGER - THOUGHT-TO-TEXT FOR BCIs 🧠

Zyphra just released ZUNA, a foundation model trained on real-world EEG data. This could set it up to become part of the standard EEG preprocessing pipeline that downstream models depend on, whether those models are for seizure detection, sleep staging, cognitive state classification, or future BCI systems.

In the announcement, the company describes ZUNA as “an early technical foundation for thought-to-text,” which they define as direct communication between human thought and AI systems enabled by brain-computer interfaces (BCIs). 🧠 They believe the next major modality in AI, beyond language, audio, and vision, will be thought-to-text enabled by noninvasive BCIs. That’s the long-term vision.

⚡ GAME CHANGER? The more immediate move is simpler and more practical, and it may matter more for EEG preprocessing than for BCI decoding, at least for now. If adopted broadly, it could become part of the standard layer that downstream models are built on. First adopters will likely be research labs and noninvasive BCI startups.

ZUNA focuses on reconstructing messy EEG data. EEG is valuable and widely used, but in practice channels drop out, signals get noisy, and electrode layouts differ across systems. Most teams still patch those gaps with standard interpolation. ZUNA replaces that step with a learned reconstruction model: instead of mathematically filling in missing channels, it estimates what the full signal likely looked like, based on patterns learned across diverse datasets and hardware configurations.

👉 The goal is straightforward: improve the quality and consistency of the EEG signals themselves. Zyphra’s CEO, Krithik Puthalath, has positioned the company around open models and infrastructure, and releasing ZUNA under an open license fits that strategy. Making it easy to adopt lowers friction and increases the chances that it becomes embedded in real workflows.
⁉️ The real test is whether EEG device companies and clinical neuro software teams start treating learned reconstruction as acceptable, and whether it consistently performs better than established interpolation methods across different datasets and devices. The thought-to-text ambition will get attention. The more grounded question is whether improving signal reconstruction is what ultimately accelerates practical noninvasive BCI development.

⚡ If you're in this space: what degree of trust and acceptance might this get? Is the tried-and-trusted approach better? What are the risks, implications, and barriers to adoption of learned reconstruction? ⚡ Let me know. https://lnkd.in/eMBWC2eD
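To make the contrast concrete, here is a toy sketch of the classical step the post describes: estimating a dropped EEG channel as a distance-weighted average of its neighbors. Everything here is a simplifying assumption (a flat 2D electrode layout, inverse-distance weights, made-up data); real toolchains typically use spherical spline interpolation, and ZUNA's learned reconstruction replaces this step entirely.

```python
import numpy as np

def interpolate_missing_channel(signals, positions, missing_idx):
    """Estimate a dropped channel as an inverse-distance-weighted
    average of the remaining channels (toy stand-in for the classical
    interpolation that learned reconstruction would replace)."""
    good = [i for i in range(len(positions)) if i != missing_idx]
    dists = np.linalg.norm(positions[good] - positions[missing_idx], axis=1)
    weights = 1.0 / dists          # closer electrodes count more
    weights /= weights.sum()       # normalize to a weighted average
    return weights @ signals[good]

# 4 hypothetical electrodes on a unit circle; channel 2 is "dropped"
positions = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
signals = np.array([[0.2, 0.4], [0.1, 0.3], [0.15, 0.35], [0.05, 0.25]])
estimate = interpolate_missing_channel(signals, positions, missing_idx=2)
print(estimate.shape)  # one reconstructed time series, here of length 2
```

The weakness this illustrates: the estimate is always a blend of neighbors, so any activity unique to the missing electrode is unrecoverable, which is exactly the gap a model trained on full-montage data tries to close.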
-
Zyphra reposted this
Today we’re releasing ZUNA, our first BCI foundation model for EEG data: a significant milestone in the development of noninvasive thought-to-text and a key modality in our mission to create human-aligned superintelligence. ZUNA is fully open source under Apache 2.0.

ZUNA delivers immediate value for EEG practitioners working with real-world data affected by noise, channel dropout, and sparse electrode coverage. It is also well suited for those looking to upgrade EEG signal quality without changing hardware.

ZUNA is a 380M-parameter diffusion autoencoder built for EEG channel reconstruction. While EEG data is widely available and noninvasive, it’s often messy and incomplete. ZUNA reconstructs high-fidelity brain signals from imperfect data, improving diagnostics, research workflows, and BCI applications. It also predicts missing channels from sparse inputs and electrode coordinates, scaling seamlessly from consumer headsets to 256-electrode research systems with no retraining.

Organizations or researchers interested in collaborating with us to improve future versions for specific needs or use cases should reach out.

Learn more:
- Technical paper: https://lnkd.in/gFUTdWsH
- Hugging Face: huggingface.co/Zyphra/ZUNA
- Technical blog: zyphra.com/post/zuna
- GitHub: github.com/Zyphra/zuna
- Install: pip install zuna
-
Zyphra reposted this
🚀 Excited to Share a New Chapter

I’ve joined Zyphra as a Member of Technical Staff, where I’ll be working on post-training research and alignment for our next-generation models. Zyphra’s mission is ambitious and inspiring: to build the best open-source AI models in the world, and I’m grateful for the opportunity to contribute to it.

Check out some of our recent releases here:
🔗 Hugging Face: https://lnkd.in/gmzyHv_G
🔗 Twitter/X: https://x.com/ZyphraAI

Our latest release, ZAYA-1 Base (https://lnkd.in/gK2sFxUE), is especially exciting: it’s one of the first frontier-quality models trained entirely on integrated AMD hardware, demonstrating a compelling alternative to traditional compute stacks.

We’re hiring across multiple roles. If you’re passionate about open-source AI, scalable training, or post-training research, feel free to reach out to me or Krithik Puthalath. Looking forward to what we build next. 🚀
-
Zyphra reposted this
I couldn’t be more proud of the Zyphra team today. ZAYA1 marks a major milestone not just for our company, but for the entire AI ecosystem — the first large-scale foundation model trained on a full-stack AMD platform (compute + networking + ROCm software), in close partnership with AMD and IBM. This work proves for the first time that AMD is a viable end-to-end platform for frontier AI training. This achievement is the result of relentless engineering, deep research, and a shared belief that efficiency, co-design, and open innovation are the future of AI. A special thank you to Beren Millidge, Quentin Anthony, and Tomás Figliolia, PhD for leading the research and systems architecture that made this breakthrough possible. And thank you to Steven Brook and Paul White for driving the strategic business development efforts that brought these partnerships together. To the entire Zyphra team — research, engineering, product, BD, and operations — I’m deeply proud of what we’ve built together with our close partners at AMD and IBM. We’re just getting started. Philip Guido Vamsi Boppana Emad Barsoum Andrew Dieckmann Negin Oliver Mathew Hein Lisa Su Alan Peacock Brendan Kinkade J. Jubran Kareem Yusuf Ph.D Arvind Krishna
A major milestone in frontier-scale AI just landed, powered by AMD. Zyphra has successfully trained ZAYA1-Base, the first large-scale Mixture-of-Experts (MoE) foundation model trained entirely on the AMD platform, from compute to networking to software. This achievement validates what we’ve been building toward: AMD Instinct MI300X GPUs + Pensando Pollara networking + ROCm software = a production-ready, high-performance alternative for large-scale AI training.

Together with Zyphra and IBM, we co-designed and deployed a cluster delivering 750+ PFLOPs (Max Achievable FLOPs), leveraging 192 GB of HBM memory, 400 Gbps Pollara NICs, and a fully optimized ROCm software stack. The result:
⚡ A frontier-scale MoE model (8.3B total / 760M active parameters)
⚡ Competitive reasoning, math, and coding performance
⚡ Efficient long-context training (up to 32K)
⚡ Breakthroughs in networking, kernels, and system design

This proves what’s possible when the AMD ecosystem comes together. The AMD platform is ready for the next wave of AI innovation. And this is just the beginning. Read the blog from Emad Barsoum and Karim Bhalwani here: https://lnkd.in/gDwvcNAn
-
With the release of ZAYA1-base, Zyphra is excited to announce the first large-scale MoE model trained on an integrated AMD hardware, software, and networking stack. ZAYA1-base demonstrates competitive performance against leading open-source models despite a fraction of their active parameter count and time to first token.

Learn more at these links:
- Press release: https://lnkd.in/gfqkJPnw
- Technical blog: zyphra.com/post/zaya1
- Whitepaper: arxiv.org/abs/2511.17127
- Hugging Face: https://lnkd.in/gH4WUgwY

Thanks to AMD and IBM for their support and collaboration on this project.
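For readers unfamiliar with the total-vs-active parameter distinction behind the 8.3B / 760M figures, here is a minimal NumPy sketch of top-k Mixture-of-Experts routing. The expert count, dimensions, and gating scheme below are illustrative assumptions only, not ZAYA1's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: 8 experts, but each token is processed by only 2 of them,
# so the "active" parameter count is a fraction of the total.
n_experts, top_k, d_model = 8, 2, 16
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    logits = x @ router                    # router scores one token against all experts
    top = np.argsort(logits)[-top_k:]      # keep only the top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                   # softmax over the selected experts
    # only top_k expert matrices are ever multiplied, the rest stay idle
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

In this toy setup 2 of 8 experts run per token, mirroring (at miniature scale) how a model can hold 8.3B parameters while spending only ~760M of them on any given token.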
-
Today we’re excited to announce our partnership with IBM and AMD to power the next era of open-source, enterprise superintelligence. Stay tuned for our next-generation frontier multimodal foundation models trained on IBM Cloud infrastructure with AMD GPUs! Read more at the blog post here: https://lnkd.in/e97r-TAw
-
Zyphra is releasing our first RL (reinforcement learning) based reasoning model, ZR1-1.5B. This small but powerful reasoning model excels at both math and code, making it one of the best models in these categories for its size. It also uses 60% fewer reasoning tokens than comparable models. 🆓 Apache 2.0 license.

On hard coding evaluations, ZR1-1.5B achieves rough parity with Claude3-Opus and Gemma2-27B-instruct and improves over the base R1-Distill-1.5B model by over 50%. ZR1-1.5B is SoTA on coding evaluations for its size, greatly outperforming code reasoning models such as OpenHands. On math benchmarks, we find that ZR1-1.5B performs close to the SoTA DeepScaleR-1.5B despite having more general coding abilities and using fewer reasoning tokens. On many competition math tasks, ZR1-1.5B performs at about the level of Qwen2.5-72B-math-instruct.

👇 How'd we do it? 👇 We train using the PRIME framework and use iterative context lengthening to encourage efficient reasoning traces. Even with responses truncated to 2048 tokens, ZR1-1.5B still outperforms Qwen2.5-Math-7B-Instruct!

Use the model for free in our Playground by going to 'Zyphra Chat' and selecting ZR1-1.5B: playground.zyphra.com
Read our blog: https://lnkd.in/gcZFM2HC
Download the model on Hugging Face: https://lnkd.in/gEjvaXGE
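The iterative context lengthening idea can be sketched as a stage schedule that grows the response-token budget as training progresses, truncating reasoning traces to the current budget. The stage lengths and truncation rule here are hypothetical; Zyphra's exact PRIME-based recipe is in the linked blog.

```python
# Toy sketch of iterative context lengthening: early stages force short,
# efficient reasoning traces; later stages relax the budget.
def truncate(tokens, max_len):
    """Clip a reasoning trace to the current stage's token budget."""
    return tokens[:max_len]

schedule = [1024, 2048, 4096]   # hypothetical per-stage token budgets
response = list(range(5000))    # stand-in for a long reasoning trace

for stage, max_len in enumerate(schedule, 1):
    trace = truncate(response, max_len)
    print(f"stage {stage}: train on traces up to {len(trace)} tokens")
```

The design intuition, as the post frames it, is that a tight early budget rewards models for reaching answers in fewer tokens, which is why the released model stays strong even when responses are capped at 2048 tokens.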