Theori

Computer and Network Security

Austin, TX 3,335 followers

Empowering Innovation with Security.

About us

Theori is a cybersecurity firm with a mission to make the world more secure by conquering the most difficult cybersecurity challenges. We empower innovation with security. As a leader in offensive cybersecurity, we always strive to stay one step ahead of attackers. We secure the future by solving the impossible with technology-driven approaches, and serve as a hub that leads positive impact and innovation in the cybersecurity field.

Website
http://theori.io
Industry
Computer and Network Security
Company size
51-200 employees
Headquarters
Austin, TX
Type
Privately Held
Founded
2016
Specialties
Offensive Security, Security Consulting, Blockchain Security, Vulnerability Research, Bug Bounty, Cybersecurity Education, Security Audit, Penetration Testing, Web3 Security, and Ethical Hacking

Updates

  • View organization page for Theori

    🚨 PSA to all Linux users: Full rundown of Copy Fail (CVE-2026-31431), a logic flaw in the Linux kernel's authencesn cryptographic template. An unprivileged local user can trigger a deterministic, controlled 4-byte write into the page cache of any readable file on the system. 👉 https://copy.fail/ 👉 https://lnkd.in/gCNv4QEx

    View profile for Brian Pak

    732 bytes of Python. Root on every major Linux distribution shipped since 2017.

    Today we disclose CVE-2026-31431 — "Copy Fail" — a logic flaw in the Linux kernel's authencesn cryptographic template. An unprivileged local user can trigger a deterministic, controlled 4-byte write into the page cache of any readable file on the system.

    The same script gets root on:
    • Ubuntu 24.04 LTS
    • Amazon Linux 2023
    • RHEL 10.1
    • SUSE 16

    No race conditions. No per-distro offsets. No version checks. 100% success rate.

    A few things make this one interesting:
    → It doesn't touch disk. The page cache is corrupted in memory, so on-disk checksums and file integrity tools miss it entirely. A disk image won't show that root was taken.
    → The page cache is shared across the host, including across container boundaries. One pod can compromise the entire Kubernetes node. (Part 2 of the writeup covers the container escape.)
    → It's been silently exploitable for ~9 years. The bug sits at the intersection of three changes between 2011 and 2017, each reasonable on its own. Nobody connected the dots.

    How we found it: Taeyang Lee, a Theori researcher who had previously mapped the AF_ALG attack surface in kernelCTF, suspected that scatterlist page provenance was an underexplored source of bugs. He pointed Xint Code — our autonomous vulnerability analysis platform — at the Linux crypto subsystem with a one-line operator prompt. About an hour later, Copy Fail came back as the highest-severity finding. The same scan surfaced additional high-severity bugs, still in coordinated disclosure.

    This is the workflow we keep proving out: a researcher (optionally) sets the direction, Xint Code covers the depth and breadth no human team has bandwidth for.

    Coordinated disclosure with the Linux kernel security team wrapped cleanly — the fix landed in mainline on April 1. If you run Linux infrastructure, please patch.

    Full root-cause analysis, demo, and exploit:
    📄 https://copy.fail
    🔗 https://code.xint.io

  • Dawn of a new cybersecurity era ✨

    View organization page for Xint

    398 followers

    'Before [Xint security researcher Tim Becker] started working on automatic bug finding with AI, he worked on vulnerability research, finding zero days and reporting them to maintainers. He said it used to take him weeks or months to find a high-impact vulnerability in a brand-new codebase, and now it only takes hours. “I just drop the code into our AI bug-finding tool [Xint] and in a couple hours I get a report with a bunch of candidate vulnerabilities, and most of them end up checking out and being real issues,” he said. “The bar to diving into a new million-line codebase and finding a bug is so much lower than it used to be.”'

    A great report from The Verge, quoting Xint security researcher Tim Becker, on the new era of cybersecurity, where even non-technical attackers can use AI to find the weaknesses in organizations' apps and networks faster and at a scale never thought possible before. https://lnkd.in/enwBztJw

  • Theori reposted this

    We're excited to join OpenAI's Trusted Access for Cyber program. This selective program ensures frontier capabilities are available to organizations defending the internet's critical infrastructure. For Xint customers, this means their applications and codebases are secured with the most advanced models available.

  • Is your AppSec strategy ready for the AI era? 🤖 Join us for a hands-on session on how to avoid common AI bug-finding pitfalls and achieve better results with scaffolding. 👉 https://lnkd.in/ge-xKbHF

    View organization page for Xint

    Join award-winning security researcher Tyler Nighswander on Techstrong TV for this hands-on workshop for product security practitioners. In this workshop he will: 1) go deep into how AI-native AppSec differs from traditional tools and methods, 2) share the pitfalls of poorly harnessed AI bug finding, and 3) demonstrate how the scaffolding (not the model) is what delivers superior results for product security in the real world. https://lnkd.in/dTjnT2ba

  • View organization page for Theori

    Xint vs. Anthropic Mythos: Who wins in the real world? 🥊 Finding a bug is easy; finding the right bug in 9 million lines of code is the hard part. Check out our latest whitepaper on how Xint found every single bug in the Mythos report, plus 12 additional mid- to high-severity vulnerabilities. #Xint #Mythos

    View organization page for Xint

    Anthropic is (rightfully) generating a lot of attention for Mythos’s ability to find 0-days, BUT the hard problem is not whether an LLM can recognize a bug when pointed at it. It is whether a system can find the right code to examine across a 9-million-line codebase, distinguish the one real vulnerability from the hundreds of theoretical weaknesses the model will flag along the way, and deliver output a developer can act on without wasting a week on false positives. This is something Xint has been doing since our wins at AIxCC and #ZeroDayCloud last year.

    We wanted to see whether publicly available models with the right scaffolding would match the performance of the latest limited-release frontier model under **real-world conditions**. In this research paper we not only found all the same bugs highlighted in Anthropic’s report, but also found an additional 12 mid- to high-severity vulnerabilities not included in their public disclosures.

    Check out the full report here: https://lnkd.in/gSwsNuJe

  • Theori reposted this

    Yes, the quality of our results wins head-to-head competitions against the world's best hacker teams, but it's our practical experience in offensive security that makes Xint frictionless and actually useful for product security:

    🤖 Xint Code requires no code packaging or harnessing before it analyzes your codebase. Just upload your repo and, in less than 12 hours, get pentest-like results across your entire codebase.

    🥷 Xint Web is just as easy. Share the URL you want to analyze (with or without sign-in credentials) and Xint Web acts like a real hacker, attacking the application and finding the weak points with the highest exploit payouts.

  • Theori reposted this

    The Theori team has identified critical Remote Code Execution (RCE) and Local Privilege Escalation (LPE) vulnerabilities in femtocell equipment managed by a major Korean telecom vendor and widely deployed across the country.

    You may recall the "micropayment fraud incident" in Korea last year, where attackers drove around with rogue Chinese-made femtocells to intercept nearby victims' mobile communications. The vulnerabilities we discovered go far beyond that scenario — they enable remote compromise of every affected device in operation, both domestically and internationally, without any physical proximity. Once compromised, an attacker could intercept and manipulate network traffic of mobile devices connecting through these femtocells, and potentially leverage tens of thousands of internet-connected "zombie femtocells" for large-scale DDoS attacks.

    We completed our research, developed working exploits, and demonstrated full exploitability back in September–October of last year. We had hoped to work closely with the manufacturer and carrier to verify, remediate, and coordinate disclosure as quickly as possible. Unfortunately, despite our efforts, the process has been repeatedly delayed — far beyond what we initially expected.

    Given the national security implications of leaving these vulnerabilities unpatched, and after months of waiting for cooperation that may never come, we've decided to formally report our findings to KISA (Korea Internet & Security Agency) and the National Intelligence Service. We remain committed to responsible disclosure and look forward to working with the relevant parties to protect users and critical infrastructure.

  • Theori reposted this

    Source code for Claude Code has been leaked*. Naturally, the first thing we did was run it through Xint Code. Unsurprisingly, the vibe-coded app surfaced quite a few vulnerabilities within minutes, including vuln-101-level bugs (e.g., .includes() instead of .startsWith()). I guess Anthropic wasn't kidding when they said "90% of the code written at Anthropic is written by Claude."

    What I'm really curious about is where Anthropic draws the security boundary. Claude Code asks whether you trust the workspace at the very start, and you essentially can't use the tool without consenting. From that point on, all responsibility shifts to the user. Consent once, and running Claude on a directory becomes a 0-click RCE vector in multiple ways. The question is whether Anthropic considers these security vulnerabilities at all, or just user responsibility.

    As AI-generated code becomes the norm, the question isn't just who owns the security responsibility; it's whether anyone is reviewing this code at all. AI generates code at a pace that manual review can't match. Scalable, automated security audits aren't optional anymore; they're the ONLY way to keep up.
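    The .includes()-instead-of-.startsWith() bug class mentioned above is worth spelling out. A minimal Python analogue (the `in` operator plays the role of .includes(), `str.startswith` the role of .startsWith()), using a hypothetical trusted-directory check for illustration:

```python
TRUSTED_DIR = "/home/user/project"  # hypothetical trust boundary

def is_trusted_buggy(path):
    # The .includes()-style mistake: a substring match succeeds
    # anywhere in the path, so an attacker can embed the trusted
    # prefix inside a path they fully control.
    return TRUSTED_DIR in path

def is_trusted_fixed(path):
    # The .startsWith()-style check: anchored at the beginning,
    # with a trailing separator so "/home/user/project-evil"
    # does not slip through either.
    return path == TRUSTED_DIR or path.startswith(TRUSTED_DIR + "/")

attacker_path = "/tmp/attacker/home/user/project/payload"
print(is_trusted_buggy(attacker_path))  # True: the check is bypassed
print(is_trusted_fixed(attacker_path))  # False: rejected
```

    The same one-token mistake turns a trust-boundary check into a bypassable substring search, which is why it counts as a "vuln 101" finding.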



Funding

Theori: 2 total rounds

Last Round

Series unknown

US$ 15.9M

See more info on Crunchbase