Software Analyst Cyber Research


Technology, Information and Media

Toronto, Ontario · 9,173 followers

Empowering cybersecurity leaders with actionable insights and in-depth analysis of the cybersecurity industry

About us

Software Analyst Cybersecurity Research delivers in-depth analysis of the ever-evolving cybersecurity industry. Our mission is to empower security leaders, operators, investors, and cybersecurity professionals with the knowledge they need to navigate this complex field.

Website
https://softwareanalyst.substack.com/
Industry
Technology, Information and Media
Company size
2-10 employees
Headquarters
Toronto, Ontario
Type
Public Company
Founded
2020
Specialties
Finance, Equity Research, stocks, Investing, Technology, and Cybersecurity


Updates

  • 𝗜𝗻𝘃𝗶𝘀𝗶𝗯𝗹𝗲. That's what most enterprise AI agents are to your security team right now.

    At #RSAC this past week, every analyst briefing we attended circled back to the same inflection point: 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝗮𝗱𝗼𝗽𝘁𝗶𝗼𝗻 𝗶𝘀 𝗮𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝗳𝗮𝘀𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗰𝗮𝗻 𝗰𝗮𝘁𝗰𝗵 𝘂𝗽. The identity and ownership risks are not theoretical. They are running on your endpoints today.

    In our new report, we lay out three agent types and their security implications:

    1️⃣ 𝗛𝗼𝗺𝗲𝗴𝗿𝗼𝘄𝗻 𝗮𝗴𝗲𝗻𝘁𝘀 on AWS Bedrock, GCP Vertex, or LangChain give you architectural control. They run in managed infrastructure, but the primary risk is scale: engineering teams can spin up hundreds of agents across multiple repositories and cloud accounts, creating sprawl that outpaces centralized governance.

    2️⃣ 𝗦𝗮𝗮𝗦 𝗽𝗹𝗮𝘁𝗳𝗼𝗿𝗺𝘀 like Microsoft Copilot Studio, Salesforce Agentforce, and ServiceNow let business users build and deploy agents. When a non-technical employee creates an agent with their own credentials, every downstream user inherits that access. This is the maker identity problem.

    3️⃣ 𝗟𝗼𝗰𝗮𝗹 𝘁𝗼𝗼𝗹𝘀 like Cursor, Claude Code, and Windsurf run directly on employee workstations, connecting to community MCP servers that may never touch enterprise infrastructure. They bypass proxies, skip cloud IAM registration, and store credentials in plaintext on the endpoint. This category represents the largest blind spot in most enterprise agent security programs today.

    This is exactly why our latest research by Kevin He, Lauren Place, and Shachar Ram emphasizes why Runtime Identity Security is critical for AI agents. We analyzed vendor positioning across all three categories to help security leaders map out the best solutions: Aembit, aizome, Apono, Astrix Security, ConductorOne, Cyata, Descope, Entro Security, Keycard, Microsoft Entra Community, Noma Security, Runlayer, Oasis Security, Okta, Silverfort, and Token.

    Read the full report here: https://lnkd.in/ewbpTkhZ

    #AgenticAI #AIAgents #CISO #IdentitySecurity #CloudSecurity #RSAC #AIGovernance #CyberSecurity
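The plaintext-credential risk called out for local tools can be illustrated with a quick endpoint-audit sketch. This is a hypothetical example, not tooling from the report: the key names and the sample config shape are our own assumptions, modeled on how local agent tools commonly keep MCP server settings in JSON files under the user's home directory.

```python
import re

# Key names that often hold secrets in local agent/MCP config files
# (illustrative list, not exhaustive).
SECRET_KEYS = re.compile(r"(api[_-]?key|token|secret|password)", re.IGNORECASE)

def find_plaintext_secrets(config: dict, path: str = "") -> list:
    """Recursively walk a parsed JSON config and report keys that
    look like credentials stored in plaintext."""
    hits = []
    for key, value in config.items():
        location = f"{path}.{key}" if path else key
        if isinstance(value, dict):
            hits.extend(find_plaintext_secrets(value, location))
        elif isinstance(value, str) and value and SECRET_KEYS.search(key):
            hits.append(location)
    return hits

# Example: a typical local MCP server entry with an embedded API key
# (hypothetical server name and value).
sample_config = {
    "mcpServers": {
        "community-search": {
            "command": "npx",
            "env": {"SEARCH_API_KEY": "sk-live-example"},
        }
    }
}
print(find_plaintext_secrets(sample_config))
# → ['mcpServers.community-search.env.SEARCH_API_KEY']
```

Pointing a walk like this at the config directories of locally installed agent tools is one low-effort way to start sizing the blind spot the post describes.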

  • Is exposure management finally moving from visibility to real decision support?

    Two weeks from now, at the #Adapt26 Conference in NYC, our own Francis Odum will dig into that question during a fireside chat on "Exposure Management". Drawing on our market research and real-world experience, he'll share why context is becoming essential to making exposure management more effective across security teams.

    In this session, Francis will discuss:
    • Why siloed control planes and static inventories often fall short of delivering decision-grade insight
    • How the market is evolving across discovery, prioritization, and remediation
    • What security leaders should consider as they move from fragmented visibility to more informed action

    If you are in #NYC on April 15th, this will be an important conversation to be a part of! See you there: https://lnkd.in/dsKRFpnB

    #ExposureManagement #CyberSecurity #SecurityLeadership #RiskPrioritization #SecurityOperations #SoftwareAnalystCyberResearch

  • San Francisco, you did not disappoint 🚀 What a week at #RSAC26!

    We were proud to co-host two successful CISO events, including our leadership event on what it really takes to step into the CISO seat. Here are a few takeaways for aspiring leaders that stood out:
    ▪ Know what industry you're getting into before deciding to become a CISO
    ▪ Becoming a CISO is not for everyone. Make sure this is the next step in your career that you actually want to take
    ▪ Becoming a CISO means becoming a BUSINESS LEADER. Remember that when you take this next step

    We want to give a BIG shout-out to our expert panelists Patti Degnan, Assaf Keren, and Helen Patton for sharing such honest and valuable perspectives on their careers, and for their quick tips to the next generation of CISOs, and to our very own Francis Odum for being a wonderful host!

    ***
    Special thanks to the Okta team for their partnership on the event! Stay tuned to hear more about our upcoming events very soon ✨

    #CISO #CyberSecurityLeadership #SecurityLeadership

  • #RSAC2026 recap: the future of "agentic security" with CrowdStrike ✨

    Last week in San Francisco, our team sat down with CrowdStrike for a deep product briefing. We walked away with one clear takeaway: we are entering the "𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗲𝗿𝗮" of cybersecurity. Here is what that looks like in practice:

    1/ AI agents are being deployed faster than security teams can govern them. Employees are using unapproved AI tools. Internal models are being exposed to prompt injection. "AI coworkers" are being onboarded with no security review. This is not a future problem. It is happening now.

    2/ CrowdStrike's response is 𝗙𝗮𝗹𝗰𝗼𝗻 𝗔𝗜 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 & 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 (𝗔𝗜𝗗𝗥), a framework built to govern and protect AI agents as they become part of the enterprise environment. What caught our attention was the 𝗦𝗵𝗮𝗱𝗼𝘄 𝗔𝗜 𝗗𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝘆, which automatically identifies unapproved AI tools in use and monitors the associated risk. Protecting internal models from data poisoning and prompt injection is no longer optional.

    Two other areas stood out:
    ✔️ 𝗙𝗮𝗹𝗰𝗼𝗻 𝗗𝗮𝘁𝗮 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 treats data as something that moves, not something that sits still. This is a meaningful modernization of how DLP has traditionally been framed.
    ✔️ Strategic acquisitions tell a story. SGNL brings 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗶𝗱𝗲𝗻𝘁𝗶𝘁𝘆 𝘃𝗲𝗿𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻. Seraphic Security adds 𝗯𝗿𝗼𝘄𝘀𝗲𝗿 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆. Identity and the browser remain two of the most abused attack surfaces in enterprise environments.

    Why does it matter now? “Good enough” security will not hold as adversaries apply AI to accelerate both scale and speed. The consolidation opportunity here is real, especially if these modules reduce stack complexity without introducing blind spots.

    A genuine thank-you to Jennifer Johnson, Daniel Bernard, Mitesh Shah, and Kristina M. (AR) at CrowdStrike for the detailed product walkthroughs. Your time and depth of knowledge made this briefing exceptionally valuable. 🙏 👏

    ***
    Where are you seeing the biggest gaps today: AI agent governance, Shadow AI sprawl, identity abuse, or browser-borne risk? Let us know in the comments below 👇

    #Cybersecurity #CISO #AISecOps #ShadowAI #IdentitySecurity #CloudSecurity #ZeroTrust #CyberRisk

  • Most security teams are drowning in alerts. The real problem isn't volume. It's that detections are built to signal, not to decide.

    A detection is a hypothesis: "This activity may matter." It only creates value when it translates machine-scale telemetry into a work unit that a human can act on. It is not a verdict. It is not an alert. Treating alerts as the unit of success leads to noise and inconsistent outcomes, because it optimizes for signaling rather than decisions.

    A mature detection taxonomy optimizes for decision-grade work: cases that arrive with enough context to answer three questions:
    1️⃣ What happened?
    2️⃣ Why does it matter here?
    3️⃣ What do I do next?

    In our recent research by Sean Sosnowski, we discuss the detection engineering lifecycle, which spans four phases: creation, testing, deployment, and tuning. But having a lifecycle isn't enough. Detections fail when they surface technique presence without environmental relevance, flag anomalies without an investigative path, or generate signals without enough evidence to route and respond.

    The next generation of detection platforms will be judged on one thing: whether they reduce time-to-decision or increase manual assembly work. As part of this research, SACR analyzed four platforms shaping detection engineering today: Vega, Panther, Cribl, and Artemis.

    Here is our full report: https://lnkd.in/eJFUNSSD

    #CyberSecurity #SecurityOperations #DetectionEngineering #CISO #SIEM #ThreatDetection #SOC #MITREAttack #SecOps
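The "decision-grade case" idea above can be sketched as a data structure: a raw signal is promoted to a case only when it can answer all three questions. This is an illustrative sketch under our own assumptions, not any vendor's schema; the field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    """A decision-grade unit of work: a detection plus the context
    an analyst needs to act without manual assembly."""
    what_happened: str    # the observed activity (the hypothesis)
    why_it_matters: str   # environmental relevance, not just technique presence
    next_action: str      # an investigative or response path

def to_case(signal: dict) -> Optional[Case]:
    """Promote a raw signal to a case only if it carries enough
    context to answer all three questions; otherwise it stays noise."""
    required = ("what_happened", "why_it_matters", "next_action")
    if not all(signal.get(k) for k in required):
        return None  # technique presence without relevance or a path
    return Case(*(signal[k] for k in required))

# A signal with full context becomes actionable work...
print(to_case({
    "what_happened": "New OAuth grant to an unreviewed third-party app",
    "why_it_matters": "Grant issued by a finance-team service account",
    "next_action": "Revoke the grant and review the account's recent activity",
}))
# ...while a bare anomaly does not.
print(to_case({"what_happened": "Unusual login time"}))  # → None
```

Measuring how often signals survive this promotion (rather than counting alerts) is one concrete way to track the time-to-decision outcome the post describes.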

  • Software Analyst Cyber Research reposted this

    1Password

    Day 3 at #RSAC26 wrapped 🙌 Thank you to everyone who stopped by, joined our sessions, and engaged with our team in conversations on the critical shifts happening in identity security. 🔐

    🎤 Highlights from today's sessions:

    "The moment any secret material is part of the context window, it's game over. You have to assume the credentials could be exposed." - Fotios (Fotis) Chantzis, Agent Security Lead at OpenAI, in conversation with Nancy Wang, CTO at 1Password, on why over-permissioning is catastrophic for agents.

    "Blameless is a great culture because we can learn from incidents." - Travis McPeak, CISO at Cursor, on who is responsible for the risk when AI becomes a developer.

    "Human error is a window into a system problem—and AI is going to spotlight that problem. Whether it's a person or a tool, it has a massive impact on your organization. We have to go in and fix the security issues." - Jason Meller, VP of Product at 1Password

    "Our role as leaders is to ensure that the systems we use are secure by design." - Tal Peretz, Co-Founder and CTO at Runlayer

    "With agents, you have autonomy running wild, making decisions. That brings a new authorization and governance model we've never needed before. Intent and authorization need to happen inline, in the production flow." - Jacob DePriest, CISO at 1Password, unpacks why authorization at runtime is table stakes.

    "The risk is what happens after login—when an agent takes an action or runs a workflow and you don't know what's happening." - Sanjay Ramnath, VP of Product Marketing at 1Password, on why visibility and auditability matter.

    "Get your identity hygiene in place first. If you haven't fixed your human identity challenges, your agentic solutions are going to be a mess. You need to build on a good foundation." - Francis Odum, Cybersecurity Analyst and Founder at Software Analyst Cyber Research, on the identity security foundation every enterprise needs now.

    Over-permissioning, static credentials, lack of visibility — security issues become exponentially riskier when AI agents operate at machine speed with autonomy. The conversations at #RSAC are charting a path forward for modern identity security.

    What was your biggest takeaway? Tell us in the comments 👇

    #IdentitySecurity #AI #1Password #CyberSecurity

  • Most security teams are measuring detections incorrectly. Not because they lack tools. Because they are measuring the wrong thing entirely.

    A detection is not an alert. An alert is just a delivery mechanism. A detection is the underlying hypothesis and the evaluation process that produces a signal worth attention. It asserts, "This activity may matter," with enough structure that a team can validate the hypothesis, understand what it implies, and decide what to do next.

    That distinction matters more than most teams realize. When you treat alerts as the unit of success, you optimize for volume. More rules. More signals. More noise. And you get inconsistent outcomes because the system is built to signal, not to decide.

    Here is what detections actually are:
    ✔️ Detections are logic over observables and behaviors that translates machine-scale activity into actionable work
    ✔️ They are a decision-support mechanism, not a verdict
    ✔️ They only create value when they reliably surface a hypothesis that a human or system can act on

    This definitional clarity is the foundation of our latest report on detection engineering. In this report, we analyzed four vendors actively building in this space: Cribl, Panther, Vega, and Artemis.

    At scale, the teams that get this right measure time-to-decision and outcome consistency, not alert counts. The ones that get it wrong keep tuning volume while the real gaps stay invisible.

    Read the full report here by Sean Sosnowski: https://lnkd.in/eJFUNSSD

    #DetectionEngineering #SOC #CloudSecurity #Cybersecurity

  • Day 3 at #RSAC26: This morning's event, hosted by 1Password, drove a candid, high-energy conversation with a room full of security leaders finally saying what they've been thinking.

    Users, devices, SaaS apps, and AI agents are multiplying fast. But the identity models that many teams still rely on? They were built for a different era.

    Our very own Francis Odum joined Jacob DePriest, CISO at 1Password, for a fireside chat moderated by Sanjay Ramnath, VP of Product Marketing at 1Password. The conversation was honest and direct: the gaps between how access is granted, used, and governed today are real, and they are growing across both human and non-human environments.

    A few things stood out:
    1️⃣ Legacy IAM wasn't designed for AI agents as identity principals
    2️⃣ Fragmented access models create blind spots that attackers are already exploiting
    3️⃣ Trust can no longer be assumed at the perimeter. It has to be earned at every layer

    As AI agents become part of the enterprise identity fabric, the stakes get higher. The next phase of IAM won't just be about managing access. It will be about rethinking what trust means when your "users" include bots, pipelines, and autonomous systems.

    Big thanks to the 1Password team for hosting a conversation that actually moved the needle, and to everyone who joined and added to the discussion.

    #IdentitySecurity #IAM #AI #Cybersecurity #AccessManagement #SecurityLeadership

  • Most Zero Trust strategies are failing. Not because of bad intentions, but because of broken architecture.

    Organizations have spent years layering security tools on top of each other. Network security here. Identity management there. Endpoint detection somewhere else. The result is a siloed stack that was never designed to work as one. That fragmentation is now the root cause of exploitation.

    Here is the pain that security leaders are living with every day:
    1️⃣ No real-time correlation: A network anomaly fires after a user has already authenticated. The identity provider never gets the signal. That gap is exactly what attackers exploit.
    2️⃣ No behavioral baseline: Without a unified view, the subtle drift that signals an advanced threat disappears into noise. Security teams are drowning in false positives while real threats move quietly.
    3️⃣ No intent awareness: Tools score individual events. They do not track the continuous, evolving intent behind a user's or AI agent's actions. Against sophisticated adversaries, that is a critical blind spot.

    And the problem is compounding. AI agents now operate across tools, data, and workflows with no consistent verification. Non-human identities have exploded in volume. Platforms like Microsoft are at the center of these environments, and the identity silos that slow security teams down are the same ones attackers use as leverage.

    Our latest research from Lawrence Pingree introduces the 𝗔𝗰𝗰𝗲𝘀𝘀 𝗙𝗮𝗯𝗿𝗶𝗰: a composable, identity-centric architectural layer that unifies identity, network, and endpoint context into a single living system of trust. At its core is an Access Graph, a unified intelligence engine that enables continuous, real-time risk evaluation across every identity and transaction, human and non-human alike.

    Read the full report here: https://lnkd.in/eT4tmGnr

    #ZeroTrust #Cybersecurity #IdentitySecurity #CISO #NonHumanIdentity #CloudSecurity #AISecOps #RiskManagement #AccessFabric #CybersecurityResearch
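The cross-layer correlation gap described above can be made concrete with a toy example. This is our own minimal sketch, not the Access Graph design from the report: the layer names, weights, and scoring are hypothetical, and they stand in for the richer continuous evaluation the research describes.

```python
# Minimal sketch of cross-layer risk correlation for one identity,
# blending signals that siloed tools would each score in isolation.
# Weights are illustrative assumptions, not values from the report.
LAYER_WEIGHTS = {"identity": 0.5, "network": 0.3, "endpoint": 0.2}

def risk_score(signals: dict) -> float:
    """Combine per-layer risk levels (each 0.0-1.0) into one score,
    so a post-authentication network anomaly still raises overall risk."""
    return round(sum(LAYER_WEIGHTS[layer] * level
                     for layer, level in signals.items()
                     if layer in LAYER_WEIGHTS), 2)

# A network anomaly firing after login, invisible to the IdP alone:
agent = {"identity": 0.2, "network": 0.9, "endpoint": 0.4}
print(risk_score(agent))  # → 0.45 (0.5*0.2 + 0.3*0.9 + 0.2*0.4)
```

Even this toy version shows the point: evaluated per layer, each signal looks tolerable, but the combined score surfaces risk that no single silo would report.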

  • This week's #VendorWatchSeries signals something bigger than a product launch. Wiz just introduced the AI Application Protection Platform (AI-APP), and it is reframing how enterprise security teams need to think about AI risk.

    AI risk is no longer just about finding a vulnerable piece of code or a misconfigured server. It is about understanding emergent behavior. The core finding is that security teams are no longer limited by a lack of data, but by the "human bottleneck": the inability to investigate, validate, and remediate risks at the same speed they are generated. Wiz's solution is a trio of specialized AI agents that act autonomously but are grounded in the Wiz Security Graph.

    Wiz has introduced three core agents, each focused on a different stage of the security lifecycle:
    • 🔴 Red Agent (Offensive): An AI-powered attacker that "thinks" like a human pentester. It doesn't just scan for known vulnerabilities; it analyzes API specifications, reasons about application behavior, and chains multi-step attacks to find logic flaws and authentication bypasses that traditional tools miss.
    • 🔵 Blue Agent (Defensive): Acts as an automated threat investigator. When a threat is detected, it immediately gathers evidence from cloud telemetry and runtime signals to provide a severity verdict, effectively performing the initial triage for a SOC team.
    • 🟢 Green Agent (Resolution): Focuses on "driving the fix." It identifies the root cause of a risk (e.g., a misconfiguration in code) and generates environment-specific remediation steps or pull requests, ensuring the fix is durable and sent to the right owner.

    Siloed security tools fail to protect modern AI architectures because they lack cross-layer context. An AI agent, a connected API tool, and access to a sensitive database might all appear secure when evaluated individually. Strung together, they create a highly exploitable attack path.

    The signal for CISOs is clear: the architecture of AI risk has outpaced the architecture of AI security, and the tools defending it need to catch up.

    Read more about their AI agents here: https://lnkd.in/ej_KANgX

    #Cybersecurity #AISecurity #CISO #CloudSecurity #RiskManagement #CNAPP #ThreatDetection #EnterpriseAI #CyberRisk #VendorWatch

