Traces for Teams is coming soon: a new, privacy-centric view of your team's sessions with coding agents. Traces gives your engineering team:
- A new view of how teams use coding agents, with session-level detail
- The ability to share sessions manually, via API, or through skills from 10+ supported agents
- A new way to hand off and continue sessions between team members
We've also built Traces so you can adopt it progressively: start by sharing one session with your team, then share directly from agents via skills, all the way to sharing sessions on every git commit. Visibility into how your code was generated is the real signal for how your team is using AI. If you're interested in trying it out before it's launched, let us know with a comment below!
AI has removed the ceiling on how fast teams can write code. It hasn't removed the consequences of shipping code that breaks things. On Wednesday, March 25th, we're hosting a live session on what it takes to govern AI-generated code in production without slowing down the teams producing it. Nnenna Ndukwe and Shlomo Dalezman will discuss how to codify implicit engineering standards, enforce them automatically across every repo and team, and build a review layer that understands your codebase well enough to surface issues that matter. Sign up here → https://lnkd.in/eg3vTuuh
€5.5m to fix what AI coding tools keep breaking. Tower.dev, a Berlin startup founded by two ex-Snowflake engineers, just closed a €5.5 million round. The thesis is sharp: AI can generate a data pipeline in minutes, but getting it to run reliably in production is a different problem entirely. Tower calls it the "last mile" of AI-assisted development: testing, debugging, deploying, and operating AI-generated code at scale. Read more: https://lnkd.in/dG5DcCDD
UPDATE: This event has been rescheduled to April 1, from 1:00–2:00 PM ET. AI agents are changing how software gets built — and it's happening fast. On April 1, our software engineers David Thomas and Nick Trombley are breaking down how high-velocity teams are actually using AI agents today. They'll cover:
• What AI agents are and what they can actually do
• Why teams are adopting them now
• How to use them effectively
• What's coming next
Coding with AI Agents
📅 April 1
⏰ 1:00–2:00 PM ET
🎙️ Featuring David Thomas 🍊 and Nick Trombley
Save your spot at the link in the comments.
Don't miss my charismatic coworkers' talk! They're on the cutting edge of what AI agents can offer teams: building faster and solving problems more completely, in less time than we've ever been able to before.
AI-assisted development has rapidly shifted from an experiment to an everyday part of how we build software. Next Thursday, Alessio Salvadorini will dive into "a biased attempt to reduce stochasticity using mono-purpose agents" at the Codemates office, starting at 5. https://lnkd.in/dAz9ZqE5
Every codebase captures what the code does; nobody captures what actually made it that way. Why that particular event is consumed in that particular case, and why some alternative wouldn't work. Why that API design. Why two teams interact the way they do, and why it has to be coupled. That knowledge lives as tribal knowledge in Slack threads and people's heads, and it leaves when they leave. I'm exploring building a memory layer that sits above the codebase, so AI agents can actually understand your org, not just your code. Very early, just thinking through it. If you've felt this pain, drop a comment; I'd love to pick your brain.
Looking to level up your context engineering? Join us live on March 24! Most AI tools struggle with complex systems. You try to compensate by manually feeding repos, docs, and other context, but the output still falls short. That’s because highly interconnected systems are difficult to represent through prompts alone. In this session we’ll show how Tetrix creates a persistent knowledge layer for your codebase. You’ll see how it can successfully reason across large repos and feed the right system knowledge to tools like Cursor and Claude while keeping that context available across sessions and tools. If you rely on AI-assisted development and work with large codebases, this session will show you a more reliable way to run your workflows. 📅 March 24, 10 AM EST 🔗 https://luma.com/gssl6616
Thanks to James Spiro and Forbes Israel for taking the time to chat about how to know whether AI is actually moving the needle in your engineering org. Genuinely enjoyed this one! For teams rolling out AI tools, adoption looks great, but when someone asks whether engineering is actually improving, nobody has a clear answer. Code quality, review times, and delivery speed are where the real answers are, and that's the visibility we're providing with Milestone. The engineers who figure that out first are going to look like a completely different animal compared to the ones who don't. You can find the full conversation here: https://lnkd.in/dpuCF28i And the full blog here: https://lnkd.in/dadvWDKS
The AI honeymoon phase is over. It's time to stop vibe coding and start engineering. Right now, 66% of developers report intense frustration with AI tools giving them solutions that are "almost right." We are spending seconds generating syntax, but hours reverse-engineering hallucinated, contextless logic. Vibe coding simply shifts your bottleneck from typing to debugging. By forcing your AI agents to follow a strict 4-step pipeline and relying on a version-controlled SPEC.md file, you can drop debugging time to under 10%. Swipe through the framework below to see exactly how to eliminate the AI code review crisis and move from spaghetti code to modular architecture.
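To make the version-controlled spec idea concrete: a minimal SPEC.md of the kind the post describes might look like the sketch below. The file name comes from the post, but the feature, section headings, and every field are illustrative assumptions, not the post's actual framework.

```markdown
# SPEC.md (hypothetical example: payment-retry feature)

## Goal
Retry failed card payments up to 3 times with exponential backoff.

## Constraints
- No new external dependencies.
- All retry state lives in the existing `payments` table.
- Changes limited to `billing/retry.py` and its tests.

## Acceptance criteria
- A payment that succeeds on retry is marked `settled`, not `failed`.
- After 3 failed attempts the payment is marked `failed` and an alert is emitted.
- Unit tests cover the backoff schedule (1s, 2s, 4s).

## Out of scope
- Refunds, currency conversion, invoice generation.
```

Because the spec is version-controlled, each step of the agent pipeline can be pointed at the same file, and reviewers can diff spec changes alongside code changes instead of reverse-engineering intent from generated output.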
I just watched Sysdig's new video on vibe coding, which offers a clear look at how AI-generated code is changing risk, culture, and review processes in engineering teams. AI tools are writing code faster than we can review it, creating security and ownership challenges. If you're using or trying out AI coding tools, this 5-minute breakdown is a must-watch. It's practical and shows that speed and security can coexist. Watch the full video on YouTube ➔ https://okt.to/BSzykU
Vibe Coding & AI Coding Assistants: Who Secures AI-Generated Code?