San Francisco, California, United States
5K followers
500+ connections
Websites
- Personal Website: http://mathias-biilmann.net/
- Company Website: http://www.webpop.com
About
Activity
-
Mathias Biilmann Christensen reposted this: Enterprise software isn't horseshoes and grenades; close doesn’t cut it. Vibe-coded prototypes don’t reference the npm packages engineering works from or use the same branding standards; they use design elements that look like your tokens, buttons, and nav elements, but aren’t. Engineering can’t use the code from vibe coding tools, and that prevents org-wide adoption. And we fixed it. Now you can put your design system into our platform. Upload your npm packages, Storybook, or GitHub repos into Bolt.new. Our agents render your design system within Bolt.new and make it accessible from the prompt box. Every prototype will be production-ready. Happy building.
-
Mathias Biilmann Christensen reposted this: SREs have been dreaming about self-healing systems for 20 years. The Google SRE book promised them. Scripting got us partway there. And now AI is getting there. So why does it feel like a threat? Dana Lawson, CTO at Netlify, has a hypothesis: your career was built around being the one who holds reliability together. That identity is real. The fear that comes with letting go of it is real too. But here's what she doesn't let people off the hook on: LLMs are modeled off human brains. The pattern matching, the intuition, the ability to spot something wrong before you can explain why. That's yours. "Look at coding agents as an extension of yourself. That's how you're going to have job security. But you've got to fight for it." Dana Lawson and Sylvain Kalache, Head of Rootly AI Labs, get into why the pushback against AI in reliability is really about identity, and what it takes to come out ahead. Listen to the full episode: https://lnkd.in/em6kH5vj
-
Mathias Biilmann Christensen reposted this: Every agent benchmark measures how good the agent is. None of them measure how good the service is for agents. That is the gap and I built a framework to fill it last night. AXS (Agent Experience Score) is an open-source scoring framework that measures service-side Agent Experience across seven dimensions, scored 0 to 100: discoverability, schema quality, reliability, recoverability, latency, auth, and determinism. Objective. Reproducible. Protocol-neutral. Apache 2.0. This builds on the Agent Experience (AX) movement that Mathias Biilmann Christensen at Netlify started in January 2025. The community has established what good AX looks like. What was missing is a standardised way to measure it. AXS is that measurement layer. The framework is live on GitHub. First benchmark results (10 to 20 services scored) coming in the next two weeks. Built with Claude and Grok as thinking partners. The full story of how is in the post. Full post and GitHub repo links in the first comment. #AgentExperience #AX #AXS #OpenSource #AI #AIAgents #NPS #APIDesign
-
Mathias Biilmann Christensen reposted this: I rapidly created a script using Claude Code that will audit a GitHub org for the axios npm supply chain attack. I'm making it available here: https://lnkd.in/eViwrwFx GitHub will likely rate limit your API calls if you have a big org. YMMV since it's not been fully tested.
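For context on what such an audit involves: the per-repo step is checking each project's package.json for axios dependencies. A hypothetical sketch of that check (not the linked script, which is behind the shortened URL above):

```python
import json

def axios_versions(package_json_text: str) -> dict:
    """Return any axios version constraints declared in a package.json
    document, keyed by the dependency section they appear in."""
    pkg = json.loads(package_json_text)
    found = {}
    for section in ("dependencies", "devDependencies"):
        version = pkg.get(section, {}).get("axios")
        if version:
            found[section] = version
    return found

print(axios_versions('{"dependencies": {"axios": "^1.7.4"}, "devDependencies": {}}'))
```

An org-wide audit would repeat this over every repository (e.g. via the GitHub REST API's list-organization-repositories and get-repository-content endpoints), inspecting lockfiles as well, and would hit the rate limits the post warns about on large orgs.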
-
Mathias Biilmann Christensen shared this with the comment "Great article from Sanity's Knut Melvær on Agent Experience in practice": I've been thinking a lot about what "agent experience" actually means for teams who make developer tools. Not the theory, the specific design decisions. This is a daily conversation inside of Sanity as we're improving a pretty vast DX surface for agentic coding. So I wrote up a framework: four questions to run your system surfaces through: error messages, CLI output, SDK abstractions, and API responses. tl;dr: skills files, docs, and llms.txt are signage. Your API design is the hallway. With well-designed hallways, you don't really have to think about where you need to go. Signage needs to be identified, read, and understood. A year ago, Mathias Biilmann coined the term Agent eXperience and recently offered a framework for it: Access, Context, Tools, Orchestration. Signage and hallways are dimensions in all of these. And if you are wondering about where to start, I'd start designing the hallways. (Linked post: "Agentic Developer Experience starts with your system, not your prompts" by Knut Melvær)
-
Mathias Biilmann Christensen shared this: Today's agent run is not a new app, but my first go at playing around with Cheng Lou's new Pretext library, a pure JavaScript/TypeScript library for multiline text measurement and layout that's been exploding all over Twitter since Cheng released it. There have been a lot of really fun demos with objects moving all around text blocks in ways that show off the power of the library, while being obviously unsuitable for any text anyone actually wants to read. But Cheng Lou also had a much simpler looking demo that really caught my interest: using Pretext to implement optimal global line-breaking with syllable hyphenation — the Knuth-Plass text justification algorithm. I let a Netlify Agent Runner loose on my blog with a simple prompt, and then followed up by using Pretext to make my headlines bolder and more impactful. Happy with the result for now and will let it loose on my blog for a bit. Link to post in thread.
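For readers unfamiliar with the algorithm named above: unlike greedy line filling, Knuth-Plass picks break points that minimize raggedness over the whole paragraph. A minimal sketch of the core dynamic program (my simplification for illustration, not Pretext's implementation; the real algorithm adds stretchable glue, penalties, and hyphenation points):

```python
def break_lines(words, width):
    """Minimum-raggedness line breaking via dynamic programming:
    penalize the squared slack of every line except the last."""
    n = len(words)
    INF = float("inf")
    cost = [0.0] * (n + 1)   # cost[i]: best total badness for words[i:]
    split = [n] * (n + 1)    # split[i]: index after the line starting at i
    for i in range(n - 1, -1, -1):
        cost[i] = INF
        line_len = -1  # accounts for the space before each word
        for j in range(i + 1, n + 1):
            line_len += len(words[j - 1]) + 1
            if line_len > width:
                break
            badness = 0 if j == n else (width - line_len) ** 2
            if badness + cost[j] < cost[i]:
                cost[i] = badness + cost[j]
                split[i] = j
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:split[i]]))
        i = split[i]
    return lines
```

With width 6, `break_lines("a bb ccc".split(), 6)` keeps "a bb" together rather than leaving "a" stranded on a short first line, which a greedy filler can do.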
-
Mathias Biilmann Christensen reposted this: If you've built something with agent runners on Netlify, we want to hear about it! What did you start with? What surprised you? What changed on the second agent run? The Netlify Creator Program is open for blog posts and videos. Editorial support, author credit, swag, and 3 months of Pro included. If you've got a workflow worth sharing: https://lnkd.in/dyQ_Q_NN If you're just getting started: https://ntl.fyi/412VS7a
-
Mathias Biilmann Christensen reposted this: Last week I attended the evening part of this AI startup hackathon. It was inspiring and motivating to speak with and hear from investors and the CEO (Mathias Biilmann Christensen) of Netlify about the future of engineering, the effects this has had directly on Netlify, and what this means for the future of startups and companies generally. Looking forward to the next TAG event! 🤘 (Original post: TAG AI Hackathon Amsterdam was peak founder event: 100 builders, real startups, pitches & demos, heavyweight company founders, and partners from tier-1 VCs. And … electric scooter sumo racing. Thank you all for the energy! Mathias Biilmann Christensen Jerrod Engelberg Marnix Broer)
-
Mathias Biilmann Christensen liked this: This month's biggest update: you can now start a Netlify project from a prompt. Plus reusable templates, a new Internal Builder role for team collaboration, Astro 6 support, AI usage controls, and Codex integration. Here's everything that shipped in March!
Experience & Education
-
Netlify
***
-
**********
***
-
******
*******
-
********** ***********
******** ** **********
-
Projects
-
EventSource HQ
- Present
EventSource HQ makes sending events from your app to your users' browsers as easy as it gets.
EventSource HQ is based on my open-source project "EventSource Broker", an implementation of the HTML5 "Server Sent Events" API in Haskell.
-
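As background on the "Server Sent Events" API mentioned here: the wire format is a plain-text stream of fields, with a blank line terminating each event. A minimal illustration of that framing (generic SSE serialization, not EventSource HQ or EventSource Broker code):

```python
def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Events message in the
    text/event-stream format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    # Multi-line payloads become repeated "data:" fields
    for chunk in data.split("\n"):
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"  # blank line ends the event

print(sse_event("hello\nworld", event="greeting", event_id="1"))
```

Browsers consume such a stream with the standard `EventSource` JavaScript object, which is what makes a broker like this attractive for pushing app events to users.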
Webpop
- Present
Cloud-based CMS platform for front-end developers and web designers.
Built with Ruby on Rails, JRuby, Haskell, and lots of JavaScript.
99.9%+ uptime since the earliest days of beta, while doing daily deploys. Built-in server-side JavaScript extension engine. Innovative declarative template engine. All infrastructure automated with Chef and running on top of Rackspace's Cloud.
-
domestika.org
-
Domestika.org is the largest Spanish speaking design community. I mainly did systems administration, configuration management, load testing, database profiling, helped with performance optimizations, ongoing maintenance, minor feature additions and bug-fixes.
Languages
- English: Full professional proficiency
- Danish: Native or bilingual proficiency
- Spanish: Professional working proficiency
- German: Elementary proficiency
Explore more posts
-
Postman
554K followers
If your internal APIs live in tribal knowledge, good luck scaling. 🏗️ The Private API Network brings discovery and structure to internal work...before someone rebuilds what already exists. 🧭 Explore how searchable APIs save time and prevent duplication. https://lnkd.in/gBic8bAp
19
1 Comment
-
Augment Code
18K followers
One of our customers from Tilt (formerly Empower) just shared how they wired the Augment CLI directly into their CI/CD pipeline for PR reviews. It’s a great example of what “agentic engineering” looks like in practice. Some lessons 👇 Their pipeline flow: - GitHub PR triggers Azure Pipeline - Pipeline hydrates a Docker image with PR code + metadata - Augment CLI runs review stages with access to MCP servers (Linear, LaunchDarkly, Notion, GitHub) 💡Lesson 1: Treat AI rules like code. They maintain “always apply” rules just for Augment. When something breaks, they triage it in Slack like a bug. Most teams fail by treating AI feedback as vibes. They treated it as engineering. 💡Lesson 2: Separate rules for humans vs. AI. - Humans get full docs in Notion (“why the rule exists”). - AI gets short enforceable rules (“always do X, never do Y”). The split makes both groups more effective. 💡Lesson 3: Hydrate context. With Notion + Linear + LaunchDarkly hooked up, Augment doesn’t just say “this violates a rule.” It explains why the rule exists, or checks tickets/flags directly. Feedback becomes coaching, not nagging. 💡Lesson 4: Modular prompts. They didn’t dump everything into one mega-prompt. Instead, the CLI runs in stages: - Code review - QA notes from Linear - Feature flag validation 💡Lesson 5: Centralize and version rules. - Rules live in the repo, next to the code they govern. - Same rules apply in IDE, CLI, and pipeline. No hidden logic. Developers see exactly what Augment enforces. The result: faster, higher-quality reviews that also teach developers the why. This is what agentic engineering feels like: less typing, more orchestration, agents enforcing consistency while explaining the reasoning. Big thanks to James Garrett for sharing this.
86
7 Comments
-
Marcos Heidemann
symphony.is • 13K followers
While everyone was talking about Opus 4.6, for me the true killer feature of the recent Claude Code updates is the agent teams. It's been something I've been trying to achieve with customization for a while. Custom agents, orchestration scripts, specific CLAUDE.md instructions to coordinate work... with some degree of success. But what Anthropic shipped natively is a WHOLE different level. What makes this stand out is the inter-agent communication. We're not talking about simple fan-out/fan-in where you spawn workers and collect results. These agents talk to each other. Peer-to-peer messaging, dependency-aware task graphs that auto-unblock, agents that self-claim work from a shared task list. The lead can even enter Delegate Mode where it does ZERO implementation, only coordination. The image below is from one of my setups. A team manager orchestrating a librarian agent, a PhD lead, and 5 research sub-tasks with blocking dependencies. The librarian unblocks the research tasks, the PhD lead aggregates everything. All coordinated autonomously. And with this a whole new world of orchestration just unveiled itself. Distributing work across agents is the "easy" part. You break down tasks, assign owners, define dependencies. The HARD part, and what MOST stands out now, is aggregation. How do you take the output of 5 parallel agents, each with their own context window, and synthesize it into something coherent? That's the new skill. Anthropic themselves used 16 parallel agents to build a 100,000 line Rust C compiler that compiles the Linux 6.9 kernel. No human actively coding. ~$20,000 in API costs over ~2,000 sessions. We went from pair-programming with AI to managing AI engineering teams. The skills that transfer are the ones from engineering management: task decomposition, context management, knowing when to intervene vs let the team self-organize. This is a new paradigm, and I think it opens up several possibilities we haven't fully explored yet.
ref.: https://lnkd.in/dVCe344z
68
12 Comments
-
Kathleen DeRusso
Elastic • 990 followers
Chunking and snippet extraction has been a huge focus lately - my latest blog dives into some of the work we've done on this to date, including support for a chunk rescorer in our semantic reranking retriever, as well as some useful ES|QL primitives to get more visibility into chunks and snippets. #elasticsearch #semanticreranking #snippets #chunks #chunking #esql https://lnkd.in/e6UD7iii
13
-
ScyllaDB
28K followers
Tripadvisor's solution was originally built using Cassandra on-prem. But as their scale increased, so did the operational burden. After running a Proof of Concept with #ScyllaDB, the throughput was much better than Cassandra, and the operational burden was eliminated. We look at the technical specs here > https://ow.ly/5YMO50WkjP5
25
-
honeycomb.io
28K followers
🚀 New on the Honeycomb blog: Unlock real-time visibility into your SaaS tools. With Webhookevent Receiver, you can now ingest events from tools like GitHub, Auth0, and Vercel—without writing custom code. This post walks you through the step-by-step process to get started. 👉 Read it here: https://lnkd.in/eihSUCNJ #Webhook #OpenTelemetry #Kubernetes
15
-
Apollo GraphQL
20K followers
Indeed's counterintuitive API strategy: fund two teams to build competing GraphQL platforms, choose the winner based on real performance data. The result? OneGraph now processes 100 billion requests per month across 295 subgraphs, powering Indeed's global marketplace and external partner integrations. Mike Cohen, Technical Fellow at Indeed: "Like all great software companies, we built it twice." The lesson for engineering leaders: sometimes the fastest path to the right solution involves building competing approaches first. Swipe to see how Indeed turned duplication into discovery →
15
-
Ben Royce
AKQA • 9K followers
So which models produce HTML that has the least accessibility errors, and at what cost? Mapped it out here (bottom left is the best). This is helpful for understanding which models are most compliant for those with disabilities and do it efficiently. Qwen and Gemini 2.5 Flash lead the pack. Hat tip to Ben Ogilvie for pointing me to this: aimac.ai
55
2 Comments
-
Marc Brooker
Amazon Web Services (AWS) • 18K followers
In a new blog post, Marc Bowes looks at how updates work in Aurora DSQL, and what it means for scalable schema design. Read it here: https://lnkd.in/gmGEK8Dv A couple take-aways for application builders: - You mostly don't have to worry about hot read keys in DSQL, even in read-write transactions. DSQL can scale out per-key read throughput, and the design requires no co-ordination between transactions reading the same key. (Thank you, MVCC and Time Sync!) - Scalable applications do need to avoid high-contention write keys when doing updates. Writes need to be coordinated and ordered (to implement the I in ACID), which limits per-key throughput. - Most applications can avoid hot write keys with the right schema design. Most applications have already been written this way, because high-contention keys perform poorly on relational databases of all kinds.
240
11 Comments
-
Justin Gordon
ShakaCode • 5K followers
Any best guesses on how long until Claude Code can: 1. Fire up the browser (without needing the MCP configuration) to evaluate changes 2. Fully leverage the Chrome Dev Tools 3. Use the Ruby debugger I’m finding Claude Code is creating fixes that would have been too tedious or painful. The gap right now is testing and debugging.
10
4 Comments
-
Percona
31K followers
On behalf of the entire Percona product team for MongoDB, we're excited to announce a significant enhancement to Percona Server for MongoDB: File Copy-Based Initial Sync (FCBIS). It's designed to accelerate your large-scale database deployment with a more efficient method for initial data synchronization, reducing the time and resources required by the initial sync process. https://hubs.ly/Q03zL3qj0
14
-
Jonathan Hansing
Wallabi • 3K followers
I've been using Claude Code plugins for months. Code review, client project management, building shared context that compounds over time. Less a tool, more an extended cognition ecosystem. A few days ago, Anthropic brought that same plugin architecture to Claude Cowork, and it's worth paying attention to. Here's why. Claude Code let developers shape AI to their specific context, workflows, and accumulated knowledge. Plugins made that shaping persistent. You teach Claude your world once and it shows up informed every time. But it required a terminal. Cowork plugins remove that barrier. Same composability, same feedback loops, now accessible without writing a line of code. 👉 This is the shift that matters: thinking with plugins and feedback loops, not static prompts. A prompt is stateless. When you switch chats, context evaporates. A plugin encodes your processes, your terminology, your decision frameworks. It persists. It compounds. And that compounding is what separates AI-native organizations from everyone else. https://lnkd.in/gJZEtNpR
15
1 Comment