Brooklyn, New York, United States
3K followers
500+ connections
Websites
- Portfolio: http://github.com/pauldix
- Company Website: http://benchmarksolutions.com
About
Articles by Paul
-
Build the machine that builds the machine
Coding agents can produce the first version quickly. The difference between a demo and a deployable change is…
35 · 11 Comments
-
2026: The Great Engineering Divergence (Dec 31, 2025)
2025 will go down as the year code became cheap and programming changed forever. With agentic development tools—Claude…
246 · 22 Comments
Activity
-
Paul Dix shared this: We released InfluxDB 3.9 today! For our Enterprise customers, it has a beta of enhancements to our storage system enabling very wide & sparse tables and MUCH faster performance on single series queries. This brings the flexible schema and fast series lookups you love from v1 and v2 into v3 while maintaining support for infinite cardinality, scalable object store durability and the fully featured SQL query engine provided by DataFusion. Read all about it: https://lnkd.in/epi_2Hwv
“What’s New in InfluxDB 3.9: More Operational Control and a New Performance Preview”
-
Paul Dix reposted this: We are holding an Apache DataFusion meetup on Wed April 22 in Portland: https://luma.com/dsp3ud82 Join myself, Mustafa Akur, Weston Pace and Lu Qiu as we talk about DataFusion, Streaming Query Optimization, Reading without Row Groups, and Distributed Execution. This is right after https://lnkd.in/ef98gt_S so if anyone is still in town afterwards and wants to come hang out, please do. Yes, this is a day before the meetup in Seattle/Bellevue on April 23: https://luma.com/hxshbp0m Thanks as always to InfluxData and Evan Kaplan and Paul Dix for helping sponsor this event and supporting these projects, and my participation in them.
-
Paul Dix reposted this: At GTC this week, Jensen Huang described structured data as the ground truth of AI. What stood out to me in the slide was the inclusion of open engines like DataFusion alongside the broader data ecosystem. That reflects a real shift. Execution engines are becoming a critical layer in how data is processed and prepared for AI. They sit between storage systems and downstream applications, shaping how data is structured, queried, and ultimately used. We made that bet early at InfluxData. InfluxDB 3 runs entirely on DataFusion, and it’s a big part of how we handle high-frequency, real-time workloads at scale. We’re not just using DataFusion, we’re helping build it. InfluxData is a major contributor to the project, and our Staff Engineer Andrew Lamb serves on the PMC. If structured data is the ground truth of AI, then the systems that process and shape that data are becoming just as important as the models themselves.
-
Paul Dix shared this: Today, we’re announcing another step forward for InfluxDB 3 on AWS with support for clusters up to 15 nodes, giving teams a new scale tier for large time series workloads. As telemetry volumes grow, time series systems face higher ingestion rates, more concurrent queries, and rapidly increasing cardinality. Larger clusters make it possible to scale these workloads while keeping query performance fast. We’re also introducing a direct upgrade path from InfluxDB 3 Core (open source) to Enterprise, so teams can scale to these clusters without downtime or data migration. Together, these changes give teams more room to scale their time series workloads on AWS. Full details on our blog: https://lnkd.in/eyD2xpNX
“A New Scale Tier for Time Series on Amazon Timestream for InfluxDB”
-
Paul Dix shared this: DataFusion as a building block for cutting-edge DB research. It's not just for startups and big businesses, it's for academics too!
Shared post: Apache DataFusion doesn't know Honda makes the Accord, LLMs do: can we use them to help with query plans? As Guy Lohman’s famous example shows (“WHERE Make = ‘Honda’ and Model = ‘Accord’”), sometimes world knowledge would drastically help with good estimates! Before telling you the answer, a small piece of lore about this bauplan + Stanford University + University of Wisconsin-Madison + Together AI collaboration is in order. Last year, reading DeepSeek on a flight, I got nerd-sniped by the idea of applying LLMs to a verifiable problem we see every day: OLAP queries. I immediately got the formidable Xiangpeng Hao and Federico Bianchi on the case; Mehmet Hamza E. joined a bit later and pushed hard to get to where we are today. As it turns out, #AI work is mostly systems engineering: exposing a novel API so that an engine can take a plan as input, inventing a concise way to serialize plans and apply patches to them, building a cloud loop for verification using sandboxes and object storage, all while trying to minimize variability in performance estimation. All our code is open source, and it underscores once again the importance of the DataFusion ecosystem for many research use cases. So what is the answer? Well, read the paper (including the fine print)! As for me, today, I’m just proud that we can share our little adventure with the community, and even prouder to have worked with Mehmet, Xiangpeng Hao, Federico Bianchi, Ciro Greco, and James Zou. In particular, Xiangpeng (whose PhD is partially funded by Bauplan) and Mehmet are young, brilliant researchers: the world is indeed their oyster. Finally, thanks to Erik Bernhardsson from Modal: while nothing is perfect for our niche use case, his support has been crucial in making progress (they were also the only sandboxes that actually worked, and I tried a bunch)! Preprint: https://lnkd.in/e48kgXZ7 Code (leave a star if you like the idea!): https://lnkd.in/e-x_xqKa See you, #LLM cowboys! #datafusion #rust #llm #queryplans #olap
-
Paul Dix shared this: Recorded this one at an interesting time. The day before Opus 4.6 and Codex 5.3 releases, right when I was busy cleaning up an AI slopfest by hand. Since then (7 days ago), I’m back to letting the AIs write code, but with a lot more oversight. For now…
Shared post: 🎙️ This week on The Changelog: Paul Dix from InfluxData! Paul Dix joins us to discuss the InfluxDB co-founder’s journey adapting to an agentic world. Paul sent his AI coding agents on various real-world side quests and shares all his findings: what’s going to prod, what’s not, and why he’s (at least for a bit) back to coding by hand. Enjoy the full conversation on 👇 Web: https://changelog.fm/676 Apple: https://changelog.fm/apple Spotify: https://lnkd.in/g9m8Ey84 YouTube: https://lnkd.in/gbctFd5P
-
Paul Dix shared this: Developers’ highest leverage right now is building tooling that lets agents create their software faster. My thoughts:
-
Paul Dix shared this: I wrote some thoughts on software delivery in 2026. It's going to be an exciting year!
-
Paul Dix shared this: Christmas came early this year for all you time series database nerds, InfluxDB 3.8 just shipped! Check the deets here: https://lnkd.in/ey2gp8M4
“What’s New in InfluxDB 3.8: Linux Service Management, Kubernetes Helm Chart, and Smarter Ask AI”
-
Paul Dix liked this: When Max Schireson joined Battery Ventures over a decade ago, the plan was for him to come in as an EIR and start another killer database company. I had already seen him grow MongoDB from 0 to tens of millions while I was on the board there, so everyone expected round two. Fortunately for us, he took a liking to the investing side and it's a great honor to officially name him as partner! Over the last 10+ years, I've watched Max sit with the founder of Databricks and go deep on open source metrics, then pivot to debating astrophysics and nuclear energy with a room full of engineers - and everyone walks away feeling like they were talking to one of their own. He's got an amazingly versatile and broad range, paired with real operator instincts for helping companies navigate financing and growth, which a lot of people really enjoy, especially the founders who work with Max. As AI opens up massive opportunities in deep tech - foundation models, robotics, quantum, neuromorphic computing - Max has found an amazing cohort of founders - Fundamental, Quantum Art, Reflection AI and others building world-class foundational platforms in these spaces. We're excited to have him focused on these deeper tech ideas that we believe will potentially be game changers over the next 5-10 years!
-
Paul Dix liked this: A proud dad moment: our younger daughter Myra Thakker was just accepted to Carnegie Mellon University to study Cognitive Science and AI, and I couldn't be more excited about the road ahead for her. We're living through a remarkable moment in human health. AI is beginning to reshape how we detect and slow Alzheimer's, how we personalize mental health care, and how brain-computer interfaces are unlocking new possibilities for people with neurological conditions. These deeply human problems demand someone who understands both how minds work and how intelligent systems are built. Myra's intellectual curiosity and empathy set her up well to make contributions to this field, and we could not be more proud as parents. Congrats Myra Thakker on starting your next chapter! Sherry Shah, Armaan Thakker, Nathaly Cobo Piza
-
Paul Dix liked this: One of the most consistent pieces of feedback I've received from customers running time-series workloads on Amazon Timestream for InfluxDB 2: "I need better visibility into what's happening inside my database." Today, I'm proud to share that we've delivered on that ask. Advanced Metrics for Amazon Timestream for InfluxDB 2 is now available, and here's what makes it special: ✅ Metrics flow automatically to Amazon CloudWatch — no agents to install, no config to manage ✅ Track resource utilization, query performance, and system health in one place ✅ Works across both Single-AZ and Multi-AZ environments ✅ Build custom dashboards and set threshold-based alerts to stay ahead of issues For DevOps teams managing time-series applications, this is the kind of built-in observability that turns reactive firefighting into proactive optimization. Available now in all supported Regions. Check it out 👉 https://lnkd.in/gZYh-ukG
Experience & Education
-
InfluxDB
********* *** ***
-
******* ******
****** ****** *** ***** * **********
-
*** ******* ******** ******
******* *** *********
-
******** ********** ** *** **** ** *** ****
** ******** *******
Recommendations received
6 people have recommended Paul
Explore more posts
-
Ross Kelly
ITPro • 2K followers
Another development in the seemingly never-ending cacophony of conflicting messages for software engineers over the last three years. Engineers are "more important than ever" at Anthropic despite the fact Claude is doing a lot of the legwork with coding these days. Yet CEO Dario Amodei thinks AI will be doing "most, maybe all, of what software engineers do end-to-end" within six to 12 months. The script will likely flip next month.
7
1 Comment
-
Scott Murtaugh
Growth Process Automation • 4K followers
Avoiding Claude Code because the terminal feels intimidating? Cowork changes that. Same power, zero command line. Anthropic released Cowork today and I've been testing it. Here's what makes it different: Cowork lets Claude interact with your computer through a visual interface. No typing commands. No scary black screens. What you can actually do with it: - Organize messy file folders automatically - Turn a pile of receipts into a clean spreadsheet - Draft reports by pulling data from multiple documents - Rename hundreds of files based on patterns - Create presentations from raw research notes This is Claude Code's capability through a visual interface. The same parallel processing that developers use, but accessible to non-technical users through Claude desktop. Worth exploring if you spend hours on repetitive computer work.
3
-
Craig McCosker
Australian Broadcasting… • 2K followers
This extension of the 2025 success of Cursor and Claude into more general-purpose work was on the prediction lists for 2026. We have not even hit mid-January and Anthropic has just released Cowork as a preview. Cowork can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks. Across 2025, LinkedIn was a great place to watch all the ways people have been adapting Claude Code for their daily work in product management and design. Anthropic's preview makes it more accessible outside the tech industry - media, for example. These "Cursor-style" LLM apps represent a new layer of software that bundles and orchestrates language model capabilities for specific vertical industries. These apps act as a "thick" layer that organises generally capable models into "deployed professionals" by supplying them with the necessary tools and data. They will have four core components: 1. Context Engineering: The process of supplying the LLM with private data and feedback loops specific to a vertical. General-purpose models do not inherently know an organisation's "tribal knowledge" or specific history, so context engineering differentiates a company’s AI from its competitors and removes barriers to AI adoption. It also requires consolidating fragmented information — Slack threads, strategy docs, and customer records — into a format the agent can use to perform specialised tasks. 2. LLM Call Orchestration: These apps orchestrate multiple LLM calls behind the scenes. Rather than relying on a single prompt, the app manages "infinite minds" or teams of agents, queuing tasks to run autonomously over extended time horizons. 3. Application-Specific GUI: A "Cursor/Claude-style" app provides a native visual interface tailored for a human in the loop, moving beyond the traditional "command console" chat interface. 4. The "Autonomy Slider": This component allows the user to control the degree of independence the AI has, effectively shifting the human’s role from a "worker" to a "manager of agents". As AI capability moves toward "agentic autonomy," the slider enables a user to delegate long-horizon tasks while maintaining oversight. This helps move the human out of the immediate workflow loop and into a supervisory director's position, where they provide the final judgment and "escalation paths" for the AI.
2
-
Paul S.
Forkable • 2K followers
Here's an interesting one -- Sourcegraph is spinning out its AI coding agent Amp Code as a standalone business, with Sourcegraph CEO and co-founder Quinn Slack taking the reins at the new company. Main takeaway: AI-native coding tools are big enough and strategically important enough to warrant their own companies, especially when it's already profitable (as Amp says it is). Full story here: https://lnkd.in/eGP3n8bN
8
-
Matthew Faenza
Boom • 605 followers
Running evals on the new Claude 4.5 Haiku this morning—seemed like a good time to share what we’ve built for evals at Boom. The core implementation: an automated evaluation pipeline that runs every supported model against an ever growing set of test cases for our critical path inference operations. Quality, latency, and cost benchmarked in real time against production workloads. But here’s what this actually delivers for our customers: **Guaranteed uptime, not just promised uptime.** When Anthropic or Google has an outage, our system automatically routes to the next best model. Our customers’ workflows don’t stop. They often don’t even notice. We’ve decoupled their reliability from any single provider’s reliability. **Immediate access to model improvements.** Claude 4.5 Haiku dropped last week. By this morning when I finally got a few minutes to add it to our model directory, I had complete performance data across our stack. If it improves customer outcomes, we can integrate it today. Our customers get the benefit of frontier model advances within hours, not months. **Protection from silent degradation.** The harder problem isn’t when services go down—it’s when they stay up but quality degrades. Our eval system monitors this continuously. If we detect accuracy dropping on a live service, we route traffic away before it impacts customer results. The chart shows what I’m looking at right now: Haiku absolutely crushing latency on most of our critical path operations (e.g. 11s vs 26-64s for other models) while maintaining 99%+ accuracy. That’s a customer experience improvement we can ship immediately. The power is in having evals that actually matter. Test cases from realistic production scenarios. Metrics that map to customer outcomes. When you can run any model against your ground truth in minutes, you stop guessing and start knowing. Model selection becomes an empirical question, not an intuitive or philosophical one. 
Your customers care about outcomes, not which model you’re using.
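The routing idea in the post above — benchmark every model, then pick the fastest one that clears an accuracy bar, skipping unavailable providers — can be sketched in a few lines. This is an illustrative toy, not Boom's implementation: the model names, accuracy and latency numbers, and the accuracy threshold are all invented.

```python
# Toy eval-driven model router (all names and numbers are invented).
EVALS = {
    # model: (accuracy, latency_seconds) from a hypothetical benchmark run
    "haiku": (0.99, 11.0),
    "model-b": (0.995, 26.0),
    "model-c": (0.97, 9.0),
}

def route(down: set[str], min_accuracy: float = 0.99) -> str:
    """Pick the lowest-latency model that meets the accuracy bar,
    skipping any provider currently marked as down."""
    candidates = [
        (latency, model)
        for model, (acc, latency) in EVALS.items()
        if model not in down and acc >= min_accuracy
    ]
    if not candidates:
        raise RuntimeError("no available model meets the accuracy bar")
    return min(candidates)[1]  # tuples compare by latency first

print(route(down=set()))      # fastest accurate model
print(route(down={"haiku"}))  # automatic failover to the next best
```

The point of the sketch is that once evals produce a table like `EVALS`, failover stops being a judgment call: routing is just a filter plus a `min`.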
27
2 Comments
-
Tom O'Sullivan
Crimson Tree Software • 292 followers
Some real progress from Anthropic. https://lnkd.in/e64vKRwc "With Programmatic Tool Calling: Instead of each tool result returning to Claude, Claude writes a Python script that orchestrates the entire workflow. The script runs in the Code Execution tool (a sandboxed environment), pausing when it needs results from your tools. When you return tool results via the API, they're processed by the script rather than consumed by the model. The script continues executing, and Claude only sees the final output." One problem with older tool-calling approaches, like MCP, is that the agent is responsible both for orchestration and execution. Unimportant work-in-progress can quickly overwhelm an agent's context window. Eliminating the need for agents to handle execution allows them to focus on the orchestration tasks they do best. Think of your agents as programmers, not programs.
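The pattern quoted above — a script consumes intermediate tool results so the model only sees the final output — can be illustrated with a toy sketch. The tool functions and data here are invented stand-ins, not the actual Anthropic Code Execution API:

```python
# Toy orchestration script of the kind an agent might write under
# Programmatic Tool Calling. The tools below are hypothetical stand-ins.

def fetch_team_members(team: str) -> list[str]:
    # Hypothetical tool call; in the real feature the sandboxed script
    # would pause here until the tool result is returned via the API.
    return {"platform": ["ana", "ben", "cara"]}[team]

def fetch_expenses(member: str) -> float:
    # Hypothetical tool returning one member's quarterly expenses.
    return {"ana": 1200.0, "ben": 850.0, "cara": 2300.0}[member]

def orchestrate(team: str, limit: float) -> list[str]:
    # Per-member results are consumed inside the script and never enter
    # the model's context; only the short final list does.
    return [m for m in fetch_team_members(team) if fetch_expenses(m) > limit]

print(orchestrate("platform", 1000.0))  # only this reaches the model
```

Every `fetch_expenses` result would have consumed context under classic tool calling; here they exist only as local variables in the sandbox.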
9
-
Boyuan Chen
Huawei Canada • 896 followers
A new paper from SJTU and GAIR (daVinci-Env) makes a compelling case that we've been looking at the wrong variable in SWE agent training. The premise is refreshingly honest: the open community doesn't lack clever RL algorithms or agent architectures. What it lacks is a large-scale supply of executable, verifiable training environments. So they built one. 45,320 Docker environments across 12.8k Python repositories, synthesized from 572K GitHub PRs through a multi-agent builder pipeline. The environments come with Dockerfiles, evaluation scripts, and distributed construction infra - all open-sourced. The engineering decisions matter more than the headline numbers: -> Builder decomposed into retrieval / setup / evaluation / analysis roles (not one monolithic agent) -> Dual-run validation protocol: test-only must fail, test-with-fix must pass -> Structured failure routing that becomes free process supervision for training -> Success-frequency curriculum: keep instances where the model succeeds 1-2 out of 4 attempts The most convincing result is the environment-source ablation: holding model and scaffold constant, swapping only the training data source, OpenSWE beats SWE-rebench in all four settings - up to +12.2 points. That's direct evidence that environment quality is a first-order training variable. Where the paper is weaker: the "difficulty-aware curation" story lacks clean ablation, the 32B headline slightly oversells relative to the strongest RL baseline (+1.0, not +4.6), and whether the curated trajectories are fully released remains unclear. The real takeaway for anyone building post-training pipelines: verifiable feedback loops and environment quality may matter more than the choice of optimization algorithm. This paper has quietly assembled every ingredient needed for an RL-for-code environment engine - real repos, executable sandboxes, verifiable outcomes, structured error attribution. The next step is online policy improvement on top of that substrate. 
https://lnkd.in/e2cFM-zu
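The dual-run validation protocol described in the post above is worth making concrete. This is a minimal sketch of the idea only: the paper runs real evaluation scripts in Docker, while here a boolean flag stands in for applying the gold fix.

```python
# Sketch of dual-run validation: keep an environment instance only if
# its test fails WITHOUT the fix and passes WITH it.

def validate_instance(run) -> bool:
    test_only_fails = not run(apply_fix=False)   # the bug must be real
    test_with_fix_passes = run(apply_fix=True)   # the fix must be verifiable
    return test_only_fails and test_with_fix_passes

def good_instance(apply_fix: bool) -> bool:
    # Stand-in for executing the instance's evaluation script in Docker:
    # tests pass exactly when the gold patch is applied.
    return apply_fix

def broken_instance(apply_fix: bool) -> bool:
    # A trivial instance whose tests pass even without the fix; the
    # dual-run check filters it out of the training set.
    return True

print(validate_instance(good_instance))    # kept
print(validate_instance(broken_instance))  # discarded
```

The two runs together are what make the resulting environments "verifiable": each kept instance carries an executable proof that its test discriminates buggy from fixed code.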
5
-
Yuriy Mykhalchuk
Altura Codeworks • 1K followers
We did not build AlturaQuantera app for the DORA deadline. Deadlines are loud. Real work is quiet. When DORA hit, a lot of teams did what they had to do: scramble, document, align just enough to pass the first assessment. External consultants, war rooms, late-night evidence hunts. It worked, at least on paper. What I kept seeing after the rush was different: - controls that were implemented but not really owned - policies copied from templates no one had read twice - audits treated like single events instead of recurring checkpoints - compliance sitting in SharePoint, reality sitting in production That gap between "we did DORA" and "we live with DORA" is where most of the pain lives. Not in the law itself, but in the operational residue of doing it as a project instead of a system. AlturaQuantera came out of that pain. It is not a get compliant fast tool. It is a workspace for the months and years after the consultants leave, when systems change, vendors rotate, incidents happen, and someone has to explain how all of that still fits DORA. It is shaped for reassessments that do not start from zero. For evidence that ages alongside your architecture. For policies that evolve instead of accumulating as versions in a folder called Final. Most teams already survived the deadline. The real question now is quieter, and harder: will your compliance survive your normal operations? #DORA #compliance #SaaS #fintech #riskmanagement
5
-
Bradley Olson
The Wall Street Journal • 2K followers
A new era in the AI race came slowly, then all at once. Software engineers and tech first adopters have been raving for months about the capabilities of Anthropic’s Claude for coding. Some described how it completely revolutionized their work, leaving them in a state that alternated between awe and existential dread. Many also wondered: How long before it goes beyond software engineering? This week offered a resounding answer to that question after Anthropic released a handful of what it called “knowledge work plugins” on Github, a coding site. Far from end-to-end software solutions, the tools offered a whiff of possibility for what might lie in store across a whole range of job categories, including legal, sales, marketing, product management and more. The mere prospect of disruption for software companies tanked the stock market, leading to one of the biggest selloffs in years. Then, Anthropic released Super Bowl ads that took aim at rival OpenAI’s decisions to introduce advertising into ChatGPT (ads that OpenAI Chief Executive Sam Altman said were misleading and disappointing). Finally, Anthropic put out the latest model of Claude, Opus 4.6, which included coding improvements and other functions that appear to take further aim at knowledge work. The market rebounded Friday, but the episode showed how Anthropic has fought its way to the forefront of the AI race. Read the full story here on Anthropic’s wild week: https://lnkd.in/gXCiEjwT
35
5 Comments
-
Alex Laats
www.PlanofRecord.org • 7K followers
In today’s installment of the Plan of Record Substack series, I break down the most common symptoms of prioritization failure in SaaS R&D orgs — from overcommitment and hidden work to heroic culture and black-box delivery — and begin to unpack the root causes behind the dysfunction. Here’s the link: https://lnkd.in/e6fdcNNj #SaaS #ProductManagement #EngineeringLeadership #R&D #Prioritization #Execution #CPO #CTO #CPTO
23
-
Austin Senseman
Caravan - AI Training • 21K followers
I'm not a hype person, I'm actually pretty cynical about software. (Who needs another app, right?) That being said, Claude Code is a big deal. A big part of what I do professionally is communicate technical topics in an accessible, practical way. I've found myself struggling to communicate why Claude Code is having such a large impact for people that try it. We talked about this yesterday in our training office hours with the folks from our December class. Someone shared this quote from an article: "But today, at least, Claude Code is the most important piece of AI technology on the market because it delivers on the core promise of AI: Dramatic acceleration of human potential and a contemporaneous democratization of opportunity." (https://lnkd.in/d8ykjG7k) And then today Ethan Mollick shared his thoughts. Ethan has consistently been one of my favorite voices in AI. He's been following things closely from the day that ChatGPT was launched. I actually hate getting on the internet over and over again and saying try Claude Code, but I'm going to keep doing it because it's the most important thing you can do today to understand what's possible. Let me say it a different way, if you haven't tried Claude Code, you have NO IDEA what AI is capable of. Take 30 minutes and try it. https://lnkd.in/dRzgtTk3
20
3 Comments
-
Rafael Jesus Hernández Vasquez
Boom Entertainment • 2K followers
This week, Anthropic's Claude 4 seems to be on a roller coaster again: after powering Amazon's new Kiro IDE (which was briefly available for download before an avalanche of demand slammed it shut), its coding performance plummeted noticeably. Tool usage has dropped and quality has degraded. As usual at Anthropic, no one knows why; we just get silence and a fresh dose of usage limits. Meanwhile, Microsoft's Copilot is getting a makeover, literally. The new "appearance" feature could soon allow it to simulate traits like age or personality. Whether it's charming or dystopian depends on your tolerance for digital companions with synthetic charm. But one thing is clear: AI is no longer just smart, it's present. Speaking of AGI, Meta appointed Shengjia Zhao to lead its new Superintelligence Lab. If that name doesn't ring a bell, it soon will: Meta is delving into foundation models with AGI ambitions, indicating that the "big three" (Meta, OpenAI, Google) are gearing up for the final showdown. But it's not all polished demos and ambitious projects. A troubling moment at Sketch.dev exposed the dark side of AI programming assistants: a subtle AI-written database optimization caused a crash in production under load. Reminder: just because your AI can write code doesn't mean it should. Test, test, and test again. Mistral's Codestral also launched, quietly stealing the show. With 22 billion parameters and support for over 80 languages, it's open source, meaning it has no barriers. Elsewhere, Google's NotebookLM received a major update powered by Gemini. It now summarizes YouTube videos and drafts content like your personal research companion. Think of it as a fusion of CliffsNotes and ChatGPT: optimized, hyper-efficient, and incredibly good at synthesizing ideas. In the worst-case scenario for cybersecurity, researchers have revealed AI-powered malware that thinks for itself. It interprets its environment, adapts, and executes evasive strategies with chilling precision. It's not HAL yet, but it's time we reconsider the true meaning of "autonomous threat." Finally, some news that caught my attention: a new concept unlike the transformers and tokens we know today, the Hierarchical Reasoning Model (HRM), promises to make the large models we know more efficient. In its first milestone, a small model of just 27M parameters was able to beat o3-mini on pre-trained tasks. For more information follow me here: https://lnkd.in/gNVYeZST
4
-
John B.
Writers' Bloc EU • 3K followers
You're putting out articles, but AI systems are just scraping them into data chunks. They treat your work like a library subset, where the metadata often matters more than the actual writing. No bueno. This disconnect is damaging how content gets licensed and valued. Last time I wrote for Creative Licensing International (CLI), my piece "The Publishing Industry’s Napster Moment" ended up as their best performer so far. Quite proud of that, tbh. As a result, they asked me to do another one. In this latest article for CLI's Content Licensing Brief, "The Commodity Paradox: Why AI Eats Tokens, Not Articles," I dig into the key challenge for publishers and AI labs: why AI platforms license fragments ("chunks") instead of whole articles, and how that changes the leverage for publishers. Think of cases like The New York Times' lawsuit against OpenAI leading to settlements, or the recent Anthropic agreement to pay out $1.5 billion to authors in September 2025, yet publishers still scramble for fair leverage. Even relatively large deals, like Amazon paying the NYT around $20-25 million a year, often feel like scraps compared to the tech giants' gains. If content licensing, AI deals, or rights management is your world (and you're over the vague AI hype), I wrote this thinking of folks like you; practical breakdown, no fluff. Read here: https://lnkd.in/eA3wExuG P.S. If this is your day job, subscribe to CLI's Brief; they convert your IP into a real asset. Things are changing fast, don't get caught out.
26
1 Comment
-
Benjamin Nickolls
Open Source Collective • 913 followers
Earlier this month Open Source Collective repurposed ~$27k of stranded project funds to support the foundation on which those projects were built. We used https://funds.ecosyste.ms/ to distribute money to the most critical components within several ecosystems, sending money to hundreds of maintainers. Details at https://lnkd.in/eyJSDea2
36
1 Comment
-
JeongIk Lee
3SECONDZ - 쓰리세컨즈 • 393 followers
I turned 3 projects' worth of AI agent lessons into a single npx command.

My last post about building AI development teams got a lot of attention. The number one question: "How do I actually set this up?" The honest answer was painful — it took half a day just to configure Project C. Eight agent definitions, skill files, file ownership matrices, settings... all by hand. And if Claude Code's spec changed? Your carefully crafted prompts were already outdated.

So I built create-agent-system.

npx create-agent-system

One command. Pick a preset. Get a working agent team.

What it does:
- 3 presets (solo-dev / small-team / full-team) — opinionated defaults, not blank canvases
- 8 agent types with role isolation and file ownership
- 8 skill packages (scoring, TDD, visual QA, code review, ADR writing...)
- Intersection skill computation — agents only get skills relevant to their preset
- /sync-spec skill that validates your setup against the latest official docs

The key design decision: scaffold once, stay up to date forever. Most scaffolders generate boilerplate and walk away. create-agent-system bundles a sync-spec skill that checks your configuration against the latest Claude Code documentation via Context7 MCP. Your agents evolve with the platform.

The technical highlight I'm most proud of: intersection skill computation. Each agent has default skills. Each preset activates certain skills. The agent gets the intersection — not the union. Why? Because irrelevant skills pollute the agent's context. I learned this the hard way when a Backend agent started demanding screenshots for visual QA checks.

5 things I learned building this:
1. Good defaults are the best documentation
2. Presets are opinions — opinionless tools are unusable
3. Scaffolding without sync is half the job
4. More agents = more important to filter skills precisely
5. There's a right level of abstraction — 100-line Handlebars templates are not it

It's open source: https://lnkd.in/gmHwBYfA

Full writeup with design philosophy, preset comparison, and code examples: https://lnkd.in/g62cPXCX

#ClaudeCode #AI #OpenSource #DeveloperExperience #AIEngineering
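The intersection idea above can be sketched in a few lines of TypeScript. The agent, skill, and preset names below are illustrative placeholders, not create-agent-system's actual identifiers:

```typescript
type Skill = string;

interface Agent {
  name: string;
  defaultSkills: Skill[];
}

// An agent receives only the skills that appear in BOTH its default
// skill set and the preset's activated skill set (intersection, not union).
function skillsFor(agent: Agent, presetSkills: Skill[]): Skill[] {
  const active = new Set(presetSkills);
  return agent.defaultSkills.filter((skill) => active.has(skill));
}

// Hypothetical backend agent whose defaults include visual QA:
const backend: Agent = {
  name: "backend",
  defaultSkills: ["tdd", "code-review", "visual-qa"],
};

// A solo-dev-style preset that never activates visual QA:
const soloDevSkills: Skill[] = ["tdd", "code-review", "adr-writing"];

// The backend agent keeps tdd and code-review but loses visual-qa,
// so it never demands screenshots it cannot use.
const resolved = skillsFor(backend, soloDevSkills);
```

Taking the intersection rather than the union is what keeps irrelevant skills out of each agent's context window.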
4
-
Zachary Alexander
Enduring Advantage • 2K followers
This talk surfaces a number of important topics. One of them: do you have to fight the AI revolution with the whole company? To channel Clayton Christensen, why not pick a small, underserved sub-market and build a wholly-owned AI-first spin-off to serve it? Then use what you've learned to improve the entire business. The urgency is that foundation models double their capabilities every 7 months. Chances are, the company will have to shrink to survive. --Zachary
-
Ricardo Gomes
Polytechnic of Leiria • 3K followers
ccusage gave me my first real window into Claude Code, mostly the cost angle: how many tokens a session burns, how that scales across a week of work. Useful. But the more I looked at it, the more I realized cost is just one dimension. I still have almost no visibility into what Claude Code actually does during a session: which tools it calls, how many turns a task takes, where it gets stuck before finding a solution.

That's the gap Langfuse is built to close. Langfuse is an open-source LLM engineering platform that gives you traces, metrics, prompt management, and evaluation tooling for AI applications. It was acquired by ClickHouse earlier this year, which tells you something about where the LLM observability space is heading. The self-hosted option is what caught my attention: all the observability data stays local, which matters when you're tracing sessions that include your own code and prompts.

It has a native Claude Code integration via the hooks system, and I want to see what that actually surfaces in practice. Does seeing tool-usage patterns change how I prompt? Are there session structures that correlate with better outcomes? Can I hook this up alongside other agentic tools to compare how they behave under similar tasks?

The integration is still relatively new, and the community is working out some rough edges around OpenTelemetry compatibility. That's part of what makes it interesting to explore. If you've already set this up, I'd be curious what you've learned from it.

Link: https://langfuse.com

74/365 #AI #Observability
2
-
Marshall Van Beurden
myTomorrows • 9K followers
2026 - *Clarity* is the scarce resource.

Over the holidays, I've been thinking deeply about the real constraints of current LLM-assisted development. Watching Matthew Berman's latest video https://lnkd.in/eC7McJAU, among other research, it is clear that the scaffolding is starting to catch up to the models: agents, workflows, CLI/IDE integration, AI code reviews, test generation, and longer-running automation that turns "helpful tooling" into real shipping capacity.

I think the natural outcome is that implementation gets cheaper and faster, but *clarity* becomes the scarce resource. The real bottleneck in 2026 will not be writing code but defining the constraints that keep outputs correct, safe, and aligned with intent: specs, invariants, security/privacy boundaries, and what "good" looks like.
34
1 Comment