We've signed a Memorandum of Understanding with the Australian Government to collaborate on AI safety research and support the goals of Australia's National AI Plan. We're excited to deepen our engagement with Australian customers, researchers, and policymakers. Read more on our blog: https://lnkd.in/ghehzEeS
Anthropic
Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.
About us
We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.
- Website
- https://www.anthropic.com/
- Industry
- Research Services
- Company size
- 501-1,000 employees
- Type
- Privately Held
Updates
- New from the Anthropic Economic Index: we study how people’s use of Claude changes with experience. Longer-term users are more likely to iterate carefully with Claude and less likely to hand it full autonomy. They attempt higher-value tasks and receive more successful responses. Our report also finds that since November 2025, consumer use has become less concentrated: the top 10 tasks now make up 19% of conversations, down from 24%. We also see a rise in personal queries and continued convergence in adoption rates in the US. Read more here: https://lnkd.in/e_nw8bhy
- New on the Anthropic Engineering Blog: How we use a multi-agent harness to push Claude further in frontend design and long-running autonomous software engineering. Read more: https://lnkd.in/gBi8Q6wt
- Over one week in December, we invited Claude.ai users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people participated—the largest and most multilingual qualitative study of its kind. To do research at this scale, we used Anthropic Interviewer—a version of Claude prompted to conduct a conversational interview. We heard from people across 159 countries in 70 different languages. These interviews capture texture that surveys can’t. They render in detail how people worldwide are already experiencing AI's opportunities and risks. We plan to use Anthropic Interviewer regularly, on different topics, to help inform how AI can be of benefit to everyone. Read our full post here: https://lnkd.in/gX33KrvC And browse quotes from some of the many people we heard from here: https://lnkd.in/gba_v3ee
- Anthropic is expanding to Australia & New Zealand. We’ll be opening an office in Sydney later this year—our fourth in Asia-Pacific after Tokyo, Bengaluru, and Seoul. We’ve begun hiring a local team and are exploring partnerships and investments in line with trends in local Claude use and Australia’s national AI priorities. We're excited to deepen our engagement with customers, researchers, and policymakers across the country. Read more: https://lnkd.in/ggCUQWN5
- A statement from Anthropic CEO Dario Amodei: https://lnkd.in/e_6vm3Gm
- A statement on the comments from Secretary of War Pete Hegseth: https://lnkd.in/e-guCny5
- A statement from Anthropic CEO Dario Amodei on our discussions with the Department of War: https://lnkd.in/e7S682ph
- Anthropic has acquired Vercept to advance Claude’s computer use capabilities. The Vercept team brings deep expertise in how AI systems see and interact with software, which involves some of the most challenging problems in this space. We're excited to welcome them to Anthropic. https://lnkd.in/gEU8GJEm
- We're updating our Responsible Scaling Policy (RSP) to its third version. Since it came into effect in 2023, we've learned a lot about the RSP’s benefits and its shortcomings. This update improves the policy, reinforcing what worked and committing us to even greater transparency. We’re now separating the safety commitments we’ll make unilaterally from our recommendations for the industry. We’re also committing to publish new Frontier Safety Roadmaps with detailed safety goals, and Risk Reports that quantify risk across all our deployed models. Read more: https://lnkd.in/eqd8Vcr2