
Databricks

San Francisco, CA 1,174,598 followers

About us

Databricks is the Data and AI company. More than 20,000 organizations worldwide — including adidas, AT&T, Bayer, Block, Mastercard, Rivian, Unilever, and over 60% of the Fortune 500 — rely on Databricks to build and scale data and AI apps, analytics, and agents. Headquartered in San Francisco with 30+ offices around the globe, Databricks offers a unified Data Intelligence Platform that includes Agent Bricks, Lakeflow, Lakehouse, Lakebase, and Unity Catalog.

Note to applicants: please apply through our official Careers page at databricks.com/company/careers. All official communication from Databricks will come from email addresses ending in @databricks.com or @goodtime.io (our meeting tool).

Website
https://databricks.com
Industry
Software Development
Company size
5,001-10,000 employees
Headquarters
San Francisco, CA
Type
Privately Held
Specialties
Apache Spark, Apache Spark Training, Cloud Computing, Big Data, Data Science, Delta Lake, Data Lakehouse, MLflow, Machine Learning, Data Engineering, Data Warehousing, Data Streaming, Open Source, Generative AI, Artificial Intelligence, Data Intelligence, Data Management, Data Governance, and AI/ML Ops

Updates

  • The database architecture that made sense in the 1980s doesn't hold up in a world where agents are the primary builders, because agentic development doesn't work like traditional development. AI agents now create roughly 4x more databases than human users on Lakebase. Agents spin up databases for experiments, branch them for testing, and discard them when done; about half have a compute lifetime of less than 10 seconds, with high cost sensitivity. Agents also prefer open source tools like Postgres to proprietary databases: they generate queries, schemas, and integrations far more accurately for systems their training data actually covers. Here's what databases actually need to look like in the agentic era: https://lnkd.in/gigxFBJB

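The create-test-discard lifecycle described in the post above can be sketched in generic terms. This is a minimal illustration only, using Python's built-in SQLite as a stand-in for a managed Postgres instance; Lakebase branching and serverless compute are not modeled, and `run_experiment` is a hypothetical name:

```python
import sqlite3

def run_experiment():
    # Spin up a throwaway database (in-memory, so it vanishes when the
    # connection closes), mirroring the create -> test -> discard
    # lifecycle agents follow. SQLite stands in for Postgres here.
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany("INSERT INTO events (payload) VALUES (?)",
                         [("a",), ("b",), ("c",)])
        (count,) = conn.execute("SELECT COUNT(*) FROM events").fetchone()
        return count
    finally:
        # Discard: closing the connection drops the entire database.
        conn.close()

print(run_experiment())  # 3
```

The short lifetime cited in the post (half under 10 seconds of compute) is exactly this shape: the database exists only for the duration of one experiment.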
  • As data estates grow, teams struggle to find, understand, and trust the right data. The new Discover experience in Unity Catalog brings business context, trust signals, and access into one place. With domains, intelligent curation, and governed access, users can discover data, analytics, and AI assets aligned to how the business works, and move from discovery to action with more confidence. Now in Beta: https://lnkd.in/gnVPY9-S

  • Get hands-on with what it takes to build and ship real-world AI, apps, and agents. Join us at Databricks AI Days to learn how to:
    - Build AI agents that are accurate and grounded in your data
    - Develop intelligent apps with a fully managed Postgres database
    - Unify data, analytics, and AI with built-in governance
    Stops ahead: 📍 Amsterdam, 📍 Chicago, 📍 Washington, D.C., 📍 Paris, 📍 Bangalore, 📍 Sydney, 📍 Frankfurt, and more: https://lnkd.in/gJe9XUbJ

  • Data engineering is getting more complex, but it doesn't have to slow you down. The Big Book of Data Engineering is a practical guide packed with how-tos, code snippets, and real-world examples to help you build and scale pipelines faster and deliver high-quality data for AI, BI, and analytics workloads. Inside:
    - Patterns for scaling ETL pipelines effectively
    - Orchestrating data, analytics, and AI workloads
    - Implementing observability for your data pipelines
    - Using Lakeflow to manage pipelines
    https://lnkd.in/g8kTArb7

  • Lumen Technologies processes terabytes of telecom data daily across two mission-critical systems supporting billing, network optimization, and regulatory compliance. Their legacy on-premises platform was holding them back: fragile pipelines, limited governance, and constant manual maintenance meant engineers spent more time firefighting than building. After migrating 133 TB to Databricks SQL on Azure, that changed:
    - Compute costs down 30–40% with serverless scaling
    - Queries that took hours now run in minutes
    - 7 GB of telecom data processed every 10 minutes
    https://lnkd.in/gXurtnW4

  • Planned maintenance causes more database disruption than actual hardware failures. Most databases get patched far more often than they experience outages, but every patch means a maintenance window, severed connections, and a cold cache that tanks performance for minutes after restart. We’re changing that. This is Part 1 of our series on eliminating the impact of planned maintenance entirely, rolling out automatically over the next few weeks. https://lnkd.in/gy8_fK3c

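A common client-side complement to the server-side work described in the post above is reconnect-with-backoff, so an application rides out the severed connections a maintenance window causes. This is a generic sketch, not a Databricks API; `connect_with_backoff` and `fake_connect` are illustrative names:

```python
import time

def connect_with_backoff(connect, max_attempts=5, base_delay=0.1):
    """Retry a flaky connect() call with exponential backoff.

    `connect` is any zero-argument callable that raises ConnectionError
    while the server is restarting and returns a handle once it is back.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # Back off 0.1s, 0.2s, 0.4s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))

# Simulate a server that is down for the first two attempts.
attempts = {"n": 0}
def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("maintenance window")
    return "connected"

print(connect_with_backoff(fake_connect, base_delay=0.01))  # connected
```

Retry logic like this masks the dropped connection, though it does nothing for the cold-cache slowdown after restart, which is what the series above targets.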
