GrowthBook

Technology, Information and Internet

Palo Alto, California

A modern feature flagging and experimentation platform that helps companies make smart, data-driven decisions.

About us

GrowthBook is an open-source feature flagging and A/B testing platform that helps companies release code and measure its impact with their own data. It gives any company the power of a customized release and A/B testing solution like those used by Netflix, Pinterest, and Uber: an in-house flagging and experimentation platform you don't have to build.

Integration: GrowthBook is warehouse-native, tying into your existing data infrastructure. We support most SQL-based data sources, plus Google Analytics and Mixpanel. You can A/B test anywhere you can get data: web, mobile apps, ML, email, and more.

Effectiveness: Tools built on best practices. Our SDKs fit modern development processes and are fast and easy to use. You can create a common metric store with SQL or YAML. Our statistics engine is robust and understandable by non-technical folks, and data teams get full transparency into results: you can even export them as SQL or a Jupyter notebook.

Documentation: A searchable, shareable library of past tests captures the institutional knowledge gained from experimentation. No more lost conversations or digging through Google Docs.

Data & Security: User data never leaves your data infrastructure, and you can even host GrowthBook yourself. Compliant with PCI Data Security Standards.

GrowthBook is backed by Y Combinator and other top VCs.
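GrowthBook's actual SDK APIs are documented on growthbook.io; purely to illustrate the general mechanism experimentation SDKs use to assign users to variants consistently without storing server-side state, here is a toy hash-based bucketing sketch (the function name and the even split are hypothetical, not GrowthBook's implementation):

```python
import hashlib

def assign_variant(user_id: str, experiment_key: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment_key, user_id) means the same user always sees
    the same variant, with no assignment table to store server-side.
    """
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Same user and experiment always yield the same bucket
assert assign_variant("user-42", "new-checkout") == assign_variant("user-42", "new-checkout")
```

Because the hash is keyed on the experiment, the same user can land in different buckets for different experiments, which keeps tests independent of each other.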

Website
https://www.growthbook.io/
Industry
Technology, Information and Internet
Company size
11-50 employees
Headquarters
Palo Alto, California
Type
Privately Held
Founded
2020
Specialties
A/B testing, open source, and feature flagging


Updates

  • Building a culture of experimentation means building a culture of humility: a team that learns together and shares wins together. Here's one small example from @grahammcnicoll of how GrowthBook helps. GrowthBook auto-generates a slide deck from your experiments, so there's no friction between finishing a test and bringing it to the room. Before you reveal the winner, everyone votes on whether the change helped or hurt. Giving everyone a chance to guess shows people how little they can predict without experimentation. Skeptics start paying attention. People who had nothing to do with the test suddenly care about the outcome. Wins get celebrated instead of filed away. #experimentation #productculture #abtesting

  • If your team is shipping AI and algorithmic changes without an A/B test, you're flying blind. Makram Mansour, former experimentation leader at LinkedIn and Intuit and now Head of Marketplace at ID.me, learned this firsthand when an algorithm tweak cratered session counts across LinkedIn with no warning. The same rigor you apply to UI changes applies to AI. No exceptions. Full episode linked below.

  • Unit tests have one job: tell you if something is broken. AI apps just made that job impossible. When your application is non-deterministic, there's no pass or fail. There's only better or worse. And the only way to know which one you're looking at is to test with real users and measure what actually happens. @marcocasalaina, VP of Products at Microsoft, puts it plainly: major in evals, set acceptable error rates by use case, and A/B test your models and strategies. Because what "good" looked like a few months ago is already outdated. Experimentation isn't a nice-to-have for AI apps. It's the only measurement tool that still works. Full episode in the comments. #experimentation #abtesting #AI

  • Khan Academy gave their AI tutor a calculator to improve math accuracy. It worked, but it made responses painfully slow for students. So they ran five sequential A/B tests:
    ✅ Removed the calculator (math errors doubled)
    ✅ Switched to GPT-5 (accuracy still suffered)
    ✅ Tightened the agent's prompts (latency dropped 3 seconds)
    ✅ Upgraded the agent's model (another 300ms off)
    ✅ Time-boxed execution (more gains, accuracy stable)
    Without experiments, they might have shipped the first iteration and unknowingly made tutoring worse. That's the whole case for A/B testing AI features in one example. Kelli Hill, PhD, Senior Director of Data Insights at Khan Academy, shared this and more at Experimentation Island. She's joining us for a live webinar on April 16 to go deeper. Register for the April 16 webinar with Kelli: https://lnkd.in/giD2HuCN Blog recap: https://lnkd.in/gcxx2Tdt

  • The bottleneck used to be shipping. Now it's knowing what to keep. AI is putting more code out the door faster than most teams can evaluate it. More changes in production means more variables, more surface area, and less certainty about what's moving the needle in either direction. Experimentation is what keeps you from flying blind. Charles Williams, Senior Vice President and Software Engineering Director at Truist, joined The Experimentation Edge to talk about building with AI in one of the most regulated industries in the world. Full link in the comments. #experimentation #softwaredevelopment #AI

  • We are thrilled to have Kelli Hill, PhD, Senior Director of Data Insights at Khan Academy, on our next webinar. Kelli will share Khan Academy's experience growing and strengthening their A/B testing program. I'm particularly excited to hear her talk about the testing they did on top of pre-release versions of GPT-4 while building Khanmigo, their AI-powered tutor, where they learned a lot of best practices for testing AI models and prompts. Be sure to register!

    Most experimentation programs start the same way. A few A/B tests. Some promising results. Then the hard part begins: scaling it across teams. Adding AI features takes scaling to a new level: rapid iterations, experiment velocity, and responding to non-deterministic outcomes. In our next GrowthBook webinar, Kelli Hill, PhD shares how Khan Academy built an experimentation culture across teams. She'll also discuss how they're now experimenting with Khanmigo, their generative AI tutor. Learn how her team uses GrowthBook experimentation, feature flags, and product analytics to measure:
    ➡️ Learning quality
    ➡️ Responsible AI behavior
    ➡️ Prompt and model performance
    ➡️ Real student outcomes
    If you're a data scientist or product engineer building AI systems, this will be a fascinating look inside one of the most thoughtful experimentation programs in education.


  • This is a great way for any e-commerce experimenter to learn and work with industry leaders while helping to grow your business. Work on features that have won for other businesses and could grow your revenue by as much as 7%. Interested? Comment "Experiment" below and DM Ashley Stirrup for the link.

    Calling e-commerce experimentation companies! Get free expert guidance from Ron Kohavi, Lukas Vermeer 🃏, and Jakub Linowski while shipping winning features that could grow your revenue by up to 7%. They'll help you design, execute, and analyze proven A/B test patterns from the Trustworthy A/B Patterns project. In exchange, they publish anonymized results for the broader experimentation community. You need 1M+ monthly active users to qualify. GrowthBook is coordinating. Reach out to ashley@growthbook.io or get a link to learn more in the comments.


Funding

GrowthBook: 2 total rounds
Last round: Series A, US$22.6M
See more info on Crunchbase