N.B. While this post talks about MVPs (whose scope and definition we may or may not agree on), it is mostly concerned with experimental features (whether graduated into a product or not). Discussing “viability” is part of the issue, but I have also put out a list of questions, which are more detailed and less controversial. Please don't get hung up on MVP and its definition; instead, try to address the "Concrete Questions".
This is inspired by this post/comment on MSO.
Over the past few years, we’ve seen a pattern: new features and experiments ship that lack core functionality, only to be quietly retracted or left to languish. That cycle undermines trust in every future experiment. I’d like to start a discussion to reach a clearer view of what a Minimum Viable Product (MVP)1 should include, and what guardrails we need to protect existing workflows.
Why this matters
Documentation’s rocky launch
Documentation was introduced back in 2016 as a collaborative “how-to” library. It amassed thousands of examples, but failed in under a year. Key issues: no full-text search, limited moderation tools, and UI gaps that hid good content. Volunteers invested time that ultimately went to waste, and the very idea of crowd-authored tutorials was tainted.
Discussions without the basics
Discussions was built to host opinion-based, architectural, or experience-driven questions. It was also launched as an alternative to Chat, meant to address issues such as sub-optimal search. Yet Discussions came without search, effective tagging, moderation tools, or downvotes (which were removed at a later date). Months later, it still feels like an unfiltered chat room (ironically, with fewer features than Chat), hardly the “forum” it was branded to be.
Comment experiment
The latest example is February 2025’s comment experiment: a feature for asking “follow-up questions” that left even veteran users confused and broke moderation and review tools. The UI hiccups that ensued made me, and many others, doubt whether any comment-system change could ever work.
Each of these experiments shipped without essentials, and the community, at least in part, ended up blaming the idea, while in reality the implementation was the main culprit.2 This makes it harder to revisit promising concepts later.
Concrete questions for the community
Core functionality
- What minimal feature set must be in place before an experiment goes live?
- Is there a general “checklist” we can rely on to decide whether a product is an MVP or not?
Non-destructive rollouts
- Should experiments ever disable existing capabilities?
- If a disruptive change is unavoidable, how should it be communicated, measured, and potentially reversed?
Timing of community consultation
- At what development stage should product teams engage experienced users or moderators?3
- Which venues—Meta posts, dedicated chat rooms, beta programs—are most effective for early feedback? (For instance, some UI changes can be distributed as userscripts to be tested by community members before they are rolled out.)4
Measuring harm vs. value
- Beyond raw usage metrics, how do we detect when a feature is causing more problems than it solves?
- What process should exist for swift rollback or remediation if an experiment degrades the experience?
“Small tweaks to Discussion or to comments are not going to achieve ambitious goals.”
That insight applies not just to high-level strategy but also to how we deliver new features. If we can agree on a clear, enforceable definition of “viable” and a shared rollback plan, future experiments won’t feel like blind swings.
What examples of “just right” MVPs have you seen on Stack Exchange? Which launches clearly missed the mark—and what lessons can we carry forward? How can we establish solid guardrails so that the next big idea lands ready for success?
1. Also see this article about Minimum Viable Products/Features (MVPs/MVFs), specifically: "A minimum viable product (MVP) is often mistaken as the first general release of a product, the initial offering that is good enough to address the early market. But for most products, an MVP should be a much earlier and cruder version that acts as a learning device—a means to test a crucial assumption and make the right product decision".
2. I am not claiming that there are no bad ideas/features; just saying that at least some of the experiments could have been successful if they were conducted properly.
3. I have previously touched on this matter in my answer to What can be cut away, and why?.
4. Credit goes to Kevin B, but I cannot find the message.