The CEO [of SE] recently posted a blog post titled CEO Update: Paving the road forward with AI and community at the center.
Stack Exchange staff haven't posted a request-for-comments on it, so... I did here.
What are your thoughts on that blog post?
We have the domain focus and community trust to responsibly deploy generative AI
Citation needed
I've probably put hundreds of hours of unpaid, and admittedly often unsolicited, work into this place.
We continue to evolve as an organization, focusing on a path to profitability in addition to navigating a dynamic external market
Whatever that means. The current CEO has been on Stack Overflow since 2019. I've been here over 13 years. In that time we've heard the same thing told in many ways: that the company needs to cut back for the good of everyone. Repeatedly. It does not engender trust.
To continue growing, every company needs to do more with the resources it has, and every employee (not just those in Sales) must understand that their actions impact revenue.
Revenue isn't the only real way to see the value of the company. Corporate social responsibility is just as important - we had to lobby for things like hiring into key community roles, and even annual donations to charities. Stack Overflow is what it is because the community put hundreds of hours into answering questions, curation and other things. While we do not make you money directly (well, outside advertising), the entire value of the Stack Overflow brand, and the willingness of hundreds of subject-matter experts - maybe thousands - to recommend and work with your product, depends on goodwill.
Quite often, cutting back can even result in revenue and value being lost. Where do you see Stack Exchange in 5, 10 or 20 years?
We have the domain focus and community trust
Stack Exchange hasn't really had domain focus on Q&A in years. It slipped into being a heavily marketing-driven company, rather than the lean, innovative company that had massive amounts of trust. We had a partnership, and now we have "We have changed the terms of our agreement, pray we do not change it further."
We asked for advice, and were told you trusted us to make our own decisions. We did, working with our individual communities to find solutions we felt worked best. You then decided we were doing it all wrong.
There are many instances like this where the community has expressed its views on something that either get ignored or are dismissed.
I guess the question is how badly do you need to poison the goodwill of the community, while claiming that we trust you, before things become irreparable?
For the community to grow we need the tools and support to build solid cores and feel that we're welcome in the place we built. And every employee, whether they're in marketing or working directly with the community - and especially the CEO - needs to realise their actions impact the community. One that's been often hurt, dismissed and pushed aside.
Sustainability needs long term planning, not chasing the next shiny thing. I've seen communities I've been a long time part of - sites like Server Fault and Information Security - lose a lot of regular users. My own community spaces - like Super User and Pets - are a lot quieter than they used to be. The current goals seem to focus on the next quarter while I've literally been trying to get things moving, or 'fixed' in certain respects, for years.
I used to argue that Careers was an unsustainable venture because, 'better' product or not, it was in a hugely competitive space. Everyone is doing generative AI, and... you're competing with Google and Meta - both massively cash-rich companies with significantly better resources.
We've shed 10% of staff and their associated knowledge, and I've heard rumors some engineering teams got hit somewhat worse than that - which seems bizarre. I could be wrong though. And unlike Meta or Twitter, SE didn't make any hugely bad bets, did it?
Stack Overflow, and Stack Overflow for Teams, in particular, is well-suited to the industry-wide shift from “growth-at-all-costs” to profitable, sustainable expansion I mention above.
I'd ask instead - does SE have any intention to ensure the community and network is healthy in the next decade or so, and help preserve the community you keep bringing up as its strength?
Looking forward to growth and the impact of community on AI
I'm not impressed with the impact of AI on the community. We're hurting, and we're literally in the biggest crisis in trust since Monica.
The community is often our biggest champion in the enterprise; its members want to use a tool they know and trust to manage their proprietary information and collaborate with peers who are likely familiar with Stack Overflow as well. Community is our competitive advantage and a key reason we remain insulated from the worst of the business cycle’s ups-and-downs.
Do you really hear us? Folks are angry. And we really don't appreciate being held up as a strength in one breath while being treated like we don't matter. The company has not been a great steward of the communities for many years - and has drifted away from actually understanding what the community wants or even needs.
I was asked about what I did as a moderator at a job interview a little while back. I froze. Honestly - I couldn't recommend Stack Overflow for Teams or any other product because of how the company has treated the community over the years. I love the platform, and the communities I work with. I don't love the years of getting treated terribly while being told we matter.
The people we work with are wonderful. The community team clearly cares, even if we disagree with some of the recent rules changes, and they're busy (which makes downsizing their team, rather than getting them the resources to do their jobs, infuriating). We have a few developers who are awesome.
The company as a whole, though, hasn't completely earned back the trust it has lost over the years. We can't be your champions unless you're willing to be ours - and this is something that is in your interests, and your responsibility, to do.
Now that we've seen what Stack Overflow Inc's idea of GenAI is...
I'm underwhelmed. We've lost 10% of staff, and put 10% of what is left into what turned out to be, at least initially, a deeply flawed and barely hidden wrapper around ChatGPT or a similar LLM platform. At the current point, it's unfit for purpose.
In doing this, the company has succeeded in antagonising a significant part of the active moderation community to the point that they are on strike. I've heard many moderators say they're considering quitting because of burnout.
We might have very different definitions of 'putting the community in the center', but as an active community member - someone who's welcome, trusted and actually there - I can tell you the community is hurting.
I realise there's a lot of 'sunk cost', but this is a very good time - maybe the best time, other than before you started - to re-evaluate the current course of action.
I also posted this as a comment on the blog, pending moderation:
Hi Prashanth, you have not mentioned the recent policy change also reported by your VP of Community and broadly criticized by the Stack Exchange/Stack Overflow community: What is the network policy regarding AI Generated content?
This policy change to allow generative AI answers on SE/SO and forbid community moderators from removing this content means that SE/SO will no longer be a useful source of data to train AI. Rather than a place for humans to ask questions to be answered by other humans, SE/SO will now be a place for people to re-post GenAI responses. This adds no value to SE/SO, as askers can produce GenAI answers directly, without SE/SO as a middleman.
Why is your company abandoning the thing that makes SE/SO a higher-quality source for Q&A than these hallucinating GenAI products?
I'll note that this is the same sort of thing I was concerned about in my answer to the previous blog by the CEO: New blog post from our CEO Prashanth: Community is the future of AI
We continue to evolve as an organization, focusing on a path to profitability in addition to navigating a dynamic external market. Part of this evolution led us to make the difficult decision to reduce our headcount by 10% last month. Through these changes, I remain grateful for our community, which is the basis of everything we do, and for the many Stackers who demonstrate great resilience and live our mission and values each day.
You do indeed seem to be making short-sighted leaps to the nearest money bag. You are clearly not grateful for your community, as you continue to dump stuff on us that we very much do not want.
To continue growing, every company needs to do more with the resources it has, and every employee (not just those in Sales) must understand that their actions impact revenue.
In all seriousness, this seems to be a complaint at the workforce that you just fired.
This is especially important for developers. Often, a new feature or product is the difference between closing a new customer/growing an existing one and seeing deals slip to the next quarter or fall out of the pipeline entirely. In fact, research from McKinsey shows that companies who innovate through crises not only outperform the market by 10% during the crisis but also realize a 30% average long-term gain.
Stack Overflow is not just another company that sells products to people. This is not how you run a community-driven platform. Management like this is the downfall of a platform like SO.
When I think about Stack Overflow’s future, what makes me most excited is how we are innovating, and that’s largely based around the work that we’re doing to incorporate GenAI into our products.
You are not innovating, and you have not been doing so, for a while. You take a piece of garbage (AI), relabel it, and push it onto us.
those companies who embrace opportunities to leverage generative AI and automation in their everyday work flows in intentional ways that assist the productivity of their workers will be the most successful in this next phase of the modern workplace.
No, not really. It goes to show that you lack self-control amidst the hype of a useless new technology. I love the future of AI; we're not there now, and your usage of it is... well, not successful.
Our vision for community and AI coming together means the rise of GenAI is a big opportunity for Stack.
What is your vision, again? To replace us with AI? Use us as free labour for something that we get nothing in return for? But yes, you're correct: this is a big opportunity for Stack Overflow to get pulled to the bottom.
Approximately 10% of our company is working on features and applications leveraging GenAI that have the potential to increase engagement within our public community and add value to customers of our SaaS product, Stack Overflow for Teams.
That's 10% of your company's workforce wasting their time. What it does have the potential to do is decrease engagement within the public community. Your recent new policy on allowing AI content is making curators furious.
We believe that the developer community can play a crucial role in how AI accelerates, ultimately helping with the quality coming out of GenAI offerings—and in that, further improving the modern workplace as we know it.
Do not abuse us; we're here for the community, the quality, and the repository of knowledge; we're not here to accelerate your venture into selling AI.
Stack Overflow for Teams is uniquely positioned for this moment. But beyond this clear value prop, what really sets us apart is the strength of the community. The community is often our biggest champion in the enterprise; its members want to use a tool they know and trust to manage their proprietary information and collaborate with peers who are likely familiar with Stack Overflow as well. Community is our competitive advantage and a key reason we remain insulated from the worst of the business cycle’s ups-and-downs.
Then stop insulting us, lying to us, sabotaging us, and generally treating us like shit.
We have the domain focus and community trust to responsibly deploy generative AI to improve our existing suite of products and lead to new solutions and revenue streams.
This is a lie. You do not have the trust of the community.
Second, question reviewers are able to better understand the content of the question, making it easier to suggest edits or improve the post.
Humans have more sophisticated intelligence than your machine learning tools. We are better at understanding a question's content through a badly written title than an AI is at forming a title from the content of the question.
Finally, end users of Stack Overflow can more easily understand if the question is relevant to their needs.
What is an "end user"?
This tool is one of many that we are launching in coming weeks.
Madness. Sadness. Horrific.
Our community has given us feedback through the evolution of this tool, and their feedback is critical to how it scales
Then listen to our feedback. Stop closing your ears and eyes!
As the AI landscape continues to evolve, the need for communities that can nurture, inform, and challenge these technologies becomes paramount.
We're not here for this! Stop hammering us with stuff we don't want. It's abuse.
These platforms will not only offer the necessary guidance to refine AI algorithms and models but also serve as a space for healthy debate and exchange of ideas, fostering the spirit of innovation and pushing the boundaries of what AI can accomplish.
So in essence, you want to use us as free labour to develop a product you'll make lots of money from, while we get nothing in return; only burnout.
We want to be able to continue to invest back into our community, and that’s why we’re also exploring what data monetization looks like in the future. LLMs are trained off Stack Overflow data, which our massive community has contributed to for nearly 15 years. We should be compensated for that data so we can continue to invest back in our community. This also boils down to proper attribution. Our data represents significant effort and hard work stored across our site and LLMs are taking those insights and not attributing to the source of that knowledge. This is about protecting the interest in the content that our users have created as much as it is the fight for data quality and AI advancement.
I agree, but I don't believe you anymore.
Then far down, outside the scope of AI:
The trust of our community is as essential as the trust in the data it produces
So stop abusing that trust. Stop shredding it. Work with us, not against us.
I do admire the steadfast determination of our community members to continue explaining and presenting things nicely, but I'm surprised many haven't just given up. SE are clearly not listening, and have no idea what they're doing. I feel abused, and that makes me angry and unwilling to participate in respectful discourse with them; I fail to see them offering it either. Politeness does not seem to have worked so far; we only get overrun with more and more community-hostile content. So excuse my harsh and angry response.
This is the second blog post where the CEO talks about AI and Community.
It is beyond obvious that AI is used as a buzzword, because AI is a hot topic at the moment. Combined with other recent actions, like What is the network policy regarding AI Generated content?, which will have a negative impact on the community and therefore on the sites in the SE network, I am pretty sure that the Community is also used by the CEO as a buzzword and nothing more.
The CEO does not show any intention of interacting with the Community, not even in the most superficial way (the least he could do would be to post the major announcements himself here); he definitely does not listen to the Community, and he is not even close to having a minimal understanding of it. Combining all that, he is definitely not part of the Community, nor does he have the Community's trust.
Pretty soon the CEO will find out that being the CEO of a company that is based on community contributions, while ignoring that same community, does not work well. The Community is not just a buzzword he can use as he pleases, and the Community has the means to strike back.
But most significantly, we accelerated our AI efforts internally and look forward to sharing more this summer.
I'm just confused. The community is trying to ban AI, so why is the org pushing for AI?
We have the domain focus and community trust to responsibly deploy generative AI to improve our existing suite of products and lead to new solutions and revenue streams.
This feels like Animal Farm, where the pigs (the ruling class) are surreptitiously changing the rules and brainwashing the citizens. Plus, the community wants the opposite, so that is ________*
Stack Overflow for Teams, our enterprise, private version of Stack Overflow
what really sets us apart is the strength of the community.
Why get Stack Overflow for Teams if it doesn't have the same strong community as regular SO?
I have no idea what the company named Stack Overflow is trying to do.
AFAIK, this blog post wasn't also posted to Meta, so I'm starting a question for my feedback.
In the last quarter of our fiscal year [...] that meant announcing the availability of Stack Overflow for Teams in the Microsoft Azure marketplace while launching Topic Collectives and Staging Ground Beta 2 on our public platform. But most significantly, we accelerated our AI efforts internally and look forward to sharing more this summer. I’m excited to see how we leverage our domain focus and special community-driven blend of trust, recognition, and accuracy to GenAI.
Ehh... I like the concept of the SG. I really do, and I think it has a lot of potential. But I'm not convinced that "our AI efforts" are really the most significant thing you've done that benefits the community. Possibly (I think dupe finding has potential), but otherwise, no.
Our vision for community and AI coming together means the rise of GenAI is a big opportunity for Stack. Approximately 10% of our company is working on features [...] leveraging GenAI that have the potential to increase engagement within our public community and add value to customers of our SaaS product, Stack Overflow for Teams.
I don't like that. I'm not convinced it increases engagement (thanks for not sharing any data earlier, by the way). Also... please consult the community (not just mods, but MSE) before you do big AI things. I'm aware that something is coming in the summer, but it would be nice to know what it is.
We believe GenAI can be a similar competitive advantage for Stack Overflow. We have the domain focus and community trust to responsibly deploy generative AI to improve our existing suite of products and lead to new solutions and revenue streams
I really don't like that statement. Especially "We have the [...] community trust". I, and I'm guessing many others, don't really believe that. Somewhat harsh, but I think it's true. Sorry.
We want to be able to continue to invest back into our community, and that’s why we’re also exploring what data monetization looks like in the future. LLMs are trained off Stack Overflow data, which our massive community has contributed to for nearly 15 years. We should be compensated for that data so we can continue to invest back in our community. This also boils down to proper attribution. Our data represents significant effort and hard work stored across our site and LLMs are taking those insights and not attributing to the source of that knowledge. This is about protecting the interest in the content that our users have created as much as it is the fight for data quality and AI advancement.
Please define how you're going to "invest back into our community", specifically. Otherwise, it reads like you want to be compensated for our content.
This paragraph surprised me given the development of the title suggestion feature, which I strongly suspect is built off a third party LLM:
We want to be able to continue to invest back into our community, and that’s why we’re also exploring what data monetization looks like in the future. LLMs are trained off Stack Overflow data, which our massive community has contributed to for nearly 15 years. We should be compensated for that data so we can continue to invest back in our community. This also boils down to proper attribution. Our data represents significant effort and hard work stored across our site and LLMs are taking those insights and not attributing to the source of that knowledge. This is about protecting the interest in the content that our users have created as much as it is the fight for data quality and AI advancement.
So if SE views the training of LLMs on SE user content (which is relatively openly licensed under CC BY-SA) as legally/ethically suspect due to violating the attribution clause, why is SE moving forward on utilizing LLMs trained on massive, opaque datasets? Surely every copyright owner whose work was fed into the LLM has at least as much cause to "explore data monetization" (a euphemism for "recoup the stolen value of their work"), even more so if their works weren't openly licensed. This would destroy the economic viability of LLMs trained in this manner.
So which is it - is SE hoping that the LLM business model is disrupted by training data monetization? Is it hoping that the status quo is maintained so that SE can use those LLMs for feature development? Is it cynically fencesitting until it becomes more clear which way the legal landscape will move? Or is it hypocritically hoping to have its cake and eat it too, to use its corporate resources to cash in on LLM training data, while smaller rightsholders are unable to fight for their compensation?
After reading that, it sure feels like the Teams communities trump this one, despite the fact that this is the one that forms the backbone of curation across the network and effectively allows it to function.
I don't want to be entitled or arrogant; it's their platform... but the community-focused model of Stack Exchange sours greatly when the company no longer sees the curating community as valuable for anything beyond consulting work for Staging Ground, which is largely the sense I get from this blog.
Despite the 25+ references in the post to "community" and its importance, Stack's first and arguably most important community feels pretty abused and frustrated right now, and we've seen little sign that the Company cares about that whatsoever.
I'm not as cynical as many of the other folks reacting to recent events, but it's getting more difficult not to be, especially if the CEO, captain of Stack's corporate ship, truly believes that his final paragraph accurately characterizes the face of Stack Exchange this week.
A healthy community is essential to our company mission and higher purpose of empowering the world to develop technology through collective knowledge. We will never lose sight of how important, impactful, and unique it is. The trust of our community is as essential as the trust in the data it produces. ...
The one thing I won't take is
special community-driven blend of trust
SE, you have no trust. No one likes the AI changes. No one likes your stupid vote arrow changes either. You pretend to ask for and care about our feedback, but you make no changes based on what we think. No one wants to become a content generator for an AI. That's not why we're here. You don't have trust; you have a massive outrage. From a popular meme: Who? Who asked [for any of this]? As I said in a comment over on the vote arrow post:
No one wants this, no one asked for this, no one finds this useful, no one can confirm your data. Everyone wants other things, everyone sees the true problems (like Area 51, for example), everyone asks for things that everyone wants but they get declined without (good) reason, and nearly everyone is upset about Stack Exchange's management.