-18

In the past, people searched for help on Stack Exchange sites. Since AI models are trained on the information from these Q&A pages, and people are asking AI apps directly, are Stack Exchange sites approaching death? Or are there AI agents asking and answering questions on Stack Exchange sites?

3
  • 4
    re "are there like AI agents asking and answering": they tried that, it didn't work well Commented Aug 20 at 23:40
  • 7
    Please research reasonably before considering posting. Please give exactly 1 (clear specific researched non-duplicate) question/topic. Commented Aug 21 at 0:48
  • 2
It's a good question. Technological progress can be disruptive, but nobody knows the future exactly. So maybe yes, maybe no. But you describe the fears people have had since 2022 very well. I think a lot of the reactions here are much easier to understand if you assume that people really do think it's possible that SE will be killed. Commented Aug 21 at 4:59

2 Answers

15

Strictly speaking, this depends on what you define as "death". SE sites have already stomached a 10x-20x reduction in key metrics and still limp on happily. Prominent tech companies were never all that much about real revenue anyway, as GenAI companies are ironically demonstrating right now.
But SE certainly has been dealt a severe blow ever since GenAI showed up.

Still, I don't think the problem is really that people ask GenAI instead of SE per se. Rather, it's that the mindset and expectations around answers have radically shifted with GenAI. People by and large now expect quick answers, confident answers, personalised answers, friendly answers, follow-up answers, and so on.
Even if GenAI magically goes away, or gets expensive, or whatever, that mindset will stay for a while.

SE cannot deliver that, because ultimately its answers come from volunteers, and they don't want to deliver that. Certainly not by sacrificing their own free time for people who treat them like disposable Q&A bots.

2
  • Regarding people expecting friendly answers now: my impression is that they were always expecting that, not only now. And somehow I feel LLMs have put up a mirror there, and whoever looks into it may not be so happy about what they see. LLMs are slightly better than human answers in that little respect. Commented Aug 21 at 9:19
  • 9
    @NoDataDumpNoContribution Well, my impression was that people expected friendly answers to friendly questions (and rightly so). Now they expect friendly answers no matter what. Most GenAI bots will sugarcoat answers to even the most inane ideas. Commented Aug 21 at 10:14
2

The future is always difficult to predict. AI in its current form is just a label; it's not yet intelligent as we traditionally understand it. It's also a hype, so the bubble may burst at some point. And it's incredibly energy-intensive: the other day I asked the latest ChatGPT version a relatively simple question and it thought about it for 10 seconds (it said), which must have cost a lot of energy. It's also notoriously bad at giving attributions or providing exhaustive primary sources, and that won't change anytime soon.

But it's also a really good search/answering engine, much better than what we had in the past in cases where approximate knowledge is fine. It's very useful as a sophisticated autocompletion tool. It can solve tasks that could not be solved before. In short, it's useful and it's going to stay. There is no way we will ever go back to the old ways.

And it's disruptive. It completely disrupted the marketplace here of people with questions and people with answers. We don't really know yet how that story will be told in the future: whether askers simply didn't want to wait for human answers anymore, because LLMs give nearly instant responses and are also helpful for new problems, or whether domain experts simply didn't want to be mere anonymous suppliers of data for big corporations that make tons of money off it. For example, experts could answer the many old, unanswered questions we have, but it seems to me they don't. Both parties vanished. New activity is very low, about 20 times lower than at its peak and still falling, so to a first approximation one can speak of a frozen state, a complete halt. And once this community is destroyed, it would probably take a Herculean effort to recreate it. That is the downside of LLMs: they destroy their data source. Some sort of cannibalization effect.

And since we cannot turn back technological progress, I would primarily frame future discussions around the question: how can we continue to generate new knowledge and let all people profit from it equally? Franck is asking the right questions, the answers to which will move us forward. Unfortunately there aren't many good answers at the moment; we simply have to wait until some way forward emerges. I personally think it will only work if LLM providers split their profits and give a large part to their data sources, because we are way past anything that volunteers could deliver. But who knows. We will find out how exactly it plays out in the future. See you there.
