What would need to be in place for you to feel comfortable seeing Answer Assistant implemented as a controlled experiment in your Stack Exchange community?
For me to be OK with this "answer assistant" experiment, it would need to actually be an assistant and not a repackaged LLM pretending it knows things.

First, it wouldn't generate an answer; it would help me quickly write a good answer based on my expert knowledge. Some ideas of how it could do that:
- Help me find related questions on the network
- Summarize all of the comments on the question and its answers
- Help me find good supporting sources for my answer
- Format the answer according to the community norms of the site
- Suggest ways to make the phrasing of the answer clearer, expand thoughts into full sentences, and otherwise wordsmith my content
- Warn me that the question may be off-topic or a duplicate
A true answer assistant would encourage humans to write answers by making it easier to write them well. This AI-generated-answer experiment instead pushes humans into the tedious work of reinforcement-training an AI while taking away the more rewarding activity of helping people by answering their questions.
Unanswered questions have traditionally been a way for new users to start getting involved with the site. Having AI take that opportunity away seems counter to the goal "... to build and support a healthy ecosystem of active users and community contributors." Unless, of course, the community the company is trying to build is one willing to curate data for, and reinforcement-train, an AI without compensation.