Timeline for answer to "AI-generated Answers experiment on Stack Exchange sites that volunteered to participate" by PM 2Ring
Current License: CC BY-SA 4.0
Post Revisions
10 events
| when | what | action | by | comment | license |
|---|---|---|---|---|---|
| Feb 11, 2025 at 9:44 | comment | added | PM 2Ring | @SmallSoft Well, I guess it's better than nothing. ;) It is good to cite trustworthy references that support your claims. But that's no substitute for saying "here are the previous works that were the actual foundations of my new work". | |
| Feb 11, 2025 at 8:56 | comment | added | gerrit | "The GenAI system simply generates an utterance and then does a search on a relevant data pool (e.g., the whole Internet, or the Stack Exchange network), looking for close matches to the utterance, and then claims that those matches are its sources" ... why does that remind me of grad students in a hurry to finish their paper draft? | |
| Feb 7, 2025 at 11:41 | comment | added | VLAZ | @Shadur-don't-feed-the-AI why even be cynical about it? It fits the SE playbook to a T. They start something, deliver a half-finished project, then abandon it. But since it's in production, it stays there. Remember the notifications rework, for example? The last major change they did was to break what wasn't working further. So, the same thing will happen with the answer bot - they'll push it to production, with many reassurances about how it's an initial version and they'll work on it. Then stop after fixing a few inconsequential issues. | |
| Feb 7, 2025 at 11:38 | comment | added | Shadur-don't-feed-the-AI | My cynical prediction is that SE staff will continue to assure us that attribution is "non-negotiable" and "very important" and "a high priority" and "something we really want to maintain" until they finally get around to admitting that their AI "partner" has no idea how to implement attribution and less than zero interest in bothering because giving credit to the people curating its dataset isn't going to make them money, at which point SE will quietly drop the requirement and tell us they really wanted to, but... and we just have to take it. | |
| Feb 6, 2025 at 6:59 | comment | added | SmallSoft | If that's the closest match from the sources the AI was actually trained on, it seems good enough. Attribution would not need to be better than the output. | |
| Feb 5, 2025 at 4:12 | comment | added | PM 2Ring | @NotThatGuy My point is that human students can maintain a chain of trust with the sources they study, building on a network of authoritative expertise. But current GenAI systems cannot maintain those chains. OK, a GenAI program may be able to use pattern matching to find sources that look relevant to some utterance that it's synthesised, which is better than nothing, I guess. But it's not the same as carefully building on top of a foundation of trusted works. | |
| Feb 5, 2025 at 3:46 | comment | added | NotThatGuy | A bit of a misleading title, that. The article says ChatGPT shortens the text instead of summarising it. That's different, sure, but it's far from "nothing of the kind". The article doesn't support your claim that its attributions are "fake" - closer to the opposite. Also, "an LLM is bad at this thing" is not much of a story. LLMs are/were bad at lots of things, but they keep being made better and better. It's questionable to call shortening based on length vs. importance a "fundamental difference" - the mention of "understanding" sounds like an appeal to human exceptionalism. | |
| Feb 5, 2025 at 3:29 | comment | added | PM 2Ring | @NotThatGuy *When ChatGPT summarises, it actually does nothing of the kind* - Gerben Wierda, May 27, 2024. | |
| Feb 5, 2025 at 3:20 | comment | added | NotThatGuy | "LLMs can appear to give attributions for their utterances, but these are fakes" - LLMs can also summarise text, so they could very well be summarising what's said on some page, rather than finding one that matches after the fact. AI agent architecture already allows for this. Although I wouldn't be able to say how any particular LLM generates any particular response (it's fair to say most of e.g. ChatGPT isn't doing this). | |
| Feb 5, 2025 at 2:16 | history | answered | PM 2Ring | | CC BY-SA 4.0 |