  • "LLMs can appear to give attributions for their utterances, but these are fakes" - LLMs can also summarise text, so they could very well be summarising what's said on some page, rather than finding one that matches after the fact. AI agent architecture already allows for this. Although I wouldn't be able to say how any particular LLM generates any particular response (it's fair to say most of what e.g. ChatGPT does isn't this). Commented Feb 5, 2025 at 3:20
  • 8
    @NotThatGuy When ChatGPT summarises, it actually does nothing of the kind - Gerben Wierda, May 27 2024. Commented Feb 5, 2025 at 3:29
  • 3
    A bit of a misleading title, that. The article says ChatGPT shortens the text instead of summarising it. That's different, sure, but it's far from "nothing of the kind". The article doesn't support your claim that it's attributions are "fake" - closer to the opposite. Also, "an LLM is bad at this thing" is not much of a story. LLMs are/were bad at lots of things, but they keep being made better and better. It's questionable to call that a "fundamental difference" between shortening based on length vs importance - the mention of "understanding" sounds like an appeal to human exceptionalism. Commented Feb 5, 2025 at 3:46
  • 8
    @NotThatGuy My point is that a human students can maintain a chain of trust with the sources they study, building on a network of authoritative expertise. But current GenAI systems cannot maintain those chains. OK, a GenAI program may be able to use pattern matching to find sources that look relevant to some utterance that it's synthesised, which is better than nothing, I guess. But it's not the same as carefully building on top of a foundation of trusted works. Commented Feb 5, 2025 at 4:12
  • If that's the closest match from the sources the AI was actually trained on, it seems good enough. Attribution would not need to be better than the output. Commented Feb 6, 2025 at 6:59
  • 2
    My cynical prediction is that SE staff will continue to assure us that attribution is "non-negotiable" and "very important" and "a high priority" and "something we really want to maintain" until they finally get around to admitting that their AI "partner" has no idea how to implement attribution and less than zero interest in bothering because giving credit to the people curating its dataset isn't going to make them money, at which point SE will quietly drop the requirement and tell us they really wanted to, but... and we just have to take it. Commented Feb 7, 2025 at 11:38
  • 1
    @Shadur-don't-feed-the-AI why even be cynical about it? It fits the SE playbook to a T. They start something, deliver a half-finished project, then abandon it. But since it's in production, it stays there. Remember the notifications rework, for example? The last major change they did was to break what wasn't working further. So, same thing will happen with the answer bot - they'll push it to production, with many reassurances how it's initial version and they'll work on it. Then stop after fixing few inconsequential issues. Commented Feb 7, 2025 at 11:41
  • 2
    The GenAI system simply generates an utterance and then does a search on a relevant data pool (eg, the whole Internet, or the Stack Exchange network), looking for close matches to the utterance, and then claims that those matches are its sources why does that remind me of grad students in a hurry to finish their paper draft? Commented Feb 11, 2025 at 8:56
  • @SmallSoft Well, I guess it's better than nothing. ;) It is good to cite trustworthy references that support your claims. But that's no substitute for saying "here are the previous works that were the actual foundations of my new work". Commented Feb 11, 2025 at 9:44
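A minimal sketch of the post-hoc attribution pattern these comments describe: the system generates text first, then searches a corpus for the closest match and presents that match as its "source". The corpus, document IDs, and bag-of-words cosine similarity below are illustrative assumptions, not any real vendor's pipeline:

```python
# Hypothetical post-hoc attribution: find the best match AFTER generation,
# rather than tracking which sources actually informed the output.
import math
import re
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of simple bag-of-words vectors (toy stand-in
    for whatever embedding/search a real system might use)."""
    va = Counter(re.findall(r"[a-z0-9]+", a.lower()))
    vb = Counter(re.findall(r"[a-z0-9]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def attribute_after_the_fact(utterance: str, corpus: dict[str, str]) -> str:
    """Return the ID of the corpus document closest to an already-generated
    utterance - a lookalike, not a provenance chain."""
    return max(corpus, key=lambda doc_id: cosine_similarity(utterance, corpus[doc_id]))

# Tiny illustrative corpus (made-up post IDs).
corpus = {
    "post-123": "Use a virtual environment to isolate project dependencies.",
    "post-456": "Garbage collection frees memory that is no longer reachable.",
}
utterance = "You should isolate dependencies with a virtual environment."
print(attribute_after_the_fact(utterance, corpus))  # -> post-123
```

The point of the sketch is that the "source" is chosen by surface similarity to the finished utterance, which is exactly the chain-of-trust gap raised above: a close match found afterwards is not evidence that the document was the foundation of the output.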