- "LLMs can appear to give attributions for their utterances, but these are fakes" – LLMs can also summarise text, so they could very well be summarising what's said on some page, rather than finding one that matches after the fact. AI agent architecture already allows for this. Though I wouldn't be able to say how any particular LLM generates any particular response (it's fair to say most of e.g. ChatGPT isn't doing this). – NotThatGuy, Feb 5, 2025 at 3:20
- @NotThatGuy When ChatGPT summarises, it actually does nothing of the kind – Gerben Wierda, May 27, 2024. – PM 2Ring, Feb 5, 2025 at 3:29
- A bit of a misleading title, that. The article says ChatGPT shortens the text instead of summarising it. That's different, sure, but it's far from "nothing of the kind". The article doesn't support your claim that its attributions are "fake" – closer to the opposite. Also, "an LLM is bad at this thing" is not much of a story. LLMs are/were bad at lots of things, but they keep being made better and better. It's questionable to call the difference between shortening based on length vs. shortening based on importance "fundamental" – the mention of "understanding" sounds like an appeal to human exceptionalism. – NotThatGuy, Feb 5, 2025 at 3:46
- @NotThatGuy My point is that human students can maintain a chain of trust with the sources they study, building on a network of authoritative expertise. But current GenAI systems cannot maintain those chains. OK, a GenAI program may be able to use pattern matching to find sources that look relevant to some utterance it has synthesised, which is better than nothing, I guess. But it's not the same as carefully building on top of a foundation of trusted works. – PM 2Ring, Feb 5, 2025 at 4:12
- If that's the closest match from the sources the AI was actually trained on, it seems good enough. Attribution would not need to be better than the output. – SmallSoft, Feb 6, 2025 at 6:59
- My cynical prediction is that SE staff will continue to assure us that attribution is "non-negotiable" and "very important" and "a high priority" and "something we really want to maintain" until they finally get around to admitting that their AI "partner" has no idea how to implement attribution, and less than zero interest in bothering, because giving credit to the people curating its dataset isn't going to make them money. At which point SE will quietly drop the requirement, tell us they really wanted to, but... and we just have to take it. – Shadur-don't-feed-the-AI, Feb 7, 2025 at 11:38
- @Shadur-don't-feed-the-AI Why even be cynical about it? It fits the SE playbook to a T. They start something, deliver a half-finished project, then abandon it. But since it's in production, it stays there. Remember the notifications rework, for example? The last major change they made was to break what wasn't working even further. So the same thing will happen with the answer bot: they'll push it to production, with many reassurances about how it's an initial version and they'll keep working on it. Then they'll stop after fixing a few inconsequential issues. – VLAZ, Feb 7, 2025 at 11:41
- "The GenAI system simply generates an utterance and then does a search on a relevant data pool (eg, the whole Internet, or the Stack Exchange network), looking for close matches to the utterance, and then claims that those matches are its sources" – why does that remind me of grad students in a hurry to finish their paper draft? – gerrit, Feb 11, 2025 at 8:56
- @SmallSoft Well, I guess it's better than nothing. ;) It is good to cite trustworthy references that support your claims. But that's no substitute for saying "here are the previous works that were the actual foundations of my new work". – PM 2Ring, Feb 11, 2025 at 9:44