Timeline for answer to "Why do larger language models still fail on simple compositional reasoning tasks?" by adsp42
Current License: CC BY-SA 4.0
Post Revisions
3 events
| when | what | by | license | comment |
|---|---|---|---|---|
| 14 hours ago | comment added | minnmass | | I've frequently seen GenAI described as "really fancy predictive text". Keeping that in mind has been quite helpful whenever I've encountered a "why did the LLM do the silly thing?" scenario. |
| 14 hours ago | comment added | JimmyJames | | A useful distinction I've heard is that 'LLMs don't reason, they rationalize'. That is, when an LLM is prompted to 'explain your reasoning', it produces a new generative output conditioned on the answer it already gave. There's no understanding of the problem to explain; it's just more token prediction. |
| yesterday | history answered | adsp42 | CC BY-SA 4.0 | |
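The mechanism JimmyJames describes can be sketched with a toy stand-in for a token predictor. Everything here is hypothetical (no real LLM API is used); the point is only that the "explanation" is a second generation over a longer context, not an inspection of any stored reasoning:

```python
def toy_generate(context: str) -> str:
    """Stand-in for next-token prediction: output depends only on the context string."""
    if "explain your reasoning" in context.lower():
        # The "explanation" is produced after the fact, conditioned on the answer text.
        return "I compared the quantities step by step."  # plausible-sounding, not a trace
    return "The answer is 42."

# First pass: the model produces an answer.
context = "Q: What is six times seven? A:"
answer = toy_generate(context)

# Second pass: asking "why" just appends to the context and samples again.
context += " " + answer + " Explain your reasoning:"
explanation = toy_generate(context)

print(answer)       # the original answer
print(explanation)  # a fresh generative output, not a recovered thought process
```

Both calls go through the same prediction function; the model never consults an internal record of how the first answer was produced.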