Questions tagged [reasoning]
The reasoning tag has no summary.
33 questions
5 votes · 1 answer · 857 views
Why do larger language models still fail on simple compositional reasoning tasks?
Large language models often perform impressively on benchmark tasks, coding, and natural language generation, but they can still fail on reasoning problems that seem simple for humans, especially when ...
1 vote · 1 answer · 62 views
Why do large language models struggle with consistent multi-step reasoning?
Large language models often perform well on single-step tasks but sometimes fail when reasoning requires multiple logical steps.
Is this limitation mainly due to training data patterns, model ...
0 votes · 1 answer · 26 views
Are there AI architectures where one model generates reasoning and another model verifies or monitors the reasoning process?
I am a student who has recently started learning about artificial intelligence and reasoning systems, so I apologize in advance if this question is already well known in the literature.
Many modern ...
1 vote · 0 answers · 98 views
How to distinguish "Reasoning Models" (LRMs) from standard LLMs, beyond token count and CoT?
I am looking for academic references or frameworks that provide a formal distinction between Large Reasoning Models (LRMs)—like DeepSeek-R1 or the OpenAI o-series—and standard instruction-following ...
3 votes · 2 answers · 223 views
If large language models don’t reason symbolically, how can they still follow logical chains in text?
Transformers don’t use formal logic, yet models like GPT can handle multi-step reasoning questions. What mechanisms inside the network allow this kind of emergent logic without explicit symbolic ...
0 votes · 0 answers · 27 views
How to design a DSL for composable, auditable reasoning in symbolic AI?
I'm working on a framework for formal, auditable reasoning built around composable structures I call Reasoning DNA Units (RDUs). Each RDU represents a minimal, verifiable unit of reasoning, and can be ...
0 votes · 0 answers · 83 views
Is this formal approach to verifying AI reasoning consistency methodologically sound?
I'm working on a formal protocol called FPC v2.1 for verifying AI reasoning integrity. As a system analyst (not a professional AI researcher), I want to validate whether this approach has fundamental ...
6 votes · 6 answers · 3k views
Why do we expect AI to reason instantly when humans require years of lived experience?
Humans are not born knowing how to reason. We develop it gradually through our individual subjective experiences, social interaction, and contextual learning over many years.
AI systems, however, are ...
1 vote · 0 answers · 40 views
Optimal Hybrid Search for LLM Reasoning
Given a finite computational budget $C$, suppose we want to learn an optimal policy $\Pi$ for large language models that dynamically decides between two inference-time reasoning strategies: deepening ...
0 votes · 0 answers · 89 views
Are there hidden “forces” in LLM embedding space? I observed a consistent semantic refraction effect
I've been exploring how prompt structures interact with the embedding space of large language models.
While testing a minimal .txt-based reasoning file across six different models (ChatGPT, Claude, ...
4 votes · 1 answer · 168 views
Is PAC-unlearnability a fundamental limitation for LLM reasoning?
For simplicity, let’s focus on knowledge reasoning tasks with Yes/No answers. According to learning theory, even moderately complex knowledge reasoning tasks are PAC-unlearnable. This implies that no ...
2 votes · 1 answer · 768 views
How exactly are the <think> steps generated in DeepSeek-R1?
As a narrowing-in on the question How does DeepSeek-R1 perform its "reasoning" part exactly?, how exactly does the <think> step generation work? ...
1 vote · 2 answers · 906 views
How does DeepSeek-R1 perform its "reasoning" part exactly?
I wrote up my understanding for how LLMs generate text responses to text prompts (at a somewhat practical yet high level), focusing on example numerical vectors and how they are transformed at each ...
0 votes · 2 answers · 525 views
How much reasoning is true reasoning in LLM reasoning models?
Posing a technical question to a reasoning LLM may elicit a series of "thinking-like" sentences. For example,
Suppose this model is already pre-fossilized into its 600 billion weight values....
1 vote · 1 answer · 474 views
Are the newer OpenAI models such as o1 not generative pretrained transformers?
OpenAI seems to be avoiding branding their reasoning models as "GPTs". See, for example, this page from their API docs, which has one column for "GPT models" and another for "...