
Questions tagged [reasoning]

5 votes
1 answer
857 views

Large language models often perform impressively on benchmark tasks, coding, and natural language generation, but they can still fail on reasoning problems that seem simple for humans, especially when ...
Avalon Brooks
1 vote
1 answer
62 views

Large language models often perform well on single-step tasks but sometimes fail when reasoning requires multiple logical steps. Is this limitation mainly due to training data patterns, model ...
Avalon Brooks
0 votes
1 answer
26 views

I am a student who has recently started learning about artificial intelligence and reasoning systems, so I apologize in advance if this question is already well known in the literature. Many modern ...
Sagar P.
1 vote
0 answers
98 views

I am looking for academic references or frameworks that provide a formal distinction between Large Reasoning Models (LRMs)—like DeepSeek-R1 or the OpenAI o-series—and standard instruction-following ...
Humberto José Bortolossi
3 votes
2 answers
223 views

Transformers don’t use formal logic, yet models like GPT can handle multi-step reasoning questions. What mechanisms inside the network allow this kind of emergent logic without explicit symbolic ...
Anushka_Grace Chattopadhyay
0 votes
0 answers
27 views

I'm working on a framework for formal, auditable reasoning built around composable structures I call Reasoning DNA Units (RDUs). Each RDU represents a minimal, verifiable unit of reasoning, and can be ...
eric
0 votes
0 answers
83 views

I'm working on a formal protocol called FPC v2.1 for verifying AI reasoning integrity. As a system analyst (not a professional AI researcher), I want to validate whether this approach has fundamental ...
AIDoctrine
6 votes
6 answers
3k views

Humans are not born knowing how to reason. We develop it gradually through our individual subjective experiences, social interaction, and contextual learning over many years. AI systems, however, are ...
Nascent
1 vote
0 answers
40 views

Given a finite computational budget $C$, suppose we want to learn an optimal policy $\Pi$ for large language models that dynamically decides between two inference-time reasoning strategies: deepening ...
user1666769
0 votes
0 answers
89 views

I've been exploring how prompt structures interact with the embedding space of large language models. While testing a minimal .txt-based reasoning file across six different models (ChatGPT, Claude, ...
PSBigBig
4 votes
1 answer
168 views

For simplicity, let’s focus on knowledge reasoning tasks with Yes/No answers. According to learning theory, even moderately complex knowledge reasoning tasks are PAC-unlearnable. This implies that no ...
nova
2 votes
1 answer
768 views

As a narrowing-in on the question How does DeepSeek-R1 perform its "reasoning" part exactly?, how exactly does the <think> step generation work? ...
Lance Pollard
1 vote
2 answers
906 views

I wrote up my understanding for how LLMs generate text responses to text prompts (at a somewhat practical yet high level), focusing on example numerical vectors and how they are transformed at each ...
Lance Pollard
0 votes
2 answers
525 views

Posing a technical question to a reasoning LLM may elicit a series of "thinking-like" sentences. For example, Suppose this model is already pre-fossilized into its 600 billion weight values....
James
1 vote
1 answer
474 views

OpenAI seems to be avoiding branding their reasoning models as "GPTs". See, for example, this page from their API docs, which has one column for "GPT models" and another for "...
kuzzooroo