Toward ‘Theory of Reasoning’ Reasoning Architectures
A cognitive-scientific framework for unifying reasoning architectures under a theory of reasoning
Introduction
Recent years have seen the rise of reasoning architectures—frameworks such as Chain-of-Thought (CoT), Tree-of-Thoughts (ToT), and their successors—that enable large models to reason through structured intermediate steps. Yet despite impressive empirical results, these systems remain heuristic and cognitively ungrounded.
This white paper introduces Theory-of-Reasoning Reasoning Architectures (ToR-RAs), a programme to formalise the cognitive architecture of reasoning itself: how coherence, metacognitive control, and resource allocation jointly give rise to structured thought.
Why This Work
Reasoning architectures today resemble early computing before Turing—powerful, but lacking a theory of what they actually are. Just as the theory of computation defined ‘what it means to compute’, a theory of reasoning should define ‘what it means to reason’.
The ToR-RA framework treats reasoning as a cognitive architecture composed of interacting subsystems for inference, evaluation, and metareasoning, all operating under bounded resources. The aim is not merely to engineer better reasoning algorithms, but to found a science of reasoning architectures that links cognitive-scientific principles with computational design.
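To make the three-subsystem picture concrete, here is a minimal sketch of how inference, evaluation, and metareasoning might interact under a cognitive budget. The class and function names, interfaces, and the budget parameter are illustrative assumptions for this white paper's framing, not an API defined by the ToR-RA framework itself.

```python
# Hypothetical sketch: an inference subsystem proposes candidate steps, an
# evaluation subsystem scores them, and a metareasoning subsystem decides
# whether continuing is worth the remaining cognitive budget.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ReasoningState:
    steps: List[str] = field(default_factory=list)  # the reasoning trace so far
    budget: float = 10.0                            # remaining cognitive resources


def reason(state: ReasoningState,
           infer: Callable[[ReasoningState], List[str]],
           evaluate: Callable[[str], float],
           step_cost: float = 1.0) -> ReasoningState:
    """Run inference and evaluation under metareasoning control until the
    budget is exhausted or no candidate step is judged worth its cost."""
    while state.budget >= step_cost:
        candidates = infer(state)                    # inference subsystem
        if not candidates:
            break
        best = max(candidates, key=evaluate)         # evaluation subsystem
        # Metareasoning subsystem: continue only if the scored progress of
        # the best candidate exceeds its marginal cognitive cost.
        if evaluate(best) <= step_cost:
            break
        state.steps.append(best)
        state.budget -= step_cost
    return state
```

The stopping rule here, taking a step only when its expected progress exceeds its marginal cost, is one way the Economy invariant described in the next section could be realised.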
Conceptual Foundations
A reasoning architecture, as defined in the paper, satisfies four invariants:
Coherence: Local inferences support global consistency.
Causality of thought: Representational changes have identifiable precursors.
Introspectability: The system can form meta-representations of its own reasoning.
Economy: Inference optimises progress per unit of cognitive cost.
These invariants align with long-standing frameworks in cognitive science—Marr’s three levels of analysis, rational metareasoning, and dual-process models—and together they provide a bridge between biological and artificial reasoning.
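One way to read the Economy invariant through the lens of rational metareasoning is as a value-of-computation criterion: the architecture performs the internal computation whose expected improvement in decision quality, net of its cognitive cost, is greatest, and stops reasoning once no computation is expected to pay for itself. The notation below is introduced here for illustration and is not drawn from the paper.

```latex
% Illustrative value-of-computation criterion (notation assumed):
% U(a)   = utility of acting on the current best conclusion a
% C(c)   = cognitive cost of a candidate internal computation c
% a_c    = the conclusion reached after performing c
\[
  \mathrm{VOC}(c) \;=\; \mathbb{E}\!\left[\,U(a_c)\,\right] \;-\; U(a) \;-\; C(c),
  \qquad
  c^{*} \;=\; \arg\max_{c} \mathrm{VOC}(c),
  \qquad
  \text{stop when } \max_{c} \mathrm{VOC}(c) \le 0 .
\]
```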
Relation to Prior Work
ToR-RAs extend my earlier work on reasoning architectures:
Lateral Tree-of-Thoughts (LToT) — reasoning breadth and cognitive economy.
Natural Language Edge Labelling (NLEL) — semantic self-instruction and metacognitive control.
Both suggested that reasoning architectures could serve as models of reasoning rather than merely as engineering tools. ToR-RAs make that synthesis explicit.
Significance
ToR-RAs aim to unify cognitive science and reasoning-architecture research under a shared theoretical language, integrating interpretability, efficiency, and scientific clarity.
They represent a step toward understanding reasoning not as a black-box capability but as an architectural phenomenon—something that can be studied, designed, and improved with the same rigour once brought to computation itself.
Abhinav Madahar · अभिनव ਮਦਾਹਰ
Independent Computer Scientist
abhinavmadahar.com | abhinav@abhinavmadahar.com

