Research Statement

Abhinav Madahar · अभिनव ਮਦਾਹਰ
October 27, 2025 CE

My research focuses on the discovery of artificial general intelligence (AGI) and on ensuring that its discovery and creation have a prosocial impact.

AGI. Artificial general intelligence (AGI) refers to an artificial system capable of performing any cognitive task that a human being can, across domains, without domain-specific retraining. It exhibits generalizable reasoning, adaptive learning, and reflective self-improvement, enabling it to transfer knowledge and strategies across distinct tasks and modalities.

Prosociality. Prosociality refers to the orientation of an agent’s actions and outcomes toward the flourishing, well-being, and equitable thriving of sentient beings. In the context of AGI, prosociality denotes the design and governance of general intelligence such that its emergence, deployment, and long-term trajectory contribute positively to collective welfare, minimize harm, and reinforce cooperative structures of civilization.


Track I — Advancing Reasoning Architectures

The contemporary field of reasoning architectures investigates how large-scale artificial systems can perform structured, interpretable reasoning rather than pattern-based association alone. Current research encompasses frameworks such as chain-of-thought (CoT), tree-of-thoughts (ToT), and their successors, which attempt to model multi-step inference, hypothesis revision, and evaluative reflection. These architectures aim to improve logical consistency, generalization, and the alignment between model reasoning processes and human-understandable rational structure. My work is situated within this domain; at present, my ongoing research focuses on Theory-of-Reasoning Reasoning Architectures (ToR-RAs, described below), seeking to extend these architectures toward greater efficacy, coherence, and general intelligence.

Contributions to date

  • Lateral Tree-of-Thoughts (LToT) — completed. Introduces a search-time controller that separates logically consistent, low-utility candidates (laterals) from high-utility exploitation paths (mainlines). It converts large inference budgets into productive reasoning breadth by preserving logically coherent alternatives and racing them with short, compute-bounded probes. This architecture mitigates breadth saturation and depth myopia, improving success-per-compute and reducing false promotions on structured reasoning tasks (the first sketch following this list illustrates one selection round).

  • Natural Language Edge Labelling (NLEL) — completed. Develops a labeller–tuner overlay for structured reasoning that decouples semantic intent from execution control. Each edge in a reasoning graph carries a free-form natural-language directive that is mapped to a bounded control vector governing decoding, search, retrieval, and verification. This approach unifies interpretability, controllability, and compute-efficiency in reasoning architectures, and demonstrates formal guarantees such as anytime monotonicity under Tree-of-Thought selection (the second sketch following this list illustrates the labeller–tuner interface).

  • Theory-of-Reasoning Reasoning Architectures (ToR-RAs) — ongoing. Provides a theoretical synthesis that grounds reasoning architectures in cognitive science and formalizes reasoning as a cognitive-computational system that unifies representation, control, and metacognition.
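
To make the mainline/lateral split concrete, here is a minimal Python sketch of one LToT-style selection round. The utility threshold, probe budget, promotion margin, and the probe rollout signature are illustrative assumptions rather than the parameterization used in the work itself; the point is the control pattern: exploit high-utility mainlines deeply while racing consistent but currently low-utility laterals with short, compute-bounded probes, promoting a lateral only when its probe clears the threshold by a margin.

import heapq
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass(order=True)
class Candidate:
    """A partial reasoning path ordered by (negated) utility for heapq."""
    neg_utility: float                      # negated so the heap pops the best candidate first
    path: List[str] = field(compare=False)  # the reasoning steps taken so far


def ltot_round(
    candidates: List[Candidate],
    probe: Callable[[List[str], int], float],  # short rollout: (path, budget) -> utility estimate
    tau: float = 0.7,                          # utility threshold separating mainlines from laterals
    probe_budget: int = 16,                    # compute bound (e.g., tokens) for each lateral probe
    margin: float = 0.1,                       # a lateral must clear tau + margin to be promoted
) -> List[Candidate]:
    """One LToT-style selection round: exploit mainlines, race laterals cheaply."""
    mainlines = [c for c in candidates if -c.neg_utility >= tau]
    laterals = [c for c in candidates if -c.neg_utility < tau]

    # Race every lateral with one compute-bounded probe; requiring the probed
    # utility to beat tau + margin guards against false promotions.
    promoted = [
        Candidate(-u, lat.path)
        for lat in laterals
        if (u := probe(lat.path, probe_budget)) >= tau + margin
    ]

    frontier = mainlines + promoted
    heapq.heapify(frontier)  # best-first order for the next round of deep expansion
    return frontier

Requiring promoted laterals to beat the threshold by a margin, rather than merely match it, is one simple way to express the guard against false promotions described above.

The second sketch illustrates the NLEL labeller–tuner interface. In NLEL the tuner would be learned; the keyword-matching stand-in here, and the specific control knobs (temperature, beam width, retrieval depth, a verification flag), are assumptions chosen only to show the interface. The essential property is that every component is clamped to fixed bounds, so arbitrary label text cannot drive execution outside the sanctioned control region.

from dataclasses import dataclass


def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))


@dataclass
class ControlVector:
    """Bounded execution controls derived from an edge's natural-language label."""
    temperature: float = 0.7  # decoding randomness, kept within [0, 1]
    beam_width: int = 3       # search breadth, kept within [1, 8]
    retrieve_k: int = 5       # retrieval depth, kept within [0, 20]
    verify: bool = False      # whether to run a verification pass on this edge


def tune(directive: str) -> ControlVector:
    """Map a free-form edge directive to a bounded control vector."""
    text = directive.lower()
    cv = ControlVector()
    if "verify" in text or "carefully" in text:
        cv.temperature, cv.verify = 0.2, True   # cautious, checked execution
    if "brainstorm" in text or "diverge" in text:
        cv.temperature, cv.beam_width = 1.0, 8  # divergent, broad search
    if "cite" in text or "look up" in text:
        cv.retrieve_k = 20                      # retrieval-heavy step
    # Clamping enforces the bounds no matter what the label says.
    cv.temperature = clamp(cv.temperature, 0.0, 1.0)
    cv.beam_width = int(clamp(cv.beam_width, 1, 8))
    cv.retrieve_k = int(clamp(cv.retrieve_k, 0, 20))
    return cv


print(tune("verify this lemma carefully before proceeding"))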

ToR-RA framework (theoretical core)

ToR-RAs define reasoning architectures as dynamical organizations of cognitive activity characterized by four invariants:

  1. Coherence: local inferences reinforce global logical and representational consistency.

  2. Causality of thought: representational changes trace to identifiable precursors, allowing reasoning steps to be causally justified.

  3. Introspectability: the system maintains meta-representations of its own reasoning process, supporting self-evaluation and adaptive control.

  4. Cognitive economy: the architecture allocates resources to maximize reasoning progress per unit of cognitive or computational cost (operationalized in the sketch after this list).
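
The cognitive-economy invariant can be read operationally through rational metareasoning: select the next computation by its expected reasoning progress per unit cost. The greedy allocator below is a minimal sketch of that reading, not a prescribed ToR-RA mechanism; the Computation record, its fields, and the example numbers are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Computation:
    """A candidate reasoning action with estimated benefit and cost."""
    name: str
    expected_gain: float  # estimated improvement in solution quality
    cost: float           # compute cost in a common unit (e.g., tokens)


def allocate(budget: float, options: List[Computation]) -> List[Computation]:
    """Greedy allocator: spend the budget on the computations with the best
    expected gain per unit cost, the operational reading of invariant 4."""
    chosen = []
    for comp in sorted(options, key=lambda c: c.expected_gain / c.cost, reverse=True):
        if comp.cost <= budget and comp.expected_gain > 0:
            chosen.append(comp)
            budget -= comp.cost
    return chosen


# Example: deliberation wins over a cheap guess only when its gain-per-cost is higher.
plan = allocate(20.0, [Computation("quick heuristic", 0.2, 1.0),
                       Computation("deliberate search", 3.0, 12.0),
                       Computation("redundant re-check", 0.1, 10.0)])
print([c.name for c in plan])  # -> ['deliberate search', 'quick heuristic']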

The framework aligns with Marr’s levels of analysis (computational, algorithmic, implementational) and draws on rational metareasoning and dual-process models, treating the interaction between fast-divergent and slow-deliberative subsystems as an architectural principle rather than a training artifact.

Empirical aim. ToR-RAs aim to demonstrate that reasoning efficacy can be improved not only by larger models but also by architectures that enforce structural invariants of thought. My ongoing research develops a concrete instantiation of a ToR-RA to establish the viability of this approach.


Track II — Prosocial Impact of AGI

Beyond advancing reasoning architectures, my research addresses how to ensure that the impact of AGI is prosocial, with primary attention to economic outcomes.

Prosocial economic gains. I focus on growth in productivity, knowledge, and welfare that translates into broad-based societal benefit rather than narrow concentration of wealth or power. A subarea concerns enabling economically suboptimal but socially desirable deployments—for example, equitable access for underfunded educational institutions—so that the pedagogical advantages of AGI reach resource-limited contexts. In this sense, prosocial deployment encompasses both maximizing total economic welfare and deliberately sacrificing efficiency where doing so advances social equity, inclusion, or long-term civilizational flourishing.

Civilization-scale lens. My analysis is universal and macroscopic: AGI is treated as a civilization-scale phenomenon, emphasising the systemic interplay among actors, institutions, and technologies in shaping trajectories of human welfare and progress.

Program: AGI to pursue a general cure for cancer

I consider whether AGI research can reach a state where we can deliberately instantiate a highly intelligent, language-centric, tool-using AGI and assign it a small number of highly expensive but civilization-critical problems—centrally, discovering a general cure for cancer. The program entails:

  • Reading and synthesising the literature;

  • Generating and pruning mechanistic hypotheses;

  • Designing, scheduling, and interpreting experiments via self-driving labs;

  • Integrating multi-modal evidence with verification and independent replication.

The approach is staged and pragmatic: unified knowledge integration and governed data access; systematic hypothesis curation against pan-cancer resources; automated experimentation; designs robust to resistance; and translational pathways leveraging tumor-agnostic precedents. Because training and sustained inference will be costly, the effort requires durable, treaty-backed funding and governance (e.g., pooled compute procurement, staged safety evaluations, human-in-the-loop oversight, biosecurity review, federated data stewardship, and binding carbon/water budgets). Success is defined not by displacing human scientists but by augmenting them—using machine reasoning to restore growth control across tumors through verifiable, general interventions.
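
One pass of this staged loop can be summarized as control flow. In the sketch below, every function is a hypothetical placeholder for an AGI-driven service (literature synthesis, hypothesis generation and pruning, self-driving-lab execution); only the gating structure, in which oversight approval precedes any experiment and independent replication precedes acceptance, reflects the program described above.

from typing import Dict, List

# All functions below are hypothetical placeholders for AGI-driven services;
# only the control flow mirrors the staged program described above.

def synthesize_literature(corpus: str) -> List[str]:
    return [f"evidence item drawn from {corpus}"]    # literature reading and synthesis

def generate_hypotheses(evidence: List[str]) -> List[str]:
    return [f"mechanistic hypothesis for: {e}" for e in evidence]

def prune(hypotheses: List[str], pan_cancer_db: str) -> List[str]:
    return hypotheses[:10]                           # curation against pan-cancer resources

def approved_by_oversight(hypothesis: str) -> bool:
    return True                                      # human-in-the-loop and biosecurity review

def run_experiment(hypothesis: str, lab: str) -> Dict[str, str]:
    return {"hypothesis": hypothesis, "lab": lab}    # self-driving-lab execution

def independently_replicated(result: Dict[str, str]) -> bool:
    return False                                     # conservative default: accept nothing unreplicated


def discovery_pass(corpus: str, pan_cancer_db: str, lab: str) -> List[Dict[str, str]]:
    """One pass of the staged loop: synthesize, hypothesize, prune, experiment
    under oversight, and keep only independently replicated results."""
    evidence = synthesize_literature(corpus)
    hypotheses = prune(generate_hypotheses(evidence), pan_cancer_db)
    accepted = []
    for h in hypotheses:
        if not approved_by_oversight(h):       # safety gate precedes any experiment
            continue
        result = run_experiment(h, lab)
        if independently_replicated(result):   # verification and replication requirement
            accepted.append(result)
    return accepted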

Jurisprudential analysis

I also examine jurisprudential questions surrounding AGI, including whether an AGI could qualify as a legal actor and how culpability or responsibility should be assigned for its actions. This spans personhood, intent, negligence, and vicarious liability as applied to autonomous reasoning systems. The analysis assumes primary adjudication at the international level, reflecting the transnational nature of AGI’s development, deployment, and impact, and considers how global governance, treaties, and multilateral institutions could allocate rights, duties, and liabilities among states, corporations, and autonomous artificial agents.

My interest in jurisprudential questions surrounding AGI arises from the recognition that many of these issues—including the attribution of intent, negligence, personhood, and vicarious liability to autonomous reasoning systems—can be coherently framed and resolved only by scholars with substantive expertise in both AGI and law. Absent a dual understanding of cognitive architectures (e.g., causality of thought and introspectability) and legal theory, such questions risk being posed in ways that lack conceptual precision and normative adequacy; the project here is therefore to develop an epistemically grounded jurisprudence that co-evolves with technical reality rather than retrofitting legal categories to opaque machine behavior.