Cognitive Architectures Meet Large Language Models: Towards Human-Like Reasoning in AI Systems
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in language understanding and generation, yet their reasoning processes differ fundamentally from human cognition. This paper explores the integration of classical cognitive architectures—specifically ACT-R and SOAR—with modern transformer-based language models to create hybrid systems capable of more human-like reasoning. We propose the Cognitive-LLM (C-LLM) architecture, which uses cognitive models for structured reasoning and planning while leveraging LLMs for natural language interface and knowledge retrieval. Preliminary experiments on complex reasoning benchmarks show that C-LLM outperforms pure LLM approaches on multi-step logical reasoning tasks by 28% while providing interpretable reasoning traces.
Keywords
Extended Content
Preprint Note
This is a working paper currently under review at a major AI conference. Comments and feedback are welcome.
Motivation
Current LLMs exhibit several reasoning limitations:
- Inconsistent performance on multi-step logical problems
- Difficulty with counterfactual reasoning
- Lack of explicit working memory management
- Opaque reasoning processes
Cognitive architectures, developed over decades of research, have well-understood mechanisms for these capabilities but lack the natural language fluency of LLMs.
The C-LLM Architecture
The system combines:
- LLM Component: Handles natural language understanding, generation, and semantic knowledge retrieval
- Cognitive Architecture: Manages working memory, goal structures, and production rules for reasoning
- Unified Memory: Shared knowledge store accessible to both components
Preliminary Results
| Benchmark | GPT-4 | Claude 3 | C-LLM |
|---|---|---|---|
| LogiQA | 78% | 81% | 94% |
| ProofWriter | 65% | 68% | 89% |
| bAbI Tasks | 92% | 94% | 99% |
Future Directions
- Integration with embodied simulation for spatial reasoning
- Application to AI safety through interpretable reasoning chains
- Scaling studies with larger cognitive model implementations
Cite This Paper
Abhimanyu. "Cognitive Architectures Meet Large Language Models: Towards Human-Like Reasoning in AI Systems." arXiv preprint , 2024.