Preprint · Cognitive Science

Cognitive Architectures Meet Large Language Models: Towards Human-Like Reasoning in AI Systems

Published: June 1, 2024
Published in: arXiv preprint
Author: Abhimanyu

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities in language understanding and generation, yet their reasoning processes differ fundamentally from human cognition. This paper explores the integration of classical cognitive architectures—specifically ACT-R and SOAR—with modern transformer-based language models to create hybrid systems capable of more human-like reasoning. We propose the Cognitive-LLM (C-LLM) architecture, which uses cognitive models for structured reasoning and planning while leveraging LLMs as a natural language interface and for knowledge retrieval. Preliminary experiments on complex reasoning benchmarks show that C-LLM outperforms pure LLM approaches on multi-step logical reasoning tasks by 28% while providing interpretable reasoning traces.

Keywords

cognitive architectures, large language models, AI reasoning, cognitive science, hybrid AI systems

Extended Content

Table of Contents
  1. Preprint Note
  2. Motivation
  3. The C-LLM Architecture
  4. Preliminary Results
  5. Future Directions

Preprint Note

This is a working paper currently under review at a major AI conference. Comments and feedback are welcome.

Motivation

Current LLMs exhibit several reasoning limitations:

  • Inconsistent performance on multi-step logical problems
  • Difficulty with counterfactual reasoning
  • Lack of explicit working memory management
  • Opaque reasoning processes

Cognitive architectures, developed over decades of research, have well-understood mechanisms for these capabilities but lack the natural language fluency of LLMs.
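
To make the contrast concrete, the sketch below shows in Python the kind of explicit working-memory management and production-rule cycle that architectures like ACT-R and SOAR provide. The rule encoding and memory layout here are our own illustrative simplification, not either architecture's actual format.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Production:
        """A condition-action rule: fires when its condition matches working memory."""
        name: str
        condition: Callable[[dict], bool]
        action: Callable[[dict], None]

    @dataclass
    class ProductionSystem:
        productions: list
        working_memory: dict = field(default_factory=dict)

        def step(self) -> bool:
            """One match-fire cycle: fire the first rule whose condition matches."""
            for rule in self.productions:
                if rule.condition(self.working_memory):
                    rule.action(self.working_memory)
                    return True
            return False  # no rule matched: reasoning halts

    # Hypothetical rule: derive B from A and the implication A -> B (fires once).
    rules = [
        Production(
            name="modus-ponens",
            condition=lambda wm: "A" in wm["facts"]
            and ("A", "B") in wm["implications"]
            and "B" not in wm["facts"],
            action=lambda wm: wm["facts"].add("B"),
        )
    ]

    system = ProductionSystem(rules, {"facts": {"A"}, "implications": {("A", "B")}})
    while system.step():
        pass
    print(system.working_memory["facts"])  # -> {'A', 'B'} (set order may vary)

Every cycle here is inspectable state manipulation, which is precisely the transparency the limitations listed above say LLMs lack.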

The C-LLM Architecture

The system combines three components (a code sketch of their interaction follows the list):

  • LLM Component: Handles natural language understanding, generation, and semantic knowledge retrieval
  • Cognitive Architecture: Manages working memory, goal structures, and production rules for reasoning
  • Unified Memory: Shared knowledge store accessible to both components
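
The following minimal sketch illustrates one way these components might interact; all class and method names (CLLM, parse, push_goal, step, retrieve, verbalize, and so on) are hypothetical placeholders for illustration, not the paper's released implementation.

    class CLLM:
        """Sketch of the hybrid control loop. All component interfaces
        used below are assumed, not taken from the paper's code."""

        def __init__(self, llm, cognitive_core, shared_memory):
            self.llm = llm              # language understanding/generation, retrieval
            self.core = cognitive_core  # working memory, goal stack, production rules
            self.memory = shared_memory # knowledge store visible to both components

        def answer(self, question: str) -> str:
            # 1. The LLM translates the question into structured goals and facts.
            parsed = self.llm.parse(question)
            self.core.push_goal(parsed.goal)
            self.memory.add_facts(parsed.facts)

            # 2. The cognitive core drives match-fire reasoning cycles over the
            #    shared store, calling back into the LLM when a cycle needs
            #    semantic knowledge it cannot derive symbolically.
            trace = []
            while not self.core.goal_satisfied():
                step = self.core.step(self.memory)
                if step.needs_knowledge:
                    self.memory.add_facts(self.llm.retrieve(step.query))
                trace.append(step)

            # 3. The LLM verbalizes the answer together with the reasoning
            #    trace, which is what makes the output interpretable.
            return self.llm.verbalize(self.core.result(), trace)

The design point is the division of labor: the cognitive core owns control flow and state, while the LLM is invoked only at the boundaries (language in, knowledge lookup, language out), keeping the reasoning trace symbolic and inspectable.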

Preliminary Results

Benchmark     GPT-4   Claude 3   C-LLM
LogiQA         78%      81%       94%
ProofWriter    65%      68%       89%
bAbI Tasks     92%      94%       99%

Future Directions

  • Integration with embodied simulation for spatial reasoning
  • Application to AI safety through interpretable reasoning chains
  • Scaling studies with larger cognitive model implementations

Cite This Paper

Abhimanyu. "Cognitive Architectures Meet Large Language Models: Towards Human-Like Reasoning in AI Systems." arXiv preprint, 2024.