
Rule Extraction from Large Language Models within the Clarion Cognitive Architecture

Authors
Mr. Joseph Killian Jr
Rensselaer Polytechnic Institute, Cognitive Science
Dr. Ron Sun
Rensselaer Polytechnic Institute
Abstract

Computational cognitive architectures are useful tools for capturing the structures and processes of the mind computationally, as well as for simulating behavior. One such cognitive architecture, Clarion, incorporates a two-level structure, with symbolic representations at the top level and sub-symbolic representations at the bottom level, together with bottom-up learning that proceeds from sub-symbolic to symbolic representations (using the Rule-Extraction-Refinement algorithm). This work explores the integration of Large Language Models (LLMs) into the Clarion framework to enhance its capabilities. The present paper specifically explores a new rule extraction method within Clarion: SBERT (Sentence-BERT) is incorporated into the bottom level of Clarion, and a sliding-window approach extracts n-gram rules for the top level. This modified version of the Rule-Extraction-Refinement algorithm carries out bottom-up learning within the new Clarion. Ongoing experiments on the Pennebaker and King essays dataset (for personality prediction) demonstrate the potential for improved performance and increased explainability when incorporating LLMs into Clarion.
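The sliding-window step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the bottom level (SBERT in the paper) supplies an outcome against which candidate n-grams are scored, here stood in for by a hypothetical `bottom_level_correct` flag, and the function names and window sizes are illustrative.

```python
from typing import List, Tuple

def extract_ngram_rules(tokens: List[str],
                        window_sizes: Tuple[int, ...] = (1, 2, 3)) -> List[Tuple[str, ...]]:
    """Slide a window of each size over the token sequence and collect
    the resulting n-grams as candidate rule conditions for the top level."""
    candidates = []
    for n in window_sizes:
        for i in range(len(tokens) - n + 1):
            candidates.append(tuple(tokens[i:i + n]))
    return candidates

def refine_rules(candidates: List[Tuple[str, ...]],
                 bottom_level_correct: bool,
                 rule_stats: dict) -> dict:
    """Toy stand-in for the refinement step of Rule-Extraction-Refinement:
    tally how often each candidate co-occurs with a correct bottom-level
    outcome, so that poorly performing rules can later be pruned."""
    for rule in candidates:
        hits, total = rule_stats.get(rule, (0, 0))
        rule_stats[rule] = (hits + int(bottom_level_correct), total + 1)
    return rule_stats

# Example: extract unigram and bigram candidates from a short token sequence.
rules = extract_ngram_rules(["i", "love", "parties"], window_sizes=(1, 2))
# → [('i',), ('love',), ('parties',), ('i', 'love'), ('love', 'parties')]
```

In the actual system, candidate extraction and refinement would be driven by the SBERT-based bottom level's predictions on the personality task rather than a boolean flag.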

Keywords

Clarion
LLMs
Cognitive Architectures

Cite this as:

Killian Jr, J. A., & Sun, R. (2025, July). Rule Extraction from Large Language Models within the Clarion Cognitive Architecture. Paper presented at MathPsych / ICCM 2025. Via mathpsych.org/presentation/1990.