Rule Extraction from Large Language Models within the Clarion Cognitive Architecture
Computational cognitive architectures are useful tools for capturing the structures and processes of the mind and for simulating behavior. One such cognitive architecture, Clarion, incorporates a two-level structure consisting of symbolic and sub-symbolic representations (at the top and bottom levels, respectively) and bottom-up learning that goes from sub-symbolic to symbolic representations (using the Rule-Extraction-Refinement algorithm). This work explores the integration of Large Language Models (LLMs) into the Clarion framework to enhance its capabilities. The present paper specifically explores a new rule extraction method within Clarion, with SBERT (Sentence-BERT) incorporated into the bottom level of Clarion and a sliding-window approach used to extract n-gram rules for the top level. This modified version of the Rule-Extraction-Refinement algorithm carries out bottom-up learning within the new Clarion. Ongoing experiments on the Pennebaker and King essays dataset (for personality prediction) demonstrate the potential for improved performance and increased explainability when incorporating LLMs into Clarion.
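The abstract does not give implementation details, but the sliding-window n-gram extraction step it describes might be sketched roughly as follows. All names here (`extract_ngram_candidates`, `rank_candidates`, `embed`) are hypothetical; in the actual system the string-to-vector function would be an SBERT encoder, and candidate scoring would feed into the Rule-Extraction-Refinement process rather than a simple ranking.

```python
import math
from typing import Callable, List, Sequence, Tuple

def extract_ngram_candidates(tokens: List[str],
                             n_min: int = 1,
                             n_max: int = 3) -> List[Tuple[str, ...]]:
    """Slide windows of width n_min..n_max over a token sequence,
    yielding every contiguous n-gram as a candidate rule condition."""
    grams: List[Tuple[str, ...]] = []
    for n in range(n_min, n_max + 1):
        for i in range(len(tokens) - n + 1):
            grams.append(tuple(tokens[i:i + n]))
    return grams

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    da = math.sqrt(sum(x * x for x in a))
    db = math.sqrt(sum(y * y for y in b))
    return num / (da * db) if da and db else 0.0

def rank_candidates(grams: List[Tuple[str, ...]],
                    embed: Callable[[str], Sequence[float]],
                    target: Sequence[float]) -> List[Tuple[Tuple[str, ...], float]]:
    """Score each candidate n-gram by cosine similarity between its
    embedding (an SBERT encoding in the real system; any string-to-vector
    function here) and a target concept vector, highest first."""
    scored = [(g, cosine(embed(" ".join(g)), target)) for g in grams]
    return sorted(scored, key=lambda gs: gs[1], reverse=True)
```

Any embedding function can be plugged in for experimentation; top-ranked n-grams would then serve as candidate symbolic rules for Clarion's top level.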