ICCM II
Conner Hanley
Dr. Mary Kelly
We present Doug, a typed programming language encoded in a vector-symbolic architecture (VSA), in which all well-typed programs can be proved to halt in polynomial time. Doug is an encoding of the light linear functional programming language (LLFPL) described by Schimanski (2009, ch. 7). The types of Doug are encoded using the slot-value encoding scheme of holographic declarative memory (HDM; Kelly, Arora, West, & Reitter, 2020). The terms of Doug are encoded using a variant of the Lisp VSA defined by Tomkins-Flanagan and Kelly (2024). Doug allows some points in the embedding space of a neural network to be interpreted as types, such that nearby points correspond to types that are similar in both structure and content. Types in Doug are therefore learnable by a neural network. Following Chollet (2019), Card, Moran, and Newell (1983), and Newell and Rosenbloom (1981), we view skill as the application of a procedure, or program of action, that causes a goal to be satisfied. Skill acquisition may therefore be expressed as program synthesis. Using Doug, we hope to describe a form of learning of skilled behaviour that follows a human-like pace of skill acquisition (i.e., substantially faster than brute force; Heathcote, Brown, & Mewhort, 2000), exceeding the efficiency of all currently existing approaches (Kaplan et al., 2020; A. L. Jones, 2021; Chollet, 2024). Our approach brings us one step closer to modeling human mental representations, as they must actually exist in the brain, and those representations' acquisition, as they are actually learned.
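To make the slot-value encoding concrete, the following is a minimal Python sketch of HRR-style circular-convolution binding, the operation underlying HDM's encoding scheme. The dimensionality, slot names, and the example type are illustrative assumptions, not Doug's actual definitions.

```python
import numpy as np

def cconv(a, b):
    # Circular convolution: the binding operation of holographic
    # reduced representations (Plate, 1995), as used by HDM.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # Approximate inverse: cconv(a, involution(a)) is close to the
    # identity vector under convolution, enabling unbinding.
    return np.concatenate(([a[0]], a[:0:-1]))

dim = 1024
rng = np.random.default_rng(0)

def vec():
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

# Slot and filler vectors (illustrative): a function type int -> bool.
slots = {"head": vec(), "domain": vec(), "range": vec()}
fillers = {"arrow": vec(), "int": vec(), "bool": vec()}

# Slot-value encoding: superpose the slot-filler bindings.
arrow_type = (cconv(slots["head"], fillers["arrow"])
              + cconv(slots["domain"], fillers["int"])
              + cconv(slots["range"], fillers["bool"]))

# Unbind the "domain" slot and check which filler the result resembles.
probe = cconv(arrow_type, involution(slots["domain"]))
sims = {name: np.dot(probe, f) / (np.linalg.norm(probe) * np.linalg.norm(f))
        for name, f in fillers.items()}
print(max(sims, key=sims.get))  # expected: "int"
```

Because the encoding is a sum of bindings, types that share slots and fillers produce nearby vectors, which is what makes the type space smooth enough for a neural network to learn.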
Sangeet Khemlani
Gregory Francis
Andrew Lovett
Chase detection involves tracking objects and comparing their locations over time. What is it about the relative spatial relations of two objects that leads us to perceive one as chasing the other rather than, say, merely moving in the general direction of the other object? A recent model of chase detection provided an explanation in terms of an attentional strategy. However, it is unclear whether this model generalizes or has predictive power, since it was fit to experimental data. Here we examine whether the model's explanation extends to, and predicts, a frequently studied chasing cue: chasing subtlety, the degree to which the chaser deviates from the most direct path to its target. To test the model, we made preregistered model predictions from simulations run prior to data collection. We then conducted two experiments in which chasing subtlety varied. Overall, the model accurately predicted response time and accuracy patterns across most conditions. It also predicted the specific videos that had the highest error rates. Thus, we show that the model's explanation extends to chasing subtlety and, more broadly, that the model can be used to generate a falsifiable theory of chase detection.
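As an illustration of the cue under study, the sketch below computes one common operationalization of chasing subtlety: the angular deviation between the chaser's current heading and the direct path to its target. The function name and example values are illustrative, not the experiments' actual stimulus-generation code.

```python
import numpy as np

def heading_deviation(chaser_pos, chaser_vel, target_pos):
    """Angle (degrees) between the chaser's heading and the direct
    path to the target. A deviation of 0 is perfect "heat-seeking"
    pursuit; a subtlety parameter bounds how large this angle may get.
    """
    direct = np.asarray(target_pos, float) - np.asarray(chaser_pos, float)
    v = np.asarray(chaser_vel, float)
    cos_angle = np.dot(v, direct) / (np.linalg.norm(v) * np.linalg.norm(direct))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# A chaser at the origin heading slightly off-axis from a target at (5, 0):
print(heading_deviation((0, 0), (1, 0.3), (5, 0)))  # ~16.7 degrees
```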
Justin Li
Semantic and co-occurrent memory associations aid the retrieval of relevant memory elements from long-term memory, but little is understood about how semantics and co-occurrence interact to facilitate retrieval. This paper explores their relationship by evaluating eleven candidate mechanisms relating semantics and co-occurrence in a Bayesian computational memory model. We assessed the performance of the candidate mechanisms on two linguistic tasks: the Word Sense Disambiguation task and the Remote Associates Test. The most successful mechanisms use co-occurrent associations to modulate semantic associations by removing or adding associations to the retrieval context or to the pool of candidate memory elements for retrieval. Features of the demonstrated interaction between semantic and co-occurrent associations are discussed in light of their psychological implications and their consistency with recent experimental work on memory retrieval.
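A minimal sketch of one such mechanism class, in which co-occurrence gates the candidate pool while semantic strength drives retrieval probability. The association values, threshold, and retrieval rule here are illustrative assumptions, not the paper's actual model.

```python
# Illustrative retrieval cue "bank" in a context about finance.
semantic = {       # semantic association strengths to the cue
    "money": 0.9, "river": 0.7, "loan": 0.6, "teller": 0.5,
}
cooccurrence = {   # co-occurrence strengths with the current context
    "money": 0.8, "loan": 0.6, "river": 0.1, "teller": 0.4,
}

def retrieve(threshold=0.3):
    # Mechanism sketch: remove candidates whose co-occurrence with
    # the context falls below threshold, then weight the survivors
    # in proportion to their semantic association strength.
    pool = {w: s for w, s in semantic.items()
            if cooccurrence.get(w, 0.0) >= threshold}
    total = sum(pool.values())
    return {w: s / total for w, s in pool.items()}

print(retrieve())  # "river" is pruned by context; "money" dominates
```

In a disambiguation task like Word Sense Disambiguation, this kind of gating lets context suppress semantically strong but contextually irrelevant competitors.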
Chris Dancy
Holographic Declarative Memory (HDM) is a vector-symbolic alternative to ACT-R's traditional Declarative Memory (DM) system that offers advantages such as scalability and architecturally defined similarity between DM chunks. We adapted HDM to work with the most comprehensive and widely used implementation of ACT-R (Lisp ACT-R), so that extant ACT-R models designed with DM can be run with HDM without major changes. With this adaptation of HDM, we developed vector-based versions of common ACT-R functions, set up a text-processing pipeline to add the contents of large documents to ACT-R memory, and, most significantly, created a useful and novel mechanism to retrieve an entire chunk of memory from a request using only vector representations of tokens. Preliminary results indicate that we can maintain the vector-symbolic advantages of HDM (e.g., chunk recall without storing the actual chunk, along with other advantages in scaling) while extending it so that previous ACT-R models work with the system with little (or potentially no) modification to the procedural and declarative memory portions of a model. As part of the iterative improvement of this newly translated holographic declarative memory module, we will continue to explore better time-context representations for vectors, to improve the module's ability to reconstruct chunks during recall. To test the translated HDM module more fully, we also plan to develop decision-making models that use instance-based learning (IBL) theory, which is a useful application of HDM given the advantages of the system.
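A rough sketch of how such token-vector chunk retrieval might work, using HRR-style binding: a partial request is encoded the same way as a chunk, matched against stored chunk vectors, and the winner's unrequested slots are reconstructed by unbinding. The chunk contents, token names, and matching rule are illustrative assumptions; the actual module is implemented in Lisp ACT-R.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 1024

def vec():
    return rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM)

def cconv(a, b):
    # Circular-convolution binding (HRR), the operation behind HDM.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    # Approximate inverse, used to unbind a slot from a chunk vector.
    return np.concatenate(([a[0]], a[:0:-1]))

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Random token vectors for slot and value names (all illustrative).
tokens = {n: vec() for n in
          ["isa", "fact", "arg1", "arg2", "sum",
           "three", "four", "seven", "eight"]}

def encode(chunk):
    # A chunk is a dict of slot -> value; its vector is the
    # superposition of slot-value bindings. The chunk itself
    # is never stored, only this vector.
    return sum(cconv(tokens[s], tokens[v]) for s, v in chunk.items())

store = {
    "3+4=7": encode({"isa": "fact", "arg1": "three",
                     "arg2": "four", "sum": "seven"}),
    "4+4=8": encode({"isa": "fact", "arg1": "four",
                     "arg2": "four", "sum": "eight"}),
}

# A retrieval request specifies only some slots; encode it the same
# way and match it against the stored chunk vectors.
request = encode({"arg1": "three", "arg2": "four"})
best = max(store, key=lambda name: cos(request, store[name]))

# Reconstruct an unrequested slot of the winning chunk by unbinding.
probe = cconv(store[best], inv(tokens["sum"]))
answer = max(tokens, key=lambda t: cos(probe, tokens[t]))
print(best, answer)  # expected: 3+4=7 seven
```

Because recall decodes slot values from a single superposed vector rather than looking up a stored symbol structure, retrieval quality degrades gracefully with noise and with the number of stored chunks, which is where the scaling behaviour mentioned above comes from.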