Categorization & Representation
Wolf Vanpaemel
Francis Tuerlinckx
Prof. Jonas Zaman
Modern perceptual theories suggest that human perception is not a static or direct reflection of the physical world. Instead, our brains translate sensory inputs into probabilistic mental representations, shaped by prior experiences and contextual influences. This perspective challenges traditional associative learning theories, which assume that individuals have perfect and direct access to physical reality when forming associations. These classical theories also rely on predefined similarity functions to account for generalization, failing to reflect the inherent uncertainty and variability in perception. To address these limitations, we propose the Distributional Perceptual Knowledge Mapping (DPKM) model, which posits that perception and associative learning occur within a shared mental space. In this framework, learned associations are shaped by the same probabilistic processes that govern perception, rather than being independent of them. As a result, generalization emerges naturally as a consequence of the stochastic structure of mental representations, eliminating the need for an explicit similarity function. This approach provides a more ecologically valid understanding of how learning adapts to perceptual uncertainty, offering insights into fundamental cognitive mechanisms. By integrating perception and learning within a unified theoretical model, DPKM offers a novel perspective on how knowledge is acquired, structured, and generalized in the mind.
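To make the core claim concrete, the following minimal simulation (an illustrative sketch, not the DPKM implementation; the stimulus values, noise level, and learning rate are hypothetical) shows how a generalization gradient can emerge purely from noisy perceptual representations, with no explicit similarity function: associative strength is attached to whichever internal perceptual state is sampled on a trial, and responding to novel stimuli inherits the overlap between their perceptual distributions and that of the trained stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized internal (perceptual) dimension and associative strength per state.
bins = np.linspace(0.0, 10.0, 101)
weights = np.zeros_like(bins)

def percept(stimulus, noise_sd=0.8):
    """Sample a noisy internal representation of a physical stimulus value."""
    return stimulus + rng.normal(0.0, noise_sd)

def nearest_bin(x):
    return int(np.argmin(np.abs(bins - x)))

# Training: repeatedly pair the stimulus at 5.0 with an outcome; the error-driven
# update is applied to whichever perceptual state happened to be sampled.
alpha, outcome = 0.3, 1.0
for _ in range(200):
    i = nearest_bin(percept(5.0))
    weights[i] += alpha * (outcome - weights[i])

# Test: expected responding to other stimuli, averaging over perceptual samples.
def expected_response(stimulus, n_samples=500):
    idx = [nearest_bin(percept(stimulus)) for _ in range(n_samples)]
    return weights[idx].mean()

for s in [3.0, 4.0, 5.0, 6.0, 7.0]:
    print(s, round(expected_response(s), 3))
```

The graded responding to the untrained stimuli arises only because their percepts sometimes fall on states that were sampled during training, which is the sense in which generalization follows from the stochastic structure of the representations themselves.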
Odysseus Orr
Dr. Ed Wasserman
Developing a deep understanding of animal cognition in tasks such as category learning demands that one first achieve an appreciation of an animal’s sensory/perceptual/memory world. In this project, we report work that, for the first time, derives a nonhuman high-dimensional psychological-scaling representation for a set of visual objects and uses the representation to predict complex forms of category learning in a nonhuman species. Specifically, we pursue the question of whether pigeons can acquire multiple hard-to-discriminate rock-image categories as defined in the geologic sciences. We test a formal computational model of associative learning on its ability to account quantitatively for pigeons’ category learning performance. A prerequisite for applying the model is to embed the rock images in a pigeon psychological similarity space. We achieve that goal by modeling pigeons’ performance in an independently conducted same-different discrimination task involving the identical set of to-be-categorized rock images. The models provide a unified and accurate quantitative account of intricate sets of same-different and categorization-confusion data in this high-dimensional rock-categories domain. The psychological similarity space derived for pigeons resembles to a surprising degree one previously derived for humans, but with some notable exceptions, which are crucial to explaining pigeons’ detailed patterns of categorization performance.
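To illustrate how an embedding in a psychological similarity space supports quantitative categorization predictions, here is a small exemplar-model sketch in the spirit of the GCM family commonly used in this literature (an assumption on our part, not necessarily the exact model tested here; the coordinates, sensitivity parameter c, and distance metric are placeholders): similarity falls off exponentially with psychological distance, and choice probabilities follow a ratio of summed similarities to each category's exemplars.

```python
import numpy as np

def gcm_choice_probs(test_point, exemplars, labels, c=1.0):
    """Exemplar-based choice probabilities from coordinates in a psychological space.

    Similarity to each stored exemplar is exp(-c * distance); evidence for a
    category is the summed similarity to its exemplars (Luce choice rule).
    """
    dists = np.linalg.norm(exemplars - test_point, axis=1)
    sims = np.exp(-c * dists)
    cats = np.unique(labels)
    evidence = np.array([sims[labels == k].sum() for k in cats])
    return cats, evidence / evidence.sum()

# Toy 2-D coordinates standing in for rock-image locations in a derived space.
exemplars = np.array([[0.1, 0.2], [0.3, 0.1], [0.9, 0.8], [1.0, 0.7]])
labels = np.array([0, 0, 1, 1])
print(gcm_choice_probs(np.array([0.4, 0.3]), exemplars, labels, c=2.0))
```

In the reported work, the coordinates would come from the pigeon similarity space derived from the same-different task rather than from the toy values used here.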
Michael Lee
We present a model of the dynamics of category learning tasks using the Coupled Hidden Markov Model (CHMM) framework (Villarreal and Lee, 2024). The key innovation of the CHMM approach is the assumption that participants can update the category assignment of all stimuli, including those not currently presented, on a trial-by-trial basis. CHMMs can adapt their predictions about category assignment based on future observations, which makes them difficult to evaluate. To address this problem, we demonstrate two approaches for evaluating a CHMM by comparing its predictions to those of the Generalized Context Model of categorization (GCM; Nosofsky, 1988). The first approach uses leave-n-out cross-validation with data from a category learning experiment reported by Navarro et al. (2005) in which participants classify pictures of faces into one of two categories. The second approach uses a generalization test based on a learning-transfer categorization task with simple shape stimuli reported by Bartlema et al. (2014). Our results show that the predictions of the CHMM are at least as accurate as those of the GCM. These findings suggest that the ability of the CHMM approach to accurately account for data from category learning tasks is not a consequence of its added flexibility.
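As a rough illustration of the idea that every stimulus's category assignment is a latent state that can change from trial to trial, the sketch below implements a simplified, independent-chains hidden-Markov filter (a caricature, not the Villarreal and Lee CHMM, which couples the chains and is estimated with Bayesian methods; the stay and accuracy parameters are hypothetical): each stimulus's assignment may switch between trials, whether or not it is presented, and feedback updates the presented stimulus's assignment probability.

```python
import numpy as np

def hmm_category_filter(trials, n_stimuli, stay=0.95, acc=0.9):
    """Forward-filtered P(stimulus s is assigned to category A) after each trial.

    Simplified, independent-chains caricature: every stimulus has a latent binary
    category assignment that can switch between trials (probability 1 - stay),
    and feedback on the presented stimulus is a noisy observation of its current
    assignment (accuracy acc).
    """
    p_a = np.full(n_stimuli, 0.5)
    history = []
    for stim, feedback_is_a in trials:
        # Transition step: every chain (presented or not) may switch assignment.
        p_a = stay * p_a + (1.0 - stay) * (1.0 - p_a)
        # Observation step: feedback informs only the presented stimulus's chain.
        like_a = acc if feedback_is_a else 1.0 - acc
        like_b = 1.0 - acc if feedback_is_a else acc
        num = like_a * p_a[stim]
        p_a[stim] = num / (num + like_b * (1.0 - p_a[stim]))
        history.append(p_a.copy())
    return np.array(history)

# Toy usage: 4 stimuli, feedback indicates stimuli 0-1 belong to A, 2-3 to B.
trials = [(0, True), (2, False), (1, True), (3, False), (0, True), (2, False)]
print(hmm_category_filter(trials, n_stimuli=4).round(2))
```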
Dr. Brandon Turner
Vladimir Sloutsky
Instance (or exemplar) models of memory and inference have been used to explain data from numerous experiments, though they have been criticized for lacking a broader theory of conceptual knowledge. Recently, we showed that instance models can be implemented as the update equation of a class of attractor networks, where varying the amount of competition during retrieval allows the networks to flexibly retrieve both individual items and the means of clusters from the same memories. In this work, we show that the same networks can recover hierarchical category structures, such as those seen in real-world semantic categories. We first consider an artificial hierarchical dataset, finding that a variety of instance-based networks, including Hopfield networks, the Brain-State-in-a-Box model, the MINERVA 2 architecture, modern approaches using lateral inhibition (similar to SUSTAIN and the Adaptive Representation Model), and continuous-valued Modern Hopfield Networks, can each recover hierarchical structures under ideal data conditions. Critically, given an item as a retrieval cue, prototypes of each hierarchical level can be retrieved using a simple attentional mechanism, providing a potential route to deliberately control the information that is retrieved. We then examine more realistic memory representations by storing noisy, pretrained GloVe, Word2Vec, and BERT embeddings, as well as embeddings obtained from human feature norms (McRae et al., 2005), in each architecture. Overall, models with lateral inhibition and nonlinear competitive dynamics can retrieve hierarchical representations with GloVe, Word2Vec, and feature-norm embeddings, while BERT embeddings possess less hierarchical information for the categories we consider.
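The following sketch (a hypothetical toy example, not the networks evaluated in this work) uses a MINERVA 2-style retrieval rule to show how a single stored memory can yield either item-like or prototype-like output, with an exponent standing in for the amount of retrieval competition: a high exponent concentrates the retrieved echo on the cued item, while a low exponent blends traces toward the cluster mean.

```python
import numpy as np

def echo(probe, memory, power=3.0):
    """MINERVA 2-style retrieval: an activation-weighted blend of stored traces.

    The exponent acts as a stand-in for retrieval competition: a large power
    approaches nearest-item (exemplar) retrieval, a small power blends traces
    toward the mean of the cued cluster (a prototype-like response).
    """
    sims = memory @ probe / (np.linalg.norm(memory, axis=1) * np.linalg.norm(probe))
    act = np.clip(sims, 0.0, None) ** power   # simplified nonlinear activation
    return act @ memory / act.sum()

# Toy memory: noisy items stored around two prototypes.
rng = np.random.default_rng(1)
protos = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
memory = np.vstack([p + 0.2 * rng.normal(size=4) for p in protos for _ in range(5)])

probe = memory[0]                                 # cue with a stored item
print(echo(probe, memory, power=20.0).round(2))   # close to the item itself
print(echo(probe, memory, power=1.0).round(2))    # close to the cluster mean
```

Intermediate amounts of competition, or attentional reweighting of features as described in the abstract, would correspond to blends that span subordinate or superordinate clusters, which is one way hierarchical levels could be targeted at retrieval.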