Making inferences about another person's motives from their behavior is integral to how people act in social situations. This requires inverting the cognitive processes or latent states that led to the behavior, yet most artificial intelligence (AI) systems lack estimates of cognitive processes. The present study tests the capacity of an objective pursuit model inspired by approach-avoidance theory to convey information about latent motives by evaluating different AIs trained to infer a human player's intent during a continuous control task. Human players were assigned a goal on each trial: attacking, avoiding, or inspecting (staying close to) the opponent. Some goals additionally had participants defend a location or herd the opponent toward a location. Cognitive model parameters were estimated by simulation-inversion neural networks with recurrent layers to model the sequential data. Deep neural networks that classified a participant's intent were trained by (a) directly using observable information, (b) selecting important features by estimating the parameters of a generative model of movement behavior that balances tensions between objectives, or (c) forming ensemble networks that combine observable information and extracted features. Comparisons of classifier accuracy suggest that latent model parameters can improve intent inference when combined with summary statistics about behavior, yielding faster and more stable network training than networks with no manual feature extraction. Equipping AI with cognitive models is a promising avenue for developing explainable, accurate, and trustworthy systems.
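The abstract does not specify the network architecture, so the following is only a minimal sketch of the ensemble idea in variant (c): one branch consumes observable summary statistics, another consumes the cognitive-model parameters recovered by the inversion network, and a shared head classifies intent. The class name, layer sizes, and input dimensions (12 summary statistics, 5 pursuit-model parameters, 5 intent classes) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EnsembleIntentClassifier(nn.Module):
    """Combine raw behavioral summaries with cognitive-model parameter
    estimates before classifying intent (attack / avoid / inspect / ...)."""

    def __init__(self, n_observable, n_model_params, n_intents, hidden=64):
        super().__init__()
        # Branch (a): observable summary statistics of movement behavior.
        self.obs_branch = nn.Sequential(nn.Linear(n_observable, hidden), nn.ReLU())
        # Branch (b): parameters recovered by the simulation-inversion network.
        self.param_branch = nn.Sequential(nn.Linear(n_model_params, hidden), nn.ReLU())
        # (c): ensemble head over the concatenated branches.
        self.head = nn.Linear(2 * hidden, n_intents)

    def forward(self, observables, model_params):
        h = torch.cat([self.obs_branch(observables),
                       self.param_branch(model_params)], dim=-1)
        return self.head(h)  # logits over intent classes

# Hypothetical dimensions: 12 summary statistics, 5 pursuit-model
# parameters, 5 intents (attack, avoid, inspect, defend, herd).
clf = EnsembleIntentClassifier(12, 5, 5)
logits = clf(torch.randn(8, 12), torch.randn(8, 5))
```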
We examine how individuals evaluate alternatives and construct preferences in naturalistic multi-attribute choice. In our experiment, 1,000 participants each make 100 ternary choices, selecting among images of various activities. While the study design is straightforward, the use of images as stimuli and the presence of latent attributes complicate the inference of preference formation processes. To address this, we employ cognitively structured neural network (NN) models to uncover general patterns in choice behavior. First, a pre-trained convolutional neural network (CNN) transforms each image into a vector of human-interpretable attributes. We then develop a suite of NN models that embed cognitive hypotheses into their network architecture. By fitting these models to both population- and individual-level choice data, we investigate (1) whether alternatives are evaluated independently or contextually, (2) whether latent attributes are assessed individually or in the aggregate, and (3) the extent to which these evaluations follow a linear structure.
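As a rough illustration of how a cognitive hypothesis can be embedded in a network architecture, the sketch below implements the simplest point in the hypothesis space described above: each alternative is evaluated independently, and its CNN-derived attributes combine linearly into a utility. The attribute dimensionality and module names are assumptions; contextual or nonlinear variants would instead feed all three alternatives jointly or replace the linear utility with an MLP.

```python
import torch
import torch.nn as nn

class IndependentLinearChoice(nn.Module):
    """Baseline cognitive hypothesis: alternatives are evaluated
    independently, and latent attributes combine linearly."""

    def __init__(self, n_attributes):
        super().__init__()
        # One shared linear utility applied to every alternative.
        self.utility = nn.Linear(n_attributes, 1)

    def forward(self, attrs):
        # attrs: (batch, 3, n_attributes), three alternatives per trial.
        u = self.utility(attrs).squeeze(-1)   # (batch, 3) utilities
        return torch.log_softmax(u, dim=-1)   # log choice probabilities

# Hypothetical 20-dimensional attribute vectors from the pre-trained CNN.
model = IndependentLinearChoice(n_attributes=20)
log_p = model(torch.randn(64, 3, 20))
```

Comparing the fit of this constrained model against looser variants is what lets the approach adjudicate among questions (1) through (3).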
Designing effective human-AI collaboration for difficult decisions, such as medical diagnostic decisions, is crucial to improving systems and reducing errors and biases. Traditional AI collaboration systems assume that humans can effectively integrate AI outputs and often rely on humans' metacognitive awareness of when to ask for help. As a result, these systems can suffer from overreliance and underreliance, potentially leading to worse performance than humans or machines alone. We propose a collaborative system that uses a cognitive model-driven approach to identify and correct human errors and biases. In this system, when an error in human decision making is detected, the decision is flagged; a human then takes a second look at the flagged decision and makes a second decision. Whether a flag is raised is determined by a cognitive model of similarity coupled with human psychological representations obtained using deep neural networks. We investigate our approach using data from Trueblood et al. (2018) and Trueblood et al. (2021), in which novices (undergraduates) and experts (medical professionals) classify white blood cell images as cancerous or not. For both novices and experts, we demonstrate the effectiveness of our collaborative AI system by flagging decisions that are inconsistent with the decision predicted by the cognitive model. We also show our approach can help correct human perceptual biases using data from Trueblood et al. (2021). Overall, our approach ensures that AI serves as a cognitive aid rather than a mere automation tool, potentially fostering more effective and trustworthy decision-making.
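The abstract names a cognitive model of similarity over DNN-derived representations but not its exact form; the sketch below assumes a GCM-style exemplar model as a stand-in. The exponential similarity kernel, the sensitivity parameter `c`, the 0.5 threshold, and the embedding dimensionality are all illustrative assumptions. The flagging rule itself follows the abstract: flag when the human's choice disagrees with the model's predicted choice.

```python
import numpy as np

def gcm_predict(probe, exemplars, labels, c=1.0):
    """Exemplar-similarity prediction (GCM-style stand-in): probability
    that the probe is 'cancerous' from summed similarity to exemplars."""
    # Similarity decays exponentially with distance in the DNN
    # representation space (a standard exemplar-model assumption).
    d = np.linalg.norm(exemplars - probe, axis=1)
    s = np.exp(-c * d)
    return s[labels == 1].sum() / s.sum()

def flag_decision(human_choice, probe, exemplars, labels, threshold=0.5):
    """Flag for a second look when the human's choice disagrees with
    the cognitive model's predicted choice."""
    model_choice = int(gcm_predict(probe, exemplars, labels) > threshold)
    return model_choice != human_choice

# Hypothetical 128-dimensional DNN embeddings of white blood cell images.
rng = np.random.default_rng(0)
exemplars = rng.normal(size=(200, 128))
labels = rng.integers(0, 2, size=200)
needs_second_look = flag_decision(1, rng.normal(size=128), exemplars, labels)
```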
Modern machine learning models yield vector representations that capture similarity relations between complex items like text and images. These representations can help explain and predict how individuals respond to those items in particular tasks, but only if representations are coupled to a cognitive model of the processes by which those tasks are performed. I introduce C2L ("context to likelihood"), a mathematical transformation of the similarity between vector representations, operationalized as the cosine of the angle between them, into a ratio of the relative likelihood that the two representations encode the same versus different items. The likelihood ratio operationalizes similarity in a manner that is motivated by cognitive theories of perception and memory and is readily "plugged in" to cognitive models. Three example applications show how C2L can be used to compute drift rates of a diffusion decision model based on similarity information derived from various sources, including machine learning models. By thus accounting for the speed and accuracy of decisions about individual items in memory and perception tasks, C2L enables inferences regarding how different people represent items, how much information they encode about each item, and how that information is affected by experimental manipulations. C2L serves both the practical purpose of making it easier to incorporate representations from machine learning (indeed, any method in which similarity is operationalized as a cosine) into cognitive models and the theoretical purpose of allowing cognitive models to grant insight into how people process the increasingly complex, naturalistic items to which machine learning models are applied.
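The abstract does not give the functional form of C2L, so the sketch below is only a rough illustration of the pipeline it describes: turn a cosine between two representations into a same-versus-different likelihood ratio, then map that ratio onto a diffusion-model drift rate. The Gaussian distributions over cosines under each hypothesis, their parameter values, and the drift scaling are placeholders, not the paper's derivation.

```python
import numpy as np
from scipy.stats import norm

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def c2l_ratio(cos_sim, mu_same=0.8, mu_diff=0.2, sigma=0.15):
    """Illustrative stand-in for C2L: likelihood ratio that two vectors
    encode the same vs. different items, assuming (for illustration
    only) Gaussian cosine distributions under each hypothesis."""
    return norm.pdf(cos_sim, mu_same, sigma) / norm.pdf(cos_sim, mu_diff, sigma)

def drift_rate(a, b, scale=1.0):
    # Map the log likelihood ratio onto a diffusion-model drift rate,
    # so more same-like pairs drift toward the "same" boundary.
    return scale * np.log(c2l_ratio(cosine(a, b)))

# Hypothetical 300-dimensional representations of two items.
rng = np.random.default_rng(1)
v = drift_rate(rng.normal(size=300), rng.normal(size=300))
```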