virtual SMP Long Talk
Multisensory and multimodal combinations – both for neurons and perception – generally follow one of three mathematical rules: nonlinear summation (usually vector-like), gated amplification, or weighted averaging. Although there are many possible weighted averages, two rules have particular theoretical appeal – Erwin Schrödinger’s (1926) nonlinear weighted average (proposed for binocular psychophysics) and inverse variance-weighted averaging (MLE), which is the most popular Bayesian model used in perceptual theory. Suppressive multisensory and multimodal neurons are the most natural neural loci for weighted averaging. We obtained five sets of suppressive sensory cortical neuron firing rate data: macaque V1 binocular neurons, macaque MSTd visual-vestibular neurons, cat PLLS audiovisual neurons, ferret PPr visual-tactile neurons, and ferret AAF audio-tactile neurons. We modeled all five sets of suppressive sensory neurons with the Schrödinger and MLE models. In all five cases and by two criteria, the Schrödinger model outperformed the MLE model, but the two models’ outcomes were well correlated. This sidesteps the problem of extracting and storing variances/reliabilities in early stages of cortical processing, while producing an outcome broadly compatible with Bayesian processes. Schrödinger’s nonlinear means could serve Bayesian ends. Supported by a supplement to Office of Naval Research MURI Award #N00014-20-1-2163.
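For readers unfamiliar with the two averaging rules, the sketch below contrasts them numerically. The inverse-variance (MLE) weighting is standard; the Schrödinger rule is shown in its commonly cited self-weighted form, which is an assumption here since the abstract does not spell out the equation the talk uses.

```python
import numpy as np

def mle_average(x, var):
    """Inverse-variance (MLE) weighted average: w_i proportional to 1/var_i."""
    w = 1.0 / np.asarray(var, dtype=float)
    return float(np.sum(w * x) / np.sum(w))

def schrodinger_average(x, power=2.0):
    """Self-weighted nonlinear average, a commonly cited reading of
    Schrodinger (1926): each input supplies its own weight, so with power=2
    two inputs combine as (x1**2 + x2**2) / (x1 + x2). The exact form used
    in the talk may differ."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** power) / np.sum(x ** (power - 1.0)))

rates = np.array([40.0, 10.0])      # two unisensory firing rates (spikes/s)
variances = np.array([4.0, 36.0])   # unequal reliabilities
print(mle_average(rates, variances))   # 37.0: pulled toward the reliable input
print(schrodinger_average(rates))      # 34.0: pulled toward the stronger input
# Both combined responses fall below the stronger unisensory response (40),
# the signature of suppressive combination discussed in the abstract.
```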
Snigdha Sushil Mishra
Sam Lin
Qiong Zhang
Memory has traditionally been studied in well-controlled laboratory environments, which, while effective, do not fully capture the range of dynamics and behaviors shown in real-world contexts. To address this gap, we propose using summarization as a novel task to study memory recall in naturalistic settings. We argue that a key component of summarization is the ability to represent and retain information from the original material. Inspired by approaches in the free recall literature to analyze temporal dynamics of memory recall, such as how recall begins and transitions to subsequent items, we analyzed the temporal dynamics of summary patterns. Using three publicly available summarization datasets and a naturalistic narrative recall dataset, we found alignments between the summarization patterns and established free recall patterns, including primacy, recency, temporal contiguity, and list length effects. These results support that summarization involves processes of memory recall and open up opportunities to use summarization as a naturalistic task to study memory recall in the future.
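The temporal-dynamics analyses referenced here follow the free recall literature; below is a minimal sketch of one such measure, the lag-CRP (conditional response probability as a function of lag), applied to summary sentences that have been aligned to source positions. The toy data and names are illustrative, not the datasets used in the study.

```python
import numpy as np

def lag_crp(recall_orders, list_length):
    """Lag-CRP: for each transition between successively recalled source
    positions, count the observed lag and every lag still available, then
    return observed / available for each lag."""
    lags = np.arange(-(list_length - 1), list_length)
    actual = np.zeros(len(lags))
    possible = np.zeros(len(lags))
    for order in recall_orders:
        recalled = set()
        for prev, nxt in zip(order[:-1], order[1:]):
            recalled.add(prev)
            for candidate in range(list_length):
                if candidate not in recalled:          # still available
                    possible[candidate - prev + list_length - 1] += 1
            actual[nxt - prev + list_length - 1] += 1  # transition made
    with np.errstate(invalid="ignore", divide="ignore"):
        crp = actual / possible
    return lags, crp

# Toy example: three "summaries", each a sequence of source-sentence positions.
orders = [[0, 1, 2, 5], [3, 4, 5, 0], [1, 2, 3, 4]]
lags, crp = lag_crp(orders, list_length=6)
print(dict(zip(lags.tolist(), np.round(crp, 2))))  # contiguity: peak at lag +1
```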
Dr. Jerome Busemeyer
Quantum cognitive models have recently emerged as a powerful framework for explaining a range of biases and fallacies in human decision-making. However, no quantum approach has yet incorporated cognitive control into decision-making processes, despite its critical role in shaping decision outcomes. To address this gap, we introduce the Oscillating Field Perturbation (OFP) model, a quantum cognitive model that formalizes cognitive control using principles from perturbation theory. Building on the Multiple Particle Multiple Well (MPMW) model, a pioneering quantum model of cognitive control, OFP represents attentional strength as the depth of a quantum square well and models cognitive control as the excitation of particles from the ground state to higher-energy eigenstates. We demonstrate that the OFP model successfully replicates key empirical patterns predicted by the Yerkes-Dodson law, a foundational theory linking arousal and cognitive control. Additionally, OFP can be integrated into sequential sampling models for decision-making, offering a novel explanation for empirical variability in deterministic heuristics. To our knowledge, OFP is the first quantum model to formally integrate cognitive control into heuristic decision-making and the first quantum cognitive framework to account for empirical variations of the Yerkes-Dodson law.
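As a toy illustration of the well-depth framing (the OFP equations themselves are not given in the abstract), the sketch below uses the textbook infinite-square-well spectrum in natural units, with the well depth standing in for attentional strength; the OFP model itself uses a finite well and perturbation theory, which this toy omits.

```python
import numpy as np

def square_well_levels(depth, width=1.0, n_levels=4):
    """Textbook infinite-square-well eigenenergies, E_n = n^2 pi^2 / (2 width^2)
    with hbar = mass = 1, shifted so the well floor sits at -depth. 'depth'
    stands in for attentional strength, per the abstract's description."""
    n = np.arange(1, n_levels + 1)
    return -depth + (n ** 2) * np.pi ** 2 / (2.0 * width ** 2)

levels = square_well_levels(depth=50.0)
print(np.round(levels, 1))              # ground state and excited eigenstates
print(round(levels[1] - levels[0], 1))  # gap an arousal 'kick' must overcome
                                        # to excite the ground state
```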
Heart rate (HR) variability tends to decrease when people are unwell, physically or mentally. HR is a complex signal that is chaotic at all possible temporal frequencies, leading to an index that is sensitive to environmental stressors. A Wellness Index estimated from Fitbit/Apple Watch HR data provided accurate tracking of subjective wellness, as measured by the WHO-5, for 5 of 10 people, none of whom had any known chronic illness (Heath & Garnett, 2022). An app for iOS devices has been developed that includes this Heart Rate Index as well as the WHO-5 Wellness Scale, a recognition memory task, a tapping task and a visual tracking task, these five tasks taking about 5-6 minutes to run on an iPhone. The time series data for the HR, tapping and tracking tasks are analysed using an entropy index derived from the multifractal spectra of each series. All five scores are used to update an adaptive novelty-sensitive associative memory, with an accompanying change detection algorithm providing notifications about significant illness changes. All calculations and data acquisition are performed securely in the app using advanced iOS software packages. Possible applications in preventative health monitoring with notifications include Long Covid, Anxiety and Depression, Bipolar Disorder, Diabetes, Healthy Living, etc. Ideally, a medical team can use this app to monitor a client’s health without the latter needing to attend a medical facility.
Qiong Zhang
Despite robust empirical evidence supporting the role of reward in enhancing memory, the relationship between reward and memory shows complex patterns. We present a novel computational model that considers how people optimally allocate limited cognitive resources during memory encoding. Unlike previous accounts, which assume that higher rewards directly lead to stronger memory encoding, we allow our model to adaptively decide how much to encode for each item based on the overall reward environment and one's limited cognitive resources. Our model's predictions align closely with human recall patterns across three experiments. It successfully explains why high-reward items are better remembered than low-reward items only when they are presented together but not separately (Exp 1). Analyzing an existing dataset (Exp 2), our model accounts for how memory is modulated not only by the current reward but also by the rewards of preceding (but not future) items. To further test our proposed model, we collected new data (Exp 3) demonstrating that this insensitivity to rewards of future items can be reversed when participants can anticipate upcoming reward values. These findings provide evidence that memory encoding is an active process involving meta-level control, where cognitive resources are strategically allocated to maximize overall rewards, rather than a passive response to individual reward values.
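A minimal sketch of the kind of allocation problem described here, assuming a saturating encoding-to-recall function and a fixed effort budget (both assumptions, since the abstract does not specify functional forms):

```python
import numpy as np

def allocate_encoding(rewards, budget):
    """Maximize sum_i r_i * (1 - exp(-e_i)) subject to sum_i e_i = budget,
    e_i >= 0. The KKT conditions give e_i = max(0, log(r_i / lam)); lam is
    found by bisection (water-filling)."""
    r = np.asarray(rewards, dtype=float)
    lo, hi = 1e-12, r.max()
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        e = np.maximum(0.0, np.log(r / lam))
        if e.sum() > budget:
            lo = lam          # overspending: raise the threshold
        else:
            hi = lam
    return np.maximum(0.0, np.log(r / (0.5 * (lo + hi))))

# Mixed list: high- and low-reward items compete for the same budget.
print(allocate_encoding([10, 10, 1, 1], budget=4.0))   # high items get it all
# Pure lists: with no within-list competition, allocations equalize.
print(allocate_encoding([10, 10, 10, 10], budget=4.0))
print(allocate_encoding([1, 1, 1, 1], budget=4.0))
```

With this toy objective, high-reward items monopolize the budget only when they compete with low-reward items in the same list, mirroring the mixed- versus pure-list pattern of Exp 1.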
Dr. Gregor Schöner
The ability to map entities based on relational similarities while ignoring featural similarities, a process known as analogical mapping, is fundamental to learning abstract concepts and generalizing across irrelevant differences. Analogical mapping is believed to rely on a general-purpose similarity matching mechanism that can selectively focus on different dimensions depending on task demands. This view is supported by evidence showing that individuals with impaired cognitive control often fail to match objects based on their relational roles and instead are drawn to featurally similar objects. Moreover, the ability to form analogies emerges relatively late in evolution, suggesting that the neural mechanisms supporting analogical mapping must build upon pre-existing neural systems serving other functions. We demonstrate this in a neural process model based on the principles of Dynamic Field Theory, showing how the dynamics of neural populations give rise to the cognitive processes underlying analogical mapping. The model performs analogical mapping between two visual scenes containing simple geometric objects. We propose that this process involves first constructing a conceptual description of the relational structure in one scene and then searching for objects in another scene that fulfill the same description, extending previous models that account for these processes. Consistent with empirical findings, the model fails to map objects relationally when mutual inhibition between relational and featural representations is weak. These results provide insight into the neural dynamics underlying analogical reasoning and its dependence on cognitive control mechanisms.
Dr. Gregor Schöner
Humans can infer the spatial layout of objects based on premises that specify spatial relationships between them. Mental model theory suggests that this process involves constructing a mental model of the premises, which then supports the deduction of novel information. We present a neural process model of spatial relational reasoning based on the principles of Dynamic Field Theory. The model explains how the dynamics of strongly recurrent neural systems generate a sequence of mental events, leading to the formation of an internal representation of spatial premises and the inference of novel information. The cognitive processes in the model emerge autonomously from coupled integro-differential equations describing neural population dynamics, requiring minimal external intervention. Our model builds upon a previous model that interfaces visually grounded and conceptual representations, enabling both the description of visual input and attention to objects specified by spatial relations. Specifically, the component representing the mental model in the current model is analogous to the component of the previous model representing the objects found in the visual input, with most other components remaining unchanged. This continuity demonstrates how a system grounded in sensorimotor processes can be extended to support more abstract cognitive functions, aligning with grounded and evolutionary perspectives on cognition. Furthermore, when provided with indeterminate premises—where the spatial arrangement of objects is not uniquely determined—the model constructs a mental representation that aligns with the preferred models built by humans, adding to its psychological plausibility.
Mr. Minseok Kang
Stephan Sehring
The MathPsych and ICCM communities are committed to using precise mathematical concepts to understand human cognition. While this has been most successful around specific cognitive competences such as decision making, a continuing challenge is to provide a set of theoretical principles that reach all facets of cognition. Cognitive architectures such as ACT-R and SOAR, and neural frameworks such as LISA and DORA, address that challenge, as does Dynamic Field Theory (DFT). DFT postulates that cognitive processes share properties with sensory-motor processes, in that both emerge from the dynamics of neural populations with strong recurrent connectivity. The goal of the workshop is to teach participants the principles of DFT and provide hands-on experience in building models that can be applied in their own research. Structure of the tutorial: 1. Core concepts of DFT: Neural populations formalized as neural fields evolve according to integro-differential equations whose attractor states are the units of representation. Bifurcations of these attractors are the basis for elementary cognitive functions: detection, selection, working memory, sequence generation. 2. Higher cognition: Binding different feature dimensions emerges from coupling neural fields that represent these feature spaces. A small set of mini-architectures provides the foundation of higher cognitive processes. 3. Hands-on modelling: Building neural dynamic cognitive architectures by combining mini-architectures using the programming framework PyCosivina. A model of visual attention and working memory will serve as a worked example. Change detection paradigms will be modeled and predictions demonstrated in simulation.
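For participants who want a preview of item 1, here is a minimal NumPy sketch of the core DFT equation, an Amari-style neural field with local excitation and global inhibition; the tutorial itself uses PyCosivina, so treat this as an illustration with made-up parameter values rather than tutorial code.

```python
import numpy as np

def simulate_field(steps=400, n=101, dt=1.0, tau=10.0, h=-5.0):
    """Minimal 1D neural field: tau * du/dt = -u + h + input + conv(w, f(u)),
    with a local-excitation / global-inhibition kernel and sigmoid output f.
    A localized input is applied, then removed halfway through."""
    x = np.arange(n)
    u = np.full(n, h, dtype=float)
    d = np.subtract.outer(x, x)
    kernel = 6.0 * np.exp(-d ** 2 / (2 * 4.0 ** 2)) - 1.0   # excitation - inhibition
    stim = 6.5 * np.exp(-(x - 50) ** 2 / (2 * 3.0 ** 2))    # localized input
    f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))            # sigmoid output
    for t in range(steps):
        s = stim if t < 200 else 0.0
        u += (dt / tau) * (-u + h + s + kernel @ f(u))
    return u

u = simulate_field()
print(u.max() > 0)  # True: a self-sustained peak survives input removal,
                    # the working-memory attractor named in the abstract
```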
Hanshu Zhang
Prof. Cheng-Ta Yang
The criterion shift induced by prevalence changes has been found to be moderated by the presence of feedback, leading to two opposite trends. With feedback, low prevalence resulted in more conservative criteria (the low prevalence effect, LPE). In contrast, when feedback was absent, low prevalence led to more liberal criteria (prevalence-induced concept change, PICC). Building on previous findings that older adults exhibit reduced sensitivity to PICC in perceptual decision-making, this study examines whether this age effect extends to the LPE when feedback is present. Contrary to our hypothesis, the data indicated that, relative to the regular prevalence condition, older adults in the low prevalence condition exhibited a more pronounced shift towards faster, more conservative, and less discriminable responses. Drift diffusion model analyses further indicated that, in addition to longer non-decision times and more conservative decision boundaries, older adults demonstrated a greater influence of prevalence changes on their drift rate, while showing smaller effects on starting points and criterion shifts. Taken together with prior research on age-related differences in PICC, these findings suggest that older adults engage in distinct cognitive strategies when processing feedback under varying target prevalence conditions.
Prof. Cheng-Ta Yang
Dr. Elizabeth Fox
Systems Factorial Technology (SFT; Townsend & Nozawa, 1995) is a diagnostic tool grounded in theoretical principles that is used to analyze and diagnose information processing. It has been extensively applied in various areas of cognitive research, including visual search, memory search, face perception, and visual world processing, among others. In this symposium, we will present several studies based on SFT, highlighting both theoretical advancements and its application in decision making tasks that reveal underlying cognitive processes.
Prof. Cheng-Ta Yang
Hanshu Zhang
Automation is expected to enhance decision efficiency, yet how decision-makers integrate perceptual and aided information—and whether this leads to more efficient decisions—remains unclear. This study investigates how decision-makers perceive task-relevant information and automation aids, engaging in context-dependent processing. We designed a length judgment task that manipulated task difficulty (easy vs. difficult) and automation accuracy (90% vs. 50%) to examine their effects on decision efficiency, assessed using workload capacity from Systems Factorial Technology (SFT). In Experiment 1, task difficulty was manipulated by increasing decision uncertainty, achieved by enlarging the variance of categorical distributions. In Experiment 2, task difficulty was instead increased by reducing categorical discriminability while keeping the distribution variance constant. The results revealed that under high decision uncertainty, participants exhibited lower efficiency regardless of automation accuracy, characterized by limited-capacity processing. In contrast, under low categorical discriminability, participants were more efficient when automation accuracy was high, demonstrating supercapacity processing, where aided information was effectively integrated to enhance decision performance. These findings suggest that the efficiency of aided decision-making varies depending on contextual factors. When decision uncertainty is high, individuals tend to process task-relevant and aided information separately, leading to lower decision efficiency. Conversely, when categorical discriminability is low, individuals integrate task-relevant and aided information holistically, leveraging automation to improve decision-making. This research highlights the critical role of task difficulty and automation accuracy in shaping decision efficiency and provides insights into the underlying mechanisms, emphasizing the importance of these factors in evaluating automated decision support systems.
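For reference, the OR workload capacity coefficient used here is C(t) = H_AB(t) / (H_A(t) + H_B(t)), with H the cumulative hazard (Townsend & Nozawa, 1995). The sketch below estimates it from simulated RTs, using an independent parallel race as the unlimited-capacity benchmark; the data and rates are illustrative.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t), S from the ECDF."""
    rts = np.sort(np.asarray(rts))
    surv = 1.0 - np.searchsorted(rts, t, side="right") / len(rts)
    return -np.log(np.clip(surv, 1e-9, 1.0))

def capacity_or(rt_double, rt_a, rt_b, t):
    """OR capacity: C(t) = 1 unlimited, < 1 limited, > 1 super capacity."""
    return cumulative_hazard(rt_double, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t))

rng = np.random.default_rng(0)
rt_a = rng.exponential(0.5, 2000) + 0.2    # single-source A trials
rt_b = rng.exponential(0.5, 2000) + 0.2    # single-source B trials
rt_ab = np.minimum(rng.exponential(0.5, 2000),
                   rng.exponential(0.5, 2000)) + 0.2   # independent race
t = np.linspace(0.3, 1.5, 7)
print(np.round(capacity_or(rt_ab, rt_a, rt_b, t), 2))  # approx. 1: UCIP benchmark
```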
Dr. August Capiola
Arielle Stephenson
Mr. Gregory Bowers
Real-time metrics of team efficiency can provide a strategic advantage if appropriately estimated and displayed. The present work leveraged a parametric, time-varying model of team multitasking throughput (tMT) and displayed it in multiple ways. We used the conjugate prior of a Weibull distribution and Bayesian updating to estimate the cumulative reverse hazard function, which serves as a component in computing tMT. Assuming a fixed shape parameter, β, simplified the model, and further relaxing the assumption of stationarity allowed the scale, θ, to vary across time. The posterior estimate of the scale parameter is updated across time using recency-based weighting. In a human-subjects experiment, tMT metrics were provided to users in two ways: a quantitative interface and a graphical interface. A similar weighted approach was taken for a Raw display that updated average response times, accuracies, and number of interventions for each task over time. Team throughput was assessed across display type (and a control ‘no display’ condition) and across two levels of team performance, with findings suggesting that a quick quantitative summary of cost, MT, and tMT (quantitative display) afforded the best overall performance across dual auditory and visual team-based tasks.
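A minimal sketch of the conjugate update described here: with the shape fixed at β, t^β is exponentially distributed, so a gamma prior on the rate is conjugate. The recency weighting below (geometric discounting of the sufficient statistics) is one simple implementation and may differ from the authors' exact scheme.

```python
import numpy as np

def update_weibull_rate(a, b, rts, beta, decay=0.98):
    """Recency-weighted conjugate update for a Weibull with known shape beta.
    With t**beta ~ Exponential(rate lam), a Gamma(a, b) prior on lam is
    conjugate; older observations are down-weighted geometrically."""
    for t in rts:
        a = decay * a + 1.0        # discount old pseudo-counts, add new one
        b = decay * b + t ** beta  # discount old sufficient statistic
    return a, b

def scale_estimate(a, b, beta):
    """Posterior-mean rate -> Weibull scale theta = lam**(-1/beta)."""
    return (a / b) ** (-1.0 / beta)

rng = np.random.default_rng(1)
beta = 2.0
rts = rng.weibull(beta, 500) * 0.8                 # true scale theta = 0.8
a, b = update_weibull_rate(1.0, 1.0, rts, beta)
print(round(scale_estimate(a, b, beta), 3))        # near 0.8, weighted toward
                                                   # the most recent trials
```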
Andrew Cohen
Jeffrey Starns
In the attraction effect, a decoy option increases the choice share of a similar, dominating target option at the expense of a dissimilar, competitor option. The attraction effect has been demonstrated across numerous choice domains, including simple perceptual choice, i.e., selecting the rectangle with the largest area. Recently, researchers have also demonstrated a repulsion effect in perceptual choice, where the decoy increases the competitor’s choice share. Crucially, in these experiments, the target and decoy are more similar and thus easier to compare, which may generate correlations in their perceived areas. Such correlations can lead to reduced target choices, even if, on average, the perceived target and competitor areas remain equal. To examine the possibility that the repulsion effect is generated by correlated target-decoy perceptions, as opposed to a choice bias for the competitor, we employed a psychophysics experiment, coupled with Bayesian hierarchical modeling, to estimate the parameters of a multivariate Gaussian choice model. This work shows that judgements of the decoy and target areas are more strongly correlated than target-competitor or decoy-competitor areas. Furthermore, the model naturally produces a repulsion effect that is qualitatively similar to the results of a choice experiment using the same stimuli. These findings suggest that (this form of) the repulsion effect may stem from fundamentally different processes than the attraction effect.
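The following illustrative simulation, with made-up means and noise levels, shows the mechanism at issue: raising only the target-decoy correlation in a max-rule multivariate Gaussian model increases the competitor's choice share even though the target and competitor means stay equal.

```python
import numpy as np

def choice_shares(rho_td, n=200_000, seed=0):
    """Max-rule choice among target (T), competitor (C), and a dominated
    decoy (D). Perceived areas are multivariate normal; only the T-D
    correlation rho_td is varied, with mean(T) = mean(C)."""
    rng = np.random.default_rng(seed)
    mean = [1.0, 1.0, 0.8]                        # T, C, D (decoy dominated)
    cov = np.array([[1.0, 0.0, rho_td],
                    [0.0, 1.0, 0.0],
                    [rho_td, 0.0, 1.0]]) * 0.1    # common perceptual noise
    percepts = rng.multivariate_normal(mean, cov, size=n)
    return np.bincount(percepts.argmax(axis=1), minlength=3) / n

print(choice_shares(rho_td=0.0))   # baseline shares for T, C, D
print(choice_shares(rho_td=0.8))   # correlated T-D percepts: C's share rises
# Intuition: the maximum of two highly correlated percepts behaves like a
# single draw, so the competitor effectively races fewer independent rivals.
```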
Semantic vectors derived from training on large text corpora (e.g., word2vec, BERT) are widely used as a methodological tool to model similarity of concepts. Recent work has demonstrated that a small amount of human training data can be used to fine-tune these vectors for modeling specific tasks. For example, human ratings of pairwise similarity can be used to estimate a set of dimensional weights, and these weights can improve estimates of human similarity ratings for held-out pairs. We applied this methodology to the semantic fluency task (listing items from a category) and find that category-specific weights can be used to identify the semantic category of a fluency list. The results have methodological implications for modeling retrieval in semantic fluency tasks, estimating semantic representations, and identifying semantic clusters and switches in fluency data.
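A minimal sketch of the weighting idea, assuming the weights enter through a weighted cosine similarity and that a fluency list is assigned to the category whose weights maximize within-list similarity. That decision rule is one plausible reading; the paper's exact rule may differ.

```python
import numpy as np

def weighted_cosine(u, v, w):
    """Cosine similarity with per-dimension weights w (learned from humans)."""
    uw, vw = u * np.sqrt(w), v * np.sqrt(w)
    return uw @ vw / (np.linalg.norm(uw) * np.linalg.norm(vw))

def classify_list(fluency_vecs, category_weights):
    """Score a fluency list under each category's weights; the category whose
    weights make the list items most similar to one another wins."""
    scores = {}
    for cat, w in category_weights.items():
        sims = [weighted_cosine(a, b, w)
                for i, a in enumerate(fluency_vecs)
                for b in fluency_vecs[i + 1:]]
        scores[cat] = float(np.mean(sims))
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
vecs = [rng.normal(size=50) for _ in range(5)]   # stand-in word vectors
weights = {"animals": rng.uniform(0, 1, 50), "tools": rng.uniform(0, 1, 50)}
print(classify_list(vecs, weights))   # arbitrary with random stand-ins; with
# real embeddings, the true category's weights should win
```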
Understanding the organization of mental processes in classical cognitive tasks, such as visual and memory search, is critical for refining theoretical models of cognition. While factorial methods, particularly Systems Factorial Technology (SFT), have provided a powerful framework for distinguishing between serial and parallel processing, their application has been limited by methodological constraints in both theoretical development and the scale of research studies. This study aims to improve the diagnostic validity of factorial methods in investigating cognitive process organization by employing higher-order factorial designs. By extending factorial contrast tests, including Mean Interaction Contrast (MIC) and Survivor Interaction Contrast (SIC), to multi-process cognitive networks, we address previous limitations in experimental designs that restricted their effectiveness in real-world cognitive tasks. Through the systematic manipulation of factorial interactions, we demonstrate how higher-level factorial designs refine the identification of processing architectures and reduce ambiguities in distinguishing mental network structures. This research advances factorial methodologies, expanding their applicability to more complex cognitive systems and improving their utility in experimental psychology.
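For reference, the two contrasts being extended here are, in their standard double-factorial form (Townsend & Nozawa, 1995), sketched below with simulated RTs from a parallel-OR (race) model; the rates and grid are illustrative.

```python
import numpy as np

def survivor(rts, t):
    rts = np.sort(np.asarray(rts))
    return 1.0 - np.searchsorted(rts, t, side="right") / len(rts)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """Survivor Interaction Contrast: SIC(t) = [S_LL - S_LH] - [S_HL - S_HH].
    Signatures: entirely positive -> parallel-OR; entirely negative ->
    parallel-AND; identically zero -> serial-OR; negative-then-positive with
    MIC = 0 -> serial-AND; negative-then-positive with MIC > 0 -> coactive."""
    return (survivor(rt_ll, t) - survivor(rt_lh, t)) \
         - (survivor(rt_hl, t) - survivor(rt_hh, t))

def mic(rt_ll, rt_lh, rt_hl, rt_hh):
    """Mean Interaction Contrast: MIC = (M_LL - M_LH) - (M_HL - M_HH)."""
    m = lambda x: float(np.mean(x))
    return (m(rt_ll) - m(rt_lh)) - (m(rt_hl) - m(rt_hh))

rng = np.random.default_rng(3)
rate = {"L": 2.0, "H": 4.0}
def parallel_or(r1, r2, n=5000):        # independent race: min of two channels
    return np.minimum(rng.exponential(1 / r1, n), rng.exponential(1 / r2, n))
rts = {c: parallel_or(rate[c[0]], rate[c[1]]) for c in ("LL", "LH", "HL", "HH")}
t = np.linspace(0.05, 1.0, 6)
print(np.round(sic(rts["LL"], rts["LH"], rts["HL"], rts["HH"], t), 3))  # >= 0
print(round(mic(rts["LL"], rts["LH"], rts["HL"], rts["HH"]), 3))        # > 0
```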
James T. Townsend
In a two-stage decision paradigm in which a categorization decision is followed by an action decision, it has been shown and replicated over the past two decades that choice behavior when both decisions are explicitly measured is inconsistent with choice behavior when only the action decision is measured explicitly. Such an inconsistency in choice behavior, referred to as the interference effect, violates fundamental properties of probability theory, namely the law of total probability and the Markov property, and thus challenges a wide range of classical cognitive models of decision-making. By extending the application of a set of theory-driven, response-time-based measurements, the current study probed the underlying cognitive structure of the categorization and action decisions within the two-stage decision paradigm. The results suggest that the interference effect is closely tied to cognitive systems that deliberate the categorization and action decisions in parallel and, moreover, that these two deliberations interact in a facilitatory manner. These findings lay a solid foundation for further theoretical modeling efforts aimed at revealing the underlying cognitive mechanisms that can produce the interference effect.
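The classical benchmark being violated can be stated in a few lines; the numbers below are illustrative, not the reported data.

```python
# Law of total probability in the two-stage paradigm: classical probability
# requires p(A) = p(C) p(A|C) + p(~C) p(A|~C). The interference term delta is
# the deviation observed when only the action is measured (A-alone condition).
p_c = 0.7           # p(categorize stimulus one way)      -- illustrative
p_a_given_c = 0.8   # p(act | that categorization)        -- illustrative
p_a_given_nc = 0.3  # p(act | other categorization)       -- illustrative
p_a_alone = 0.69    # p(act) when categorization is not reported

p_a_total = p_c * p_a_given_c + (1 - p_c) * p_a_given_nc
delta = p_a_alone - p_a_total
print(f"total-probability prediction: {p_a_total:.3f}")   # 0.650
print(f"interference term delta: {delta:+.3f}")           # nonzero -> violation
```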
Prof. Stephane Hess
Dr. Thomas Hancock
Prof. Gustav Markkula
Pedestrian decision-making in road-crossing scenarios involves real-time sensory integration, risk assessment, and adaptive learning based on past experiences. Traditional models such as Drift Diffusion Models (DDMs) effectively capture momentary evidence accumulation but overlook learning mechanisms, while Reinforcement Learning (RL) frameworks excel at modeling experience-driven adaptations but lack real-time decision dynamics. In this study, we introduce a novel framework, Reinforcement Learning-Drift Diffusion Model (RL-DDM) that unifies these approaches, providing a comprehensive mathematical framework. We apply this to model pedestrian crossing decisions under uncertainty. Our model incorporates dynamically modulated drift rates, urgency-dependent decision boundaries, and feedback-driven learning mechanisms, capturing how individuals adapt crossing strategies based on trial-by-trial experiences and time-to-arrival (TTA) evaluations. Using hierarchical Bayesian inference, we fit the RL-DDM to experimental pedestrian crossing data, demonstrating superior predictive accuracy over standard DDM and RL models. Results reveal that learning-based drift adjustments and collapsing boundaries significantly improve alignment with observed crossing behaviors, highlighting the interplay between real-time evidence accumulation and long-term adaptive learning. Our findings offer new insights into cognitive models of decision-making in dynamic environments, bridging mathematical psychology with transportation research and pedestrian safety interventions.
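A minimal sketch of the RL-DDM idea, assuming a delta-rule value update, a value-scaled drift, and a linearly collapsing bound. Parameter names and values are illustrative, not the fitted model from the paper.

```python
import numpy as np

def simulate_rl_ddm(n_trials=200, alpha=0.1, k=2.0, a0=1.5, collapse=3.0,
                    dt=0.001, sigma=1.0, p_safe=0.8, seed=4):
    """Toy RL-DDM: drift scales with a learned value of crossing, the bound
    collapses with elapsed time (urgency), and feedback updates the value."""
    rng = np.random.default_rng(seed)
    v_cross = 0.0
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while True:
            bound = max(0.1, a0 - collapse * t)    # collapsing boundary
            if abs(x) >= bound:
                break
            x += k * v_cross * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        crossed = x > 0
        if crossed:                                 # feedback only if crossed
            reward = 1.0 if rng.random() < p_safe else 0.0
            v_cross += alpha * (reward - v_cross)   # trial-by-trial RL update
        choices.append(crossed)
        rts.append(t)
    half = n_trials // 2
    return np.mean(choices[:half]), np.mean(choices[half:]), np.mean(rts)

early, late, mean_rt = simulate_rl_ddm()
print(early, late, mean_rt)  # crossing becomes more likely as value is learned
```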
Dr. Thomas Hancock
Prof. Stephane Hess
Decision Field Theory (DFT) provides a dynamic and psychologically grounded framework for discrete choice modelling, yet direct estimation of its parameters can be computationally challenging in practice due to its complex dynamics and large parameter space. By contrast, Multinomial Logit (MNL) models are relatively straightforward to estimate and offer a more tractable representation of discrete choice behaviour. In this paper, we propose an indirect inference approach that bridges the two methods using machine learning techniques. Specifically, we generate synthetic choice datasets from simulated DFT parameters and estimate MNL models on those datasets to obtain corresponding MNL parameter estimates. We then train an Artificial Neural Network (ANN) to learn an “inverse mapping” from these MNL estimates back to the original DFT parameters. Once trained, the ANN is applied to a separate dataset—where MNL parameter estimates and true DFT parameters are both available—and successfully recovers the DFT parameters to within just a few percentage points of their true values. This empirical application shows that the combination of theory-driven simulation for generating training data and flexible machine learning for learning the inverse mapping provides a practical indirect inference framework that effectively learns the dynamics of complex models such as DFT from an easily estimated MNL model. Such an approach offers a computationally efficient pathway to recover DFT parameters without fitting a DFT model directly. In addition, because MNL estimation is less prone to local optima than direct DFT estimation, the proposed approach helps mitigate the risk of suboptimal parameter recovery.
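The pipeline can be sketched compactly. In the sketch below, the DFT simulator and the MNL estimation step are replaced by a toy simulator and toy summary statistics, so only the indirect-inference logic (simulate, estimate auxiliary parameters, learn the inverse map) is faithful to the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def simulate_auxiliary(theta):
    """Stand-in for 'simulate choices from DFT parameters theta, then fit an
    MNL': returns noisy summaries playing the role of MNL estimates."""
    t1, t2, t3 = theta
    stats = np.array([t1 * t2, np.tanh(t2 + t3), t1 - t3 ** 2])
    return stats + rng.normal(0, 0.05, size=3)

thetas = rng.uniform(-1, 1, size=(5000, 3))               # simulated draws
aux = np.array([simulate_auxiliary(t) for t in thetas])   # auxiliary estimates
inverse_map = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(aux, thetas)

theta_true = np.array([0.4, -0.3, 0.6])
theta_hat = inverse_map.predict(simulate_auxiliary(theta_true)[None, :])
print(theta_true, np.round(theta_hat[0], 2))   # approximate recovery via ANN
```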
Prof. Stephane Hess
Dr. Thomas Hancock
Prof. Charisma Choudhury
Dr. Faisal Mushtaq
Prof. Mark Mon-Williams
Dr. Eran Ben-Elia
Recent advances in mathematical psychology have sought to bridge cognitive neuroscience and choice modeling, yet mainstream decision frameworks rarely incorporate mechanistic accounts of brain function. In this study, we introduce the Free Energy Principle (FEP)—a leading theory from neuroscience—as a novel approach to modeling dynamic choice behavior. The FEP posits that agents minimize surprise by updating internal beliefs to align with external states, offering a unifying perspective on learning, uncertainty reduction, and exploration-exploitation trade-offs. We develop an FEP-based model of route choice under uncertainty, capturing how individuals adapt to changing environments by updating beliefs about travel times. Using experimental data from 49 participants across 300 sequential choice tasks, we compare the FEP model to traditional reinforcement learning (RL) approaches. Our results demonstrate superior predictive accuracy for the FEP model (mean LL: -65.48 vs. mean LL: -73.02 for RL), highlighting its ability to account for belief updating, action precision, and memory decay. Crucially, FEP captures the differential impact of information availability on learning rates and exploration tendencies, offering a principled, biologically grounded alternative to conventional decision models. This work paves the way for new applications of active inference in behavioral modeling, with implications for transportation research, policy design, and broader cognitive modeling domains.
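A toy sketch of the three ingredients named here, using Gaussian beliefs updated by precision-weighted prediction errors (belief updating), a softmax with precision gamma (action precision), and decay of the unchosen route's precision (memory decay). All functional forms and values are illustrative stand-ins, not the authors' FEP model.

```python
import numpy as np

def fep_route_choice(n_days=100, true_times=(20.0, 25.0), obs_sd=3.0,
                     gamma=0.5, decay=0.02, seed=6):
    rng = np.random.default_rng(seed)
    mu = np.array([22.0, 22.0])        # prior mean travel time per route
    pi = np.array([0.05, 0.05])        # belief precisions (1/variance)
    choices = []
    for _ in range(n_days):
        z = -gamma * mu                # prefer routes believed to be faster
        z -= z.max()
        p = np.exp(z) / np.exp(z).sum()
        a = rng.choice(2, p=p)
        obs = true_times[a] + rng.normal(0, obs_sd)
        obs_pi = 1.0 / obs_sd ** 2
        mu[a] = (pi[a] * mu[a] + obs_pi * obs) / (pi[a] + obs_pi)  # update
        pi[a] += obs_pi
        pi[1 - a] = max(0.01, pi[1 - a] - decay)   # decay on the unchosen
        choices.append(a)
    return mu, float(np.mean(choices[-20:]))

mu, late_slow_share = fep_route_choice()
print(np.round(mu, 1), late_slow_share)  # beliefs roughly (20, 25); the slow
                                         # route is rarely chosen late on
```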
Prof. Stephane Hess
Dr. Thomas Hancock
Valence framing, where equivalent information is presented in either a positive or negative light, is a nuanced topic that has previously not been well accounted for in the choice modelling literature. This may explain inconsistent results within the field and findings contrary to other fields, such as behavioural economics, where framing definitions are typologised. We endeavour to bridge this gap across fields by introducing a valence framing typology into a discrete choice experiment in a transport context, aiming to show that different types of framing deserve more prominence within choice modelling because they can produce differing results. Furthermore, a discrete choice experiment allows these three valence framing types to be investigated simultaneously, in a single multi-attribute decision context, which has not been explored previously. We ask participants to complete a series of choice tasks where attributes are framed using different valences and different types of framing. We gain insights into participants’ risk preferences, and we find a relationship between response duration and frame condition, although no main framing effects are identified. We also consider the potential reasons for this and explore further avenues for analysis.
Prof. Stephane Hess
Dr. Thomas Hancock
Prof. Michiel Bliemer
Dr. Matthew Beck
Dr. Muhammad Fayyaz
Dr. Eran Ben-Elia
Decision-making in dynamic environments often involves learning through experience. A key example is travel behavior, where individuals must explore and adapt their route preferences based on past outcomes, gradually refining both their route perceptions and preferences. While human reinforcement learning (RL) models have been extensively used in mathematical psychology to explain adaptive decision processes, their application in real-world travel settings remains limited. In this study, we leverage RL frameworks to investigate route choice behavior, applying it to data from both driving simulator experiments and stated preference surveys. Our findings reveal systematic learning effects, with substantial heterogeneity in individual exploration-exploitation tradeoffs and sensitivity to feedback. Crucially, we demonstrate how experimental context (real-time experience vs. hypothetical scenarios) shapes learning dynamics, influencing the stability of preferences over time. By estimating hierarchical Bayesian RL models, we quantify inter-individual differences in learning rates and information updating, providing novel insights into how decision-makers balance uncertainty, risk, and feedback in sequential travel choices. Our results highlight the potential of reinforcement learning as a bridge between mathematical psychology and transport research, offering a powerful framework for modeling adaptive choice behavior in dynamic environments.
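The basic learner underlying such analyses can be sketched in a few lines; varying the learning rate and inverse temperature across simulated individuals illustrates the exploration-exploitation heterogeneity discussed above (values are illustrative, and the hierarchical Bayesian estimation layer is omitted).

```python
import numpy as np

def route_rl(alpha, beta, n_trials=200, mean_times=(20.0, 25.0), sd=4.0, seed=7):
    """Delta-rule route learning with softmax exploration. alpha = learning
    rate, beta = inverse temperature. Returns the share of choices of the
    objectively faster route (route 0)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                 # value = negative experienced travel time
    fast = 0
    for _ in range(n_trials):
        z = beta * q
        z = z - z.max()
        p = np.exp(z) / np.exp(z).sum()
        a = rng.choice(2, p=p)
        reward = -(mean_times[a] + rng.normal(0, sd))
        q[a] += alpha * (reward - q[a])      # trial-by-trial update
        fast += int(a == 0)
    return fast / n_trials

# Heterogeneous simulated travellers differ in learning and exploration.
for alpha, beta in [(0.05, 0.2), (0.3, 0.2), (0.3, 1.0)]:
    print(f"alpha={alpha}, beta={beta}: fast-route share = "
          f"{route_rl(alpha, beta):.2f}")
```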
Dr. Thomas Hancock
There is increasing acknowledgement – including from the UK government – of the benefit of employing deliberative processes (deliberative fora, citizens’ juries, etc.). Current analytical methods for public debates are qualitative. Evidence suggests that the reporting of deliberative fora is often unclear or imprecise. If this is the case, their value to policymakers could be diminished. In this study, we expand the methods of deliberative processes to numerically document people’s preferences, as a complement to qualitative analysis. Data are taken from the National Food Conversation, a nationwide (UK) public consultation on reform of the food system comprising 339 members of the general public. Each participant attended 5 workshops, each of which debated its own subtopic of the food system. In each workshop, individuals twice rated, from 0-10, the responsibility of 5 bodies (governments, the food industry, supermarkets, farmers, individuals) for changing the food system. Analyses examined individuals’ perceptions of responsibility for food system change. Governments were rated most responsible and farmers least so. We further assessed variation over time, by workshop content, and by demographics. Across workshops, responsibility ratings changed most for individuals, and least for the food industry. We devise a dynamic choice model to document a reversion effect, whereby shifts in responsibility within workshops waned over time, with preferences often reverting to pre-workshop levels. Crucially, this effect was weaker for those who abstained from voting, implying that preferences are harder to shift for those who already vote. These results can support qualitative analyses and inform food system policy development. These methods are readily adapted to any such deliberative process.
The ability to flexibly generate and execute structured action sequences is fundamental for goal-directed behavior. It requires the capacity to form intentions, maintain them over time, and dynamically adapt execution based on environmental conditions. The underlying mechanism for sequential action control is thought to be a general-purpose process of maintaining and updating structured representations of intentions, allowing for flexible transitions between actions. Importantly, cognitive processes underlying structured action have to emerge from the continuous dynamics of neural systems. We demonstrate these principles in a neural dynamic process model based on Dynamic Field Theory, which autonomously drives a robotic agent to execute sequences of structured pick-and-place actions. The model forms neural representations of action intentions, binding actions to objects and maintaining activation over time. Selection processes ensure that actions unfold in a structured manner, while perception-driven updates enable flexible adaptation. The model successfully generates and executes structured action sequences by continuously integrating sensory input, working memory, and motor representations. We propose that the ability to autonomously structure actions arises from the interaction of sustained neural activation, competitive selection, and perceptual grounding, in line with empirical findings on sequential action generation.
Dr. Catherine Manning
Dr. Jamal Amani Rad
Children with dyslexia exhibit atypical motion processing, yet the underlying cognitive mechanisms remain debated. While magnocellular differences have been proposed, recent evidence suggests that higher-level processes such as evidence accumulation and decision-making may also play a role. To disentangle these mechanisms, we employed both two-choice and continuous-response random dot motion (RDM) tasks, manipulating motion coherence and direction integration. A total of 42 children with dyslexia and 33 typically developing peers (ages 7–13) completed these tasks, enabling a comprehensive examination of motion perception at multiple levels of analysis. We applied the drift-diffusion model (DDM) to two-choice tasks and the circular diffusion model (CDM) to continuous-response tasks, providing a fine-grained analysis of evidence accumulation and decision processes. Children with dyslexia exhibited a significantly lower drift rate across all tasks, indicating reduced efficiency in extracting sensory evidence from global motion. However, non-decision time remained comparable between groups, suggesting that early sensory encoding differences alone cannot fully account for group differences in performance. These findings support the hypothesis that dyslexia-related motion processing differences extend beyond magnocellular functioning and involve differences in evidence accumulation and decision-making dynamics. By integrating both discrete- and continuous-outcome decision frameworks, our approach provides deeper insight into the nature of motion perception differences, offering a more detailed characterization of motion errors in dyslexia.
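For readers unfamiliar with the CDM, a simulation sketch helps: a 2D diffusion with drift is absorbed at a circular criterion, and the response angle is read off at absorption (after the circular diffusion model often attributed to Smith, 2016). Parameters are illustrative.

```python
import numpy as np

def simulate_cdm(drift_angle=0.0, drift_mag=1.5, bound=1.0, sigma=1.0,
                 dt=0.001, n=500, seed=8):
    """2D Wiener process with drift, absorbed at a circle of radius 'bound'.
    Response = angle at absorption; RT = absorption time."""
    rng = np.random.default_rng(seed)
    mu = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
    angles, rts = np.empty(n), np.empty(n)
    for i in range(n):
        x = np.zeros(2)
        t = 0.0
        while x @ x < bound ** 2:
            x += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
            t += dt
        angles[i], rts[i] = np.arctan2(x[1], x[0]), t
    return angles, rts

for drift in (1.5, 0.5):   # lower drift ~ less efficient evidence extraction
    angles, rts = simulate_cdm(drift_mag=drift)
    print(drift, round(np.mean(np.abs(angles)), 2), round(np.mean(rts), 2))
# Lowering the drift magnitude widens the angular-error distribution and
# lengthens RTs, the signature reported here for children with dyslexia.
```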
Dr. John Buckell
A key issue in modelling multi-attribute, multi-alternative decision-making is that there are many sources of heterogeneity that lead to different alternatives being chosen. Consequently, it is difficult to disentangle complex psychological effects with preference heterogeneity. For example, it is hard to disentangle non-attendance and ‘not caring much’ for a given attribute without eye-tracking data. In this paper, we develop a generalised discrete mixture (DM) model, which allows for combinations of parameter estimates rather than grouping parameter estimates as is the case in latent class models. Additional parameters in the class allocation component allow the model to collapse to a standard DM or LC structure as best fits the data at hand. This means that the model, by definition, performs at least as well as the best of a standard DM and a LC model. Additional benefits include that it (a) allows the data to tell us the underlying correlations of preferences and effects (additionally demonstrated with simulated data), (b) does not rely on distributions as is the case for mixed logit models, meaning estimation times are reduced and it does not require assumptions on the distribution of parameters (demonstrated with an application to health choice data) and most importantly (c) can help allow for the separate identification of preference heterogeneity and decision-making process heterogeneity (e.g. different thinking speeds, demonstrated through use of a discrete mixture decision field theory application).
To investigate the mental representation of numerosity, one common method is to ask participants to estimate the number of objects in a group, such as a collection of dots. Numerosity perception studies using this kind of conversion from non-symbolic stimuli to symbolic stimuli like numbers indicate an underestimation bias, such that responses are generally smaller than the actual number. These findings are explained by a mental numerosity representation based on a logarithmically compressed mental number line. In the current study, the effect of feedback frequency on underestimation errors was investigated in two experiments. In both experiments, participants were asked to respond whether the presented number of dots was greater than 50. In the first session of each experiment, feedback was withheld. In the second sessions, feedback was provided, but its frequency was manipulated across experiments. In Experiment 1, feedback was presented after each trial, while in Experiment 2, feedback was provided only once at the beginning of the second session. In both experiments, a similar amount of underestimation error was found when feedback was not provided. The finding of an underestimation bias supports the idea of a logarithmically compressed mental number line. Providing feedback, even when presented only once, led to more accurate responses, suggesting a persistent calibration of the mental number line.
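One simple way to formalize the compressed-number-line account and the effect of calibration is a power-law (log-scale) miscalibration that feedback pushes toward veridical. This is an illustrative formalization, not the paper's model; gamma and the noise level are made-up parameters.

```python
import numpy as np

def estimate_numerosity(n, gamma=0.8, noise_sd=0.1, rng=None):
    """Map numerosity through a compressed log scale and back: with exponent
    gamma < 1 the inverse mapping is miscalibrated, yielding estimates near
    n**gamma, i.e., underestimation that grows with n."""
    rng = rng or np.random.default_rng(9)
    internal = gamma * np.log(n) + rng.normal(0, noise_sd)  # compressed percept
    return np.exp(internal)                                  # verbal estimate

rng = np.random.default_rng(9)
dots = np.array([20, 50, 80])
before = [np.mean([estimate_numerosity(n, gamma=0.8, rng=rng)
                   for _ in range(500)]) for n in dots]
after = [np.mean([estimate_numerosity(n, gamma=0.95, rng=rng)
                  for _ in range(500)]) for n in dots]
print(np.round(before, 1))  # systematic underestimation, growing with n
print(np.round(after, 1))   # feedback recalibration: estimates approach truth
```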
Mr. Hasan Qarehdaghi
Rolf Ulrich
Dr. Jamal Amani Rad
Conflict processing is a fundamental aspect of decision-making, traditionally studied using discrete-choice paradigms, where a response should be provided based on relevant informational sources while conflicting irrelevant sources must be ignored. However, real-world decisions often require selecting responses along continuous scales rather than between binary options. To bridge this gap, we introduce a novel adaptation of the Flanker task that employs a continuous decision space. Instead of discrete left/right arrow stimuli, targets and flankers are presented at varying angles within a 360-degree space, allowing for a graded spectrum of congruency based on the angular difference between target and flanker. An angular difference of 0 indicates the highest level of congruency, while a difference of 180 degrees represents the lowest level of congruency. Our results reveal systematic variations in response error and reaction time (RT) as a function of congruency. Specifically, response errors followed a structured pattern: at congruencies of 0°, 90°, -90°, and 180°, errors were minimized, indicating accurate identification of the target’s direction. At intermediate congruencies, systematic biases emerged—at 45° and -135°, participants tended to deviate in the negative direction relative to the flanker, whereas at -45° and 135°, deviations occurred in the positive direction. RTs were lowest for high-congruency trials and increased as congruency decreased. These findings suggest that conflict effects in continuous decision spaces follow structured, predictable patterns, highlighting the necessity of expanding theoretical models of conflict processing beyond discrete-choice frameworks. Our study lays the groundwork for future investigations into dynamic response strategies and attentional control in continuous conflict tasks.
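The graded congruency manipulation reduces to a signed angular difference; the sketch below shows the wrapping arithmetic, which also fixes the sign convention for the deviations reported above.

```python
def congruency(target_deg, flanker_deg):
    """Signed angular difference wrapped to (-180, 180]: 0 = fully congruent,
    +/-180 = maximally incongruent, intermediate values = graded conflict."""
    d = (flanker_deg - target_deg + 180.0) % 360.0 - 180.0
    return 180.0 if d == -180.0 else d

for t, f in [(0, 0), (0, 45), (0, -45), (90, 270), (350, 10)]:
    print(t, f, congruency(t, f))   # 0, 45, -45, 180, 20
```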
Anne Collins
Prof. Hamidreza Pouretemad
Dr. Jamal Amani Rad
Human learning is driven by multiple interacting cognitive processes, which operate in parallel to shape decision-making in dynamic environments. Previous studies investigating the interaction between reinforcement learning (RL) and working memory (WM) suggest a dynamic interplay between the two. WM facilitates rapid early learning but is capacity-limited, leading to interference. RL, in contrast, gradually updates action values and enhances decision-making stability in high WM load, ultimately improving policy optimization over time. Here, we extended prior work by employing the RLWM task with a continuous response space featuring three discrete action targets to manipulate WM load, an aspect that has remained unexplored. We collected behavioral data from 85 participants who completed the RLWM task, including a surprise testing phase. Our findings reveal that under high WM load, RL plays a greater role, gradually strengthening stimulus-response associations that are later recalled more reliably during the test phase. While WM facilitates rapid early learning, this speed comes at a cost: associations learned quickly are more prone to forgetting, whereas those acquired more slowly under greater reliance on RL exhibit enhanced retention. These results highlight the interaction between WM and RL in shaping learning and memory retention. Future studies should further investigate this interaction in a continuous response space, which may better capture decision-making dynamics.
Prof. Joe Houpt
Psychophysical research on color filters tends to focus on information lost directly due to a filter. However, many interactions, particularly those relying on color, depend on relationships among and configurations of information sources. In the current research, we focus on narrow-band color filters and how those filters may disrupt the perception of target shape cues. By adapting multiple facets of stimuli, we examine how these disruptions influence attention and information accumulation through the application of a Linear Ballistic Accumulator Omissions Model. We will report on the impact of disruption on visual stimuli with filtered overlays within a search detection task and their influence on varying perceptual cues that may affect the rate of information processed by an individual. We find performance with unfiltered search arrays to be more accurate and faster. LBAO model fits indicate that both information accumulation rate and response caution increase when color filters are applied.
Anderson Fitch
Dr. Peter Kvam
To assess subjective value, people dynamically allocate attention to different attributes of their available options. As a measurable proxy for attention, eye fixation offers an external window into what information is attended and incorporated into preferences. Some preferential choice models, such as attentional drift diffusion models, incorporate gaze information to better understand both choice data and fixation patterns. However, the connection between gaze and other tasks, such as pricing, is not well understood. To shed light on this question, we created and tested a model for multi-attribute pricing, applying it to a risky-intertemporal pricing task. In this task, 31 participants reported how much they would pay to acquire delayed and/or risky payoffs (e.g., a 30% chance of $20 in 100 days). Participants showed a strong preference for looking at the payoff attribute first and switched gaze mostly between payoffs and either secondary attribute (risk or delay), suggesting anchoring based on payoff during information sampling. This behavior was captured well by a model integrating process-based interaction of attention, information accumulation, and anchoring. We show that the diverging patterns of attention allocation between pricing and choice models can help explain preference reversals across preference elicitation procedures.
Lisa Dierker
Passion-Driven Statistics (https://passiondrivenstatistics.com) is an NSF-funded, project-based introductory statistics curriculum that supports students in conducting original research with real-world data from the very first day. Datasets are provided, or an instructor can use one of their own choosing. The curriculum is built around the Guidelines for Assessment and Instruction in Statistics Education. Traditional topics for an introductory statistics course are covered. Students work with descriptive and inferential statistics as well as basic statistical programming concepts and skills in the pursuit of managing and analyzing data. This original work is presented at a research poster session in which students have the opportunity to describe their process of inquiry, including the different decisions made along the way, their premises, conclusions and any barriers faced. Liberal arts colleges, large state universities, regional colleges/universities, medical schools, community colleges, and high schools have all successfully implemented the model. All resources, including student learning materials, are freely available to any instructor planning an authentic data-driven research curriculum for use across a variety of disciplines, and for engaging students at many different levels, including complete beginners. Resources include lecture videos, exams, assignments, etc., and current instructors who use the model offer their support. The model and resources are flexible and adaptable to meet your students’ needs and your classroom goals, whether you use one assignment or the full turnkey model. The project-based course enrolled higher numbers of underrepresented minority (URM) students than a traditional introductory statistics course (Dierker et al., 2015). Higher rates of female URM students, and students with a wider range of mathematical aptitude, enrolled in the project-based course compared to both a general introductory programming course and an introductory course representing a gateway to the computer science major (Cooper & Dierker, 2017). Students enrolled in the course had more positive course experiences than those enrolled in a traditional course, including a better understanding of the information presented through one-on-one support, greater preparation for class, and finding the course more useful and gauging its reward and feelings of accomplishment more highly (Dierker et al., 2018). Recent findings suggest the course may contribute to students’ decisions to enroll in future statistics and data analysis courses when compared to the psychology and mathematics department courses (Nazzaro et al., 2020). Similar findings with the curriculum among undergraduate students in Ghana, West Africa, demonstrate the potential for its global portability (Awuah et al., 2020).
Dr. Greg Cox
We present four experiments that examine perception and memory for a novel set of auditory stimuli, using multidimensional scaling and cognitive modeling to clarify how people perceive and recognize these items. The stimuli are auditory “textures” constructed by adjusting the distribution of power across upper frequency bands. In Experiment 1, people rated similarity between pairs of stimuli; in Experiments 2 and 3, they also engaged in a recognition memory task using the same stimuli. In Experiment 4, they did all the same tasks from the first three experiments, and rated stimuli for distinctiveness. Multidimensional scaling suggested the stimuli were perceived along three dimensions, a result which replicated across all four experiments. Similarity ratings, recency, and list homogeneity predicted recognition performance, but distinctiveness ratings did not. The Exemplar-Based Random Walk model (Nosofsky & Palmeri, 1997) accommodated all these effects. Taken together, our findings extend prior work (Visscher et al., 2007) to show that memory and attention processes in the auditory domain are fundamentally like those in the visual domain—though particularly strong recency effects in the auditory domain may be due to the unique structure of echoic memory. We conclude by discussing how the stimuli introduced in these experiments can be used as “building blocks” to test hypotheses about perception and memory for complex, naturalistic sounds like speech or music while retaining tight experimental control.
Michael Lee
Two major open questions in the study of reasoning are (1) what functions people compute to draw conclusions from given pieces of information, or premises; and (2) how people interpret the meanings of the premises they draw conclusions from. For example, how justified is it to conclude "they travelled by train" on the basis that "If they went to Ohio, then they travelled by train" and "they went to Ohio", and why? Although these questions have been debated for thousands of years, it is typically difficult to distinguish competing theories empirically because they tend to be defined only verbally, not computationally; and because they usually overlap in the predictions they make. This talk presents the current state of an ongoing project in which we translate verbal theories of how people reason with and interpret conditional premises like "If they went to Ohio, then they travelled by train" into computational form. Building on the hypothesis that people try not to contradict themselves when reasoning, we derive sets of internally consistent conclusions for a range of inferences and premise interpretations, and formalize them as components of a Bayesian latent-mixture model. Applying the model to simulated and existing reasoning datasets, we illustrate how different combinations of inferences provide more or less information for distinguishing between competing theories based on the specificity and degree of overlap in their predictions.
Ms. Heather Statham
Mr. Phil Schmid
The stop-signal reaction time (SSRT) is amongst the most popular measures of executive function. Individual differences in SSRT are largely believed to reflect differences in the speed of top-down inhibitory signals. This belief is, however, largely misguided since, as I will show once more, about 25% of the variance in SSRT is shared with speeded reaction time in tasks where no top-down inhibition is required. This strong overlap is theoretically expected, as both measures include the same incompressible sensory and motor delays, which are large, vary across participants and remain stable over time. Considering the documented poor stability of SSRT over time, 25% represents a large proportion, one that may well drive many effects related to SSRT in the literature. To provide a measure of top-down stopping which, in contrast to SSRT, acknowledges sensory and motor delays, one can use a selective stopping task, which interleaves signal-stop, signal-ignore and signal-absent trials. I will present results from 30 participants each performing manual and saccadic selective stopping tasks over thousands of repetitions. This rich dataset allows the extraction of two functionally meaningful landmarks from the reaction time distributions: the time T0 when the signal-present and signal-absent distributions diverge, and the time TS when the signal-stop and signal-ignore distributions diverge. T0 results from automatic inhibition and indicates the lower bound of visuo-motor delay (Bompas et al., 2024), and is the same for signal-ignore and signal-stop trials. TS is the earliest time a participant is able to apply the stopping instruction selectively. The delay between T0 and TS reflects the time taken to turn automatic signals into task-selective stopping signals. This delay is around 80 ms on average. Using the DINASAUR model (Bompas et al., 2020), we show that this top-down stopping delay is much longer than the top-down excitatory delay needed to turn automatic signals into a go response. Bompas, A., Campbell, A. E. and Sumner, P. (2020). Cognitive control and automatic interference in mind and brain: A unified model of saccadic inhibition and countermanding. Psychological Review. https://doi.org/10.1037/rev0000181 Bompas, A., Sumner, P. and Hedge, C. (2024). Non-decision time: the Higgs boson of decision. Psychological Review. https://doi.org/10.1037/rev0000487
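A simplified sketch of how such divergence points can be read off empirical RT distributions; the threshold rule and simulated distributions below are stand-ins for the formal estimation used in the talk.

```python
import numpy as np

def divergence_point(rts_a, rts_b, t_grid, threshold=0.02, run=25):
    """Earliest time at which two RT distributions diverge: the first grid
    point where the gap between empirical CDFs exceeds 'threshold' for 'run'
    consecutive points. A simplified stand-in for formal divergence tests."""
    cdf = lambda r, t: np.searchsorted(np.sort(r), t, side="right") / len(r)
    gap = np.abs(cdf(rts_a, t_grid) - cdf(rts_b, t_grid))
    above = gap > threshold
    for i in range(len(t_grid) - run):
        if above[i:i + run].all():
            return float(t_grid[i])
    return np.nan

rng = np.random.default_rng(10)
base = lambda n: rng.gamma(9, 0.03, n) + 0.05        # baseline go RTs (s)
signal_absent = base(20000)
inhibited = rng.random(20000) < 0.3                   # automatic-inhibition dip
signal_present = np.where(inhibited, base(20000) + 0.10, base(20000))
t_grid = np.linspace(0.10, 0.60, 500)
print(divergence_point(signal_absent, signal_present, t_grid))  # estimate of T0
```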
Ying-yu Chen
Erin Silvas
Prof. Joe Houpt
General Recognition Theory (GRT) demarcates various types of independence in the multidimensional perceptual decision process and provides empirical methods to detect violations of these independences. These methods are based on the pattern of errors made by participants and therefore require the experiment to be carefully designed so that the level of difficulty is appropriately matched to perceptual ability. The traditional approach to administering a GRT experiment is susceptible to individual differences in perceptual ability, which may cause more data to need to be collected than would otherwise be necessary. To address this problem, we developed an adaptive approach that calibrates the experiment to the individual participant. We previously conducted a simulation study to demonstrate proof of concept and gauge the adaptive method’s statistical properties. In the current work, we present a validation study using human participants. Twenty participants made judgments about separable or integral stimuli. During a short preliminary block, the adaptive algorithm fit a highly constrained Gaussian GRT model to each participant’s responses. Afterwards, the participants performed the typical complete identification task using stimuli predicted by the fitted model to yield a targeted level of accuracy. We observed more violations of marginal response invariance in the integral condition than in the separable condition. We compare the effectiveness of the adaptive approach to a control study where a single set of stimuli was determined via traditional pilot-testing and administered to all participants. Our adaptive approach improves the accessibility and efficiency of GRT studies and facilitates online and replication studies.
Dr. Peter Kvam
Investigating the attention-modulated decision process and attention allocation during decision-making is crucial for understanding human decision-making, learning, and information-search mechanisms. However, these models can be challenging to fit, which has hindered researchers from developing and comparing complex theories using computational modeling. To address this issue, we developed a neural network-based method using deep sets, which makes it possible to accurately fit and compare attention-modulated decision models as well as joint models of attention and decision formation without relying on explicit likelihoods. First, we validated our approach through successful parameter recovery and model recovery to ensure that the models could be meaningfully fit in principle. Second, we benchmarked our method against traditional likelihood-based model fitting techniques, using existing models as testbeds. Third, we applied our method to more complex simulation-based models that lack the likelihoods required for traditional fitting methods. Finally, we examined how well these models could be applied to real data, illustrating that the deep set-based approach is a promising method for making attention-driven decision models more usable and accessible.
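The deep-set idea can be shown in a forward pass: an embedding network phi is applied to every trial, summation pools the embeddings (making the summary invariant to trial order), and a second network rho maps the pooled summary to parameter estimates. Training is omitted here, and the weights, layer sizes, and feature names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def make_mlp(sizes):
    """Small random MLP (weights only; training omitted for brevity)."""
    return [(rng.normal(0, 0.3, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

phi = make_mlp([3, 32, 32])   # per-trial input: e.g., RT, choice, gaze share
rho = make_mlp([32, 32, 2])   # output: e.g., two decision-model parameters

trials = rng.normal(size=(150, 3))              # one participant's trial set
pooled = forward(phi, trials).sum(axis=0)       # permutation-invariant pooling
print(forward(rho, pooled[None, :]))
shuffled = trials[rng.permutation(len(trials))]             # reorder trials...
print(forward(rho, forward(phi, shuffled).sum(axis=0)[None, :]))
# ...same output (up to floating-point rounding): order does not matter
```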
Nathan Lepora
Perceptual decision-making provides a framework for understanding how organisms translate sensory evidence into actions, but traditional models face challenges in explaining choice phenomena and motor integration. Despite evidence of both covert and overt motor processes during deliberation, most frameworks treat movement as merely implementing a completed decision. We explore the relationship between action and decision making by extending a proposed framework for embodied choice and independently varying the influence of motor feedback on internal choice variables and the contribution of evidence to action. This new model, Degenerate Embodied Choice (DEC), arbitrates between parallel and embodied theories of choice. We demonstrate that DEC replicates the speed-accuracy trade-off (SAT) degenerately, with embodiment proving both necessary and unique for trading speed and accuracy across urgent and accuracy-emphasised tasks. DEC emulates empirical data both qualitatively and quantitatively, with model-fitted parameters falling exclusively within the embodied set and producing congruent predictive SAT values within a narrow band. We then introduce the Optimality Framework for Embodied Choice (OFEC) as a lens for examining embodied choice through optimality principles. Our findings suggest that complex decision behaviours can emerge from simple underlying principles, whether through geometric properties of decision boundaries or motor-cognitive integration.