The cognitive processes underlying Go/No-Go performance may be explained by two plausible evidence-accumulation models: the Two-Boundary (2-B) and One-Boundary (1-B) drift diffusion models (DDMs). While both embed a Go decision, the 2-B DDM also embeds a definitive No-Go decision, whereas the 1-B DDM embeds a response window for Go. Using simulations, we found that model comparison methods such as leave-one-out cross-validation (LOO), coupled with Bayesian hierarchical modeling, can correctly identify the underlying model. Additionally, using the correct model reduces the risk of missing true effects or detecting spurious ones. We therefore recommend that researchers implement and compare both models in Go/No-Go studies to reduce misleading results. Finally, we applied these models to investigate race effects in the decision to shoot during police training. We found that the accumulated evidence needed to reach the Shoot decision is lower for Black suspects, which explains the heightened error rates for shooting unarmed Black suspects in the data.
This is an in-person presentation on July 27, 2025 (15:20 ~ 15:40 EDT).
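As a rough illustration of the two accumulation schemes contrasted in the abstract above, the sketch below simulates single Go/No-Go trials under a two-boundary rule and a one-boundary rule with a response deadline. The parameter names and values are placeholders chosen for illustration, not the fitted values or the hierarchical model from the study.

```python
# Minimal, illustrative simulation of the 2-B vs. 1-B Go/No-Go accumulation
# schemes. Parameters (drift, bound, ndt, deadline) are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift, bound, ndt, deadline, one_boundary, dt=0.001, sigma=1.0):
    """Euler-Maruyama walk of a single evidence-accumulation trial.

    2-B rule: 'go' at the upper bound, explicit 'nogo' at the lower bound.
    1-B rule: 'go' at the single upper bound; 'nogo' only if the deadline
    passes without a crossing (a response-window account of No-Go).
    """
    x, t = 0.0, 0.0
    while t < deadline:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return "go", t + ndt
        if (not one_boundary) and x <= -bound:
            return "nogo", t + ndt
    return "nogo", None  # no overt response within the window

# Quick check: Go-response proportions under each scheme for a weak Go drift.
for label, one_b in [("2-B", False), ("1-B", True)]:
    sims = [simulate_trial(drift=0.5, bound=1.0, ndt=0.3, deadline=1.5,
                           one_boundary=one_b) for _ in range(2000)]
    p_go = np.mean([resp == "go" for resp, _ in sims])
    print(f"{label}: P(go) = {p_go:.2f}")
```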
Behavioral adaptation in probabilistic environments requires learning through trial and error. While reinforcement learning (RL) models can describe the temporal development of preferences through error-driven learning, they neglect mechanistic descriptions of single-trial decision-making. Sequential sampling models such as the diffusion decision model (DDM), on the other hand, allow state preferences to be mapped onto single-trial response times. We present a Bayesian hierarchical RL-DDM that integrates temporal-difference (TD) learning to bridge these perspectives. Our implementation incorporates variants of TD learning, including SARSA, Q-learning, and Actor-Critic models. We tested the model on data from N = 58 participants in a two-stage decision-making task. Participants exhibited learning over time, becoming both more accurate and faster in their choices. They also showed a difficulty effect, responding faster and more accurately to easier choices, that is, those with greater subjective value differences between the available options. Model comparison using predictive information criteria and posterior predictive checks indicated that, overall, participants seemed to employ on-policy learning. Furthermore, the RL-DDM captured both the temporal dynamics of learning and the difficulty effect in decision-making. Our work represents an important extension of the RL-DDM to temporal-difference learning.
This is an in-person presentation on July 27, 2025 (15:40 ~ 16:00 EDT).
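For readers unfamiliar with the RL-DDM linkage described in the abstract above, the toy snippet below shows one common way such models are wired together: a temporal-difference (Q-learning) update yields trial-wise subjective values, and the value difference between options is mapped onto the DDM drift rate. The one-step task, random exploration, parameter values, and linear drift mapping are simplifying assumptions, not the authors' implementation of the two-stage task.

```python
# Schematic RL-DDM linkage: TD value updates feed a value-difference drift rate.
# All names and values are illustrative assumptions (Q-learning shown; SARSA
# would use the value of the actually chosen next action instead of the max).
import numpy as np

rng = np.random.default_rng(1)

alpha, gamma, scaling = 0.1, 0.9, 2.0   # learning rate, discount, drift scaling
Q = np.zeros((2, 2))                    # Q[state, action] for a toy task

def td_update_q_learning(s, a, r, s_next, terminal):
    """One off-policy (Q-learning) temporal-difference update."""
    target = r if terminal else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

def drift_rate(s):
    """Drift rate proportional to the subjective value difference in state s."""
    return scaling * (Q[s, 0] - Q[s, 1])

# Toy learning loop: reward probabilities favour action 0 in both states.
p_reward = np.array([[0.8, 0.2], [0.7, 0.3]])
for trial in range(500):
    s = rng.integers(2)
    a = rng.integers(2)                       # random exploration for brevity
    r = float(rng.random() < p_reward[s, a])
    td_update_q_learning(s, a, r, s_next=s, terminal=True)

print("Learned Q values:\n", np.round(Q, 2))
print("Drift rates by state:", [round(drift_rate(s), 2) for s in range(2)])
```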
Go/no-go tasks are used extensively in neuropsychological testing to assess attention and inhibitory control. But because go/no-go tasks have only one response, it is possible that some responses for some subjects are fast guesses. This is critically important because fast responding can be taken as a sign of impaired attention or inhibitory processing. In two-choice tasks, fast guesses can be identified by setting short cutoffs and examining the accuracy of responses below each cutoff: when accuracy, even in the easiest conditions, is at chance, those responses are almost certainly guesses. A similar solution can be used for the go/no-go task using conditional accuracy functions. We used conditional accuracy functions to eliminate fast guesses and fit diffusion models to data from four go/no-go tasks and standard two-choice tasks completed by the same subjects. Nondecision time and starting points differed between the tasks, but unlike earlier modeling, drift rates and the other model parameters were the same for both versions of the task. Anticipations are used in the diagnosis of ADHD, for example, but we found that the proportions of fast guesses for children with ADHD, controls, and undergraduates were similar, which points to potentially serious problems with the use of go/no-go tasks in neuropsychological testing.
This is an in-person presentation on July 27, 2025 (15:00 ~ 15:20 EDT).
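The conditional-accuracy-function screening described in the abstract above can be sketched as follows: bin responses by response time, estimate accuracy within each bin, and place a cutoff where accuracy departs from chance. The simulated data, bin count, and chance criterion below are illustrative assumptions rather than the authors' procedure.

```python
# Minimal sketch of fast-guess screening via a conditional accuracy function.
# Data are simulated; the 0.55 "near chance" criterion is an assumed threshold.
import numpy as np

rng = np.random.default_rng(2)

# Simulated two-choice data: 10% fast guesses (fast RTs, chance accuracy)
# mixed with genuine responses (slower RTs, high accuracy).
n = 5000
is_guess = rng.random(n) < 0.10
rt = np.where(is_guess, rng.uniform(0.15, 0.30, n), 0.30 + rng.gamma(2.0, 0.15, n))
correct = np.where(is_guess, rng.random(n) < 0.5, rng.random(n) < 0.85)

# Conditional accuracy function: accuracy within successive RT quantile bins.
edges = np.quantile(rt, np.linspace(0, 1, 21))
bin_idx = np.clip(np.searchsorted(edges, rt, side="right") - 1, 0, len(edges) - 2)
caf = np.array([correct[bin_idx == b].mean() for b in range(len(edges) - 1)])

# Cutoff: upper edge of the last bin whose accuracy is still near chance (0.5).
near_chance = np.where(caf < 0.55)[0]
cutoff = edges[near_chance.max() + 1] if near_chance.size else edges[0]
kept = rt >= cutoff
print(f"RT cutoff = {cutoff:.3f} s; excluded {np.mean(~kept):.1%} of trials as fast guesses")
```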