Learning & Updating
Wolf Vanpaemel
Francis Tuerlinckx
Prof. Jonas Zaman
Modern perceptual theories suggest that human perception is not a static or direct reflection of the physical world. Instead, our brains translate sensory inputs into probabilistic mental representations, shaped by prior experiences and contextual influences. This perspective challenges traditional associative learning theories, which assume that individuals have perfect and direct access to physical reality when forming associations. These classical theories also rely on predefined similarity functions to account for generalization, failing to reflect the inherent uncertainty and variability in perception. To address these limitations, we propose the Distributional Perceptual Knowledge Mapping (DPKM) model, which posits that perception and associative learning occur within a shared mental space. In this framework, learned associations are shaped by the same probabilistic processes that govern perception, rather than being independent of them. As a result, generalization emerges naturally as a consequence of the stochastic structure of mental representations, eliminating the need for an explicit similarity function. This approach provides a more ecologically valid understanding of how learning adapts to perceptual uncertainty, offering insights into fundamental cognitive mechanisms. By integrating perception and learning within a unified theoretical model, DPKM offers a novel perspective on how knowledge is acquired, structured, and generalized in the mind.
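A minimal illustrative sketch of this idea, assuming a one-dimensional perceptual space and arbitrary noise and learning-rate values (this is not the authors' DPKM implementation): associations are learned to noisy percepts in a shared mental space, and a graded generalization gradient emerges even though the learning rule itself contains no similarity function.

```python
# Illustrative sketch (assumed parameters, not the authors' DPKM model):
# associations attach to noisy percepts in a shared mental space, and
# generalization to novel stimuli emerges from overlapping percept
# distributions rather than from an explicit similarity function.
import numpy as np

rng = np.random.default_rng(0)

space = np.linspace(0.0, 1.0, 101)      # discretized mental (perceptual) space
sigma = 0.08                             # assumed perceptual noise
alpha = 0.3                              # assumed delta-rule learning rate
strength = np.zeros_like(space)          # associative strength per mental-space location

def percept(stimulus):
    """A single noisy percept of a physical stimulus value."""
    return stimulus + sigma * rng.standard_normal()

# Conditioning: pair the stimulus at 0.5 with an outcome for 200 trials.
for _ in range(200):
    p = percept(0.5)
    i = np.argmin(np.abs(space - p))            # the location actually experienced
    strength[i] += alpha * (1.0 - strength[i])  # local update, no similarity kernel

# Test: the expected response to a novel stimulus averages associative strength
# over the percepts it evokes, producing a smooth generalization gradient.
for test in [0.3, 0.4, 0.5, 0.6, 0.7]:
    percepts = test + sigma * rng.standard_normal(2000)
    idx = np.clip(np.searchsorted(space, percepts), 0, len(space) - 1)
    print(f"stimulus {test:.1f}: expected response {strength[idx].mean():.2f}")
```

In this sketch the gradient's width is inherited entirely from the perceptual noise, which is the sense in which generalization falls out of the stochastic structure of the representations rather than from a predefined similarity function.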
Dr. Dora Matzke
Individual differences in human abilities are traditionally treated as having two components: fluctuating states, and traits that are stable over time apart from very slow developmental and aging changes. Both biological and psychological sources, such as circadian rhythms, sleep debt, affective states, and learning and forgetting, cause fluctuations on scales ranging from seconds, minutes and hours to days, weeks and longer time periods. We investigate the implications of multiple scales of temporal variation in states for the measurement of human abilities. We show that multi-scale variation implies that both test-retest reliability and external validity can be improved, relative to traditional single-occasion testing, by briefer measurements on multiple occasions. We discuss how these results can be used to optimise the new ecological momentary assessment opportunities afforded by mobile measurement technologies.
This is an in-person presentation on July 28, 2025 (15:00 ~ 15:20 EDT).
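A small simulation sketch of the central claim in the abstract above, using assumed variance components and trial counts (not the authors' analysis): when scores reflect a stable trait plus occasion-specific state fluctuations, splitting a fixed trial budget across several brief occasions averages over states and raises test-retest reliability relative to a single long session.

```python
# Illustrative simulation (assumed variance components, not the authors' analysis):
# scores = trait + day-level state + trial noise. Spreading the same total number
# of trials over more occasions averages over states and improves test-retest r.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_trials_total = 500, 120
sd_trait, sd_state, sd_trial = 1.0, 0.8, 2.0   # assumed standard deviations

def test_score(trait, n_occasions):
    """Score from n_occasions days, with n_trials_total trials in total."""
    per_occ = n_trials_total // n_occasions
    states = sd_state * rng.standard_normal((len(trait), n_occasions))
    noise = sd_trial * rng.standard_normal((len(trait), n_occasions)) / np.sqrt(per_occ)
    return (trait[:, None] + states + noise).mean(axis=1)

trait = sd_trait * rng.standard_normal(n_people)

for k in [1, 4, 12]:                            # number of measurement occasions
    t1, t2 = test_score(trait, k), test_score(trait, k)   # two independent administrations
    r = np.corrcoef(t1, t2)[0, 1]
    print(f"{k:2d} occasion(s) of {n_trials_total // k} trials: test-retest r = {r:.2f}")
```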
Alexander Fengler
Dr. Michael Frank
In cognitive neuroscience, there has been growing interest in adopting sequential sampling models (SSMs) as the choice function for reinforcement learning (RLSSM), opening up new avenues for exploring generative processes that can jointly account for decision dynamics within and across trials. To date, such approaches have been limited by computational tractability, owing to the lack of closed-form likelihoods for the decision process and the expensive trial-by-trial evaluation of complex reinforcement learning (RL) processes. By combining differentiable RL likelihoods with Likelihood Approximation Networks (LANs), and leveraging gradient-based inference methods such as Hamiltonian Monte Carlo (HMC) and Variational Inference (VI), we enable fast and efficient hierarchical Bayesian estimation for a broad class of RLSSM models. Exploiting the differentiability of RL likelihoods improves scalability and enables faster convergence with gradient-based optimizers or MCMC samplers for complex RL processes. To showcase the combination of these approaches, we consider the Reinforcement Learning - Working Memory (RLWM) task and its model of multiple interacting generative learning processes, which we combine with decision-process modules via LANs. We show that this approach can be combined with hierarchical variational inference to accurately recover posterior parameter distributions in arbitrarily complex RLSSM paradigms, whereas fitting a choice-only model yields a biased estimator of the true generative process. Our method allows us to uncover a hitherto undescribed cognitive process within the RLWM task, whereby participants proactively adjust the boundary threshold of the choice process as a function of working memory load.
This is an in-person presentation on July 28, 2025 (15:20 ~ 15:40 EDT).
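As a sketch of one ingredient of the approach in the abstract above, the code below writes a simple Q-learning choice log-likelihood with the JAX library, so that exact gradients with respect to the parameters are available to gradient-based optimizers or samplers (HMC, VI). The data, parameterization, and constraints are assumed for illustration; this is not the authors' RLSSM/LAN pipeline, in which the softmax choice rule below would be replaced by an SSM likelihood approximated by a LAN.

```python
# Minimal sketch of a differentiable RL likelihood (not the authors' RLSSM/LAN code):
# delta-rule Q-learning with a softmax choice rule, written in JAX so that gradients
# with respect to the parameters are exact and cheap to compute.
import jax
import jax.numpy as jnp

def rl_loglik(params, choices, rewards, n_actions=2):
    """Summed log-likelihood of observed choices under delta-rule Q-learning."""
    alpha, beta = jax.nn.sigmoid(params[0]), jnp.exp(params[1])  # constrain to valid ranges

    def trial(q, data):
        choice, reward = data
        logp = jax.nn.log_softmax(beta * q)[choice]         # choice probability from current Q
        q = q.at[choice].add(alpha * (reward - q[choice]))  # delta-rule value update
        return q, logp

    _, logps = jax.lax.scan(trial, jnp.zeros(n_actions), (choices, rewards))
    return jnp.sum(logps)

# Toy data (hypothetical), used only to show that the likelihood and its gradient
# are available to gradient-based hierarchical Bayesian estimation.
choices = jnp.array([0, 1, 1, 0, 1, 1, 1, 0])
rewards = jnp.array([1., 0., 1., 0., 1., 1., 0., 1.])
params = jnp.array([0.0, 0.0])   # unconstrained learning rate and inverse temperature

print(rl_loglik(params, choices, rewards))
print(jax.grad(rl_loglik)(params, choices, rewards))
```

Because the trial-by-trial recursion is expressed as a scan over pure functions, the same structure scales to more elaborate learning modules (such as interacting RL and working-memory processes) while keeping gradients available for HMC or VI.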