Interruptions are a fundamental challenge requiring individuals to efficiently manage task switching and cognitive load. Artificial Intelligence (AI) assistants aim to mitigate these challenges, yet their effectiveness is often hindered by poorly timed or irrelevant interventions. This study explores the use of cognitive modeling to predict human behavior in interruption-prone environments, with the goal of enhancing AI-driven assistance. A controlled cooking task experiment was conducted, in which participants completed a four-step recipe while interacting with a robotic assistant that provided task suggestions of varying usefulness. An ACT-R-based cognitive model, grounded in the memory for goals framework, was developed to simulate human task-switching behavior. Model validation followed an iterative refinement process, comparing predictions against human data. The findings indicate that the cognitive model closely approximates human task execution times and adapts to robot suggestions of differing usefulness. The study demonstrates the feasibility of using cognitive models to predict human behavior in interruption tasks, which could be used to improve human-robot interaction.
Humans live in the flow of time. However, the perception of time varies between individuals and changes depending on the situation. This variability is believed to be closely related to emotions. For example, people tend to perceive time as passing quickly when they are having fun, whereas they may feel it slows down when they are anxious. Based on this relationship, this study reports a simulation examining the effects of anxiety on perceptual-motor tasks from the perspective of time perception. Previous research on time perception suggests that anxiety increases arousal, which in turn makes time feel like it is passing more slowly. In particular, individuals with high trait anxiety are more sensitive to environmental changes, leading to more pronounced distortions in time perception. Considering these findings, this study conducted a simulation using the cognitive architecture ACT-R to investigate how anxiety affects perceptual-motor tasks. The modeling of anxiety in this study is based on the anticipation of failure-related memories. The simulation results confirmed that freezing behavior occurred in response to the task, affecting the model's performance.
AI tools offer the promise of individualized training and support in a variety of tasks, but to determine when and what kind of assistance to provide, such tools must gauge the level of difficulty each action poses to an individual subject. This is made more challenging for tasks that take place in dynamic environments where many small-scale actions contribute to a long-term goal, but no individual action can be objectively labelled as "correct". Here, we attempt to map the difficulty of just such a task, the video game Tetris. Using a simple model capable of high-level, human-like performance, we take advantage of the model's ability to evaluate all possible actions to determine not only which is best, but also how that action relates to the other available actions at each decision point. Decisions with a single action rated definitively higher than any other option are considered easy, and decision difficulty increases with the number of plausible actions. We examine the incidence rates of decisions at increasing difficulty levels, and how these patterns vary across players of different skill levels.
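The difficulty measure described above can be sketched in a few lines: given the model's ratings of all available actions, count how many actions score within some margin of the best one. A single plausible action means an easy decision; difficulty grows with the count. The function name, score values, and margin below are illustrative assumptions, not the paper's actual model or parameters.

```python
def decision_difficulty(action_scores, margin=0.1):
    """Count 'plausible' actions: those scoring within `margin`
    of the best-rated action. One plausible action = an easy
    decision; difficulty increases with the count."""
    if not action_scores:
        raise ValueError("no actions to evaluate")
    best = max(action_scores)
    plausible = [s for s in action_scores if best - s <= margin]
    return len(plausible)

# One clear winner -> easy decision
print(decision_difficulty([0.9, 0.4, 0.3]))         # 1
# Several near-equal options -> harder decision
print(decision_difficulty([0.9, 0.88, 0.85, 0.2]))  # 3
```

The margin threshold stands in for whatever tie-breaking criterion the model uses; the key idea is only that difficulty is a property of the score distribution at each decision point, not of any single action.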
Humans' systematic cognitive processes are vulnerable to misinformation-related effects. For example, misleading information can have a lasting influence even after it has been corrected. This continued influence effect (CIE) has proven resistant to mitigation in experiments. Leading explanations are memory-based, but some incorporate emotion and/or reasoning. However, mixed findings have hindered our understanding of the phenomenon. We argue that cognitive models are uniquely suited to help clarify these mixed findings and theories through specification of underlying mechanisms, testing of hypotheses, and identification of why and when mitigations are effective. We start by discussing relevant experimental findings, then present an updated cognitive model of the CIE, compare model fits to two experiments across three model variations, and discuss the results along with recent exploratory analyses of previous experiments.