In this talk, I will explore applications of the visual analytics method Recurrence Quantification Analysis (RQA) to choice sequences and other discrete behavioral time series. Choice sequences are often examined through aggregate behavior statistics, like choice proportions, or proxy summary statistics, like points earned. But in the process of aggregation, much information about behavioral dynamics is lost. Yet our descriptions of choice strategies, like “win-stay-lose-shift”, are statements about the behavioral dynamics; they suggest specific patterns that should be observed in the sequences. Auto-RQA helps us characterize individual sequences in ways that highlight important aspects of behavioral dynamics, such as short-range switching between options and, when present, longer time-scale adaptations or shifts in preferences. Cross-RQA provides tools for comparing observed behaviors to specific strategies. I will discuss implications of using RQA for model selection and for informing intelligent machines for adaptive decision aiding and human-autonomy teaming.
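The abstract does not include an implementation, but for categorical data the core construction is simple: two time points recur whenever the same option was chosen at both. The sketch below is a minimal illustration, not the speaker's code; the binary choice sequence and function names are invented, and the recurrence rate is used as the simplest RQA measure.

```python
import numpy as np

def recurrence_matrix(seq_a, seq_b=None):
    """Recurrence matrix for discrete (categorical) sequences.

    Auto-RQA when only seq_a is given; cross-RQA when seq_b is also supplied
    (e.g., an observed choice sequence vs. a sequence generated by a strategy).
    Two time points recur when the choices match exactly.
    """
    a = np.asarray(seq_a)
    b = a if seq_b is None else np.asarray(seq_b)
    return (a[:, None] == b[None, :]).astype(int)

def recurrence_rate(rm, exclude_diagonal=True):
    """Proportion of recurrent points, one of the basic RQA measures."""
    rm = rm.copy()
    if exclude_diagonal and rm.shape[0] == rm.shape[1]:
        np.fill_diagonal(rm, 0)
        return rm.sum() / (rm.size - rm.shape[0])
    return rm.sum() / rm.size

# Hypothetical binary choice sequence (0 = option A, 1 = option B)
choices = np.array([0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1])
print(recurrence_rate(recurrence_matrix(choices)))
```

Line-based RQA measures such as determinism and laminarity, which speak to the switching and preference-shift patterns mentioned above, are computed from the diagonal and vertical line structures of this same matrix.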
Human operators often perform signal detection tasks with assistance from automated aids. Unfortunately, users tend to disuse aids that are less than perfectly accurate (Parasuraman & Riley, 1997), disregarding the aids' advice even when it might be helpful. To facilitate cost-benefit analyses of automated signal detection aids, we benchmarked the performance of human-automation teams against the predictions of various models of information integration. Participants performed a binary signal detection task, with and without assistance from an automated aid. On each trial, the aid provided the participant with a binary judgment along with an estimate of its certainty. The comparison models ranged from perfectly efficient to highly inefficient. Even with an automated aid of fairly high sensitivity (d' = 3), performance of the human-automation teams was poor, approaching the predictions of the least efficient comparison models, and the efficiency of the human-automation teams was substantially lower than that achieved by pairs of human collaborators. The data indicate strong automation disuse and provide guidance for estimating the benefits of automated detection aids.
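The abstract does not spell out the comparison models; as a rough illustration of how such benchmarks are commonly computed in the signal detection framework (not necessarily the exact models used in this work), the sketch below derives d' from hit and false-alarm rates, the prediction of a perfectly efficient team that optimally integrates two independent observers, and an efficiency ratio. All numerical values other than the aid's d' = 3 are invented.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity from hit and false-alarm rates: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def ideal_team_d_prime(d_human, d_aid):
    """Perfectly efficient benchmark: optimal integration of two
    independent Gaussian observers."""
    return np.sqrt(d_human ** 2 + d_aid ** 2)

def efficiency(d_observed, d_ideal):
    """Standard SDT efficiency: squared ratio of observed to ideal sensitivity."""
    return (d_observed / d_ideal) ** 2

d_human = d_prime(0.80, 0.20)       # hypothetical unaided human, ~1.68
d_aid = 3.0                         # aid sensitivity, as stated in the abstract
d_team_obs = d_prime(0.85, 0.18)    # hypothetical aided (team) performance
d_team_ideal = ideal_team_d_prime(d_human, d_aid)
print(d_team_ideal, efficiency(d_team_obs, d_team_ideal))
```

A highly inefficient benchmark would be, for example, a team whose sensitivity is no better than that of the unaided human alone.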
Recent research in cybersecurity has begun to develop active defense strategies using game-theoretic optimization of the allocation of limited defenses combined with deceptive signaling. These algorithms assume rational human behavior. However, human behavior in an online game designed to simulate an insider attack scenario shows that humans, playing the role of attackers, attack far more often than predicted under perfect rationality. We describe an instance-based learning cognitive model, built in ACT-R, that accurately predicts human performance and biases in the game. To improve defenses, we propose an adaptive method of signaling that uses the cognitive model to trace an individual’s experience in real time. We discuss the results and implications of this adaptive signaling method for personalized defense.
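To make the instance-based learning mechanism concrete, the sketch below shows a generic IBL/ACT-R blending computation: base-level activation of stored instances, retrieval probabilities, and an outcome expectation as the activation-weighted blend. This is a schematic illustration under standard IBL assumptions, not the authors' actual model; all parameter values and instances are invented.

```python
import numpy as np

def activation(occurrence_times, now, d=0.5, rng=None, noise_sd=0.25):
    """ACT-R base-level activation with optional transient (logistic) noise."""
    base = np.log(np.sum((now - np.asarray(occurrence_times, dtype=float)) ** (-d)))
    noise = rng.logistic(0, noise_sd) if rng is not None else 0.0
    return base + noise

def blended_value(instances, now, temperature=0.25, d=0.5, rng=None):
    """Blend stored outcomes for one option, weighting each instance by its
    retrieval probability (softmax over activations).

    `instances` is a list of (outcome, occurrence_times) pairs.
    """
    acts = np.array([activation(times, now, d, rng) for _, times in instances])
    probs = np.exp(acts / temperature)
    probs /= probs.sum()
    outcomes = np.array([outcome for outcome, _ in instances])
    return float(np.dot(probs, outcomes))

# Hypothetical instances for an "attack" option: outcomes and when they occurred.
# A decision would compare blended values across the available options.
rng = np.random.default_rng(1)
attack_instances = [(10.0, [2, 5]), (-5.0, [7]), (10.0, [9])]
print(blended_value(attack_instances, now=12, rng=rng))
```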
Cognitive-theory-driven approaches may allow performance and cognitive processes to be evaluated with more rigor and precision than the procedures and metrics currently used in human factors research and application. A mathematical modeling approach allows both for measures that are more theoretically meaningful than raw accuracy or response time (RT) and for insight into the aspects of the cognitive process that may have led to better or worse performance. Extending the modeling approaches developed in mathematical psychology to applied environments may inform display design and multitask combinations, assist adaptive automation, or supply pertinent feedback in real time. In this talk, I demonstrate a few applications of mathematical models to inform human-centered design: the evaluation of multispectral fusion techniques, the estimation of efficiency to compare multitask configurations, and the influence of task load on multitasking efficiency and management strategies. Each of these modeling approaches provides additional insights beyond traditional analyses. In conclusion, I illustrate how time-varying mathematical models can serve as useful online tools for evaluating cognitive processes and performance.
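The abstract does not name the efficiency measure; one standard candidate from this literature is the workload capacity coefficient, which compares dual-task response-time distributions to single-task baselines via cumulative hazard functions. The sketch below is a minimal illustration under that assumption (an OR, first-terminating design), with simulated response times standing in for real data.

```python
import numpy as np

def cumulative_hazard(rts, t_grid):
    """Empirical cumulative hazard H(t) = -ln S(t) from response times."""
    rts = np.sort(np.asarray(rts, dtype=float))
    surv = 1.0 - np.searchsorted(rts, t_grid, side="right") / len(rts)
    surv = np.clip(surv, 1e-6, 1 - 1e-6)   # avoid log(0) and 0/0 at the extremes
    return -np.log(surv)

def capacity_or(rt_dual, rt_single_a, rt_single_b, t_grid):
    """Workload capacity coefficient for an OR (first-terminating) design:
    C(t) > 1 super-capacity, C(t) = 1 unlimited, C(t) < 1 limited capacity."""
    h_ab = cumulative_hazard(rt_dual, t_grid)
    h_a = cumulative_hazard(rt_single_a, t_grid)
    h_b = cumulative_hazard(rt_single_b, t_grid)
    return h_ab / (h_a + h_b)

# Simulated RT samples (seconds) for the dual-task and each single-task condition
rng = np.random.default_rng(0)
t = np.linspace(0.4, 1.5, 50)
c = capacity_or(rng.gamma(4, 0.12, 200) + 0.2,
                rng.gamma(4, 0.15, 200) + 0.2,
                rng.gamma(4, 0.15, 200) + 0.2, t)
print(c[:5])
```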
Subject matter expert (SME) knowledge is often an integral component of multidisciplinary analyst teams. SMEs can provide proper context, meaning, and additional insight on data received from the real world. We use conjoint analysis to elicit SME expertise across various scenarios. Conjoint analysis provides a means to rank knowledge graph elements and to determine node-level, edge-level, and subgraph-level (event-level) weights. I will discuss findings for this novel application to graph data and potential use cases for such rankings.
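The abstract does not detail the elicitation design; the sketch below shows a generic main-effects conjoint analysis, estimating part-worth utilities by least squares from SME ratings of attribute-level combinations and converting part-worth ranges into relative attribute importances. The attributes, levels, and ratings are invented for illustration and are not the study's data.

```python
import numpy as np
import pandas as pd

# Hypothetical SME ratings of knowledge-graph element "profiles"
# (each profile combines one node type and one edge type).
profiles = pd.DataFrame({
    "node_type": ["person", "person", "location", "location", "device", "device"],
    "edge_type": ["owns", "visited", "owns", "visited", "owns", "visited"],
    "rating":    [8, 6, 5, 7, 4, 3],   # SME importance judgments on a 1-10 scale
})

# Dummy-code attribute levels and fit a main-effects part-worth model by least squares.
X = pd.get_dummies(profiles[["node_type", "edge_type"]]).astype(float)
beta, *_ = np.linalg.lstsq(X.values, profiles["rating"].astype(float).values, rcond=None)
part_worths = pd.Series(beta, index=X.columns)

# Relative attribute importance: range of part-worths within an attribute,
# normalized across attributes.
attribute = part_worths.index.str.rsplit("_", n=1).str[0]   # "node_type" / "edge_type"
ranges = part_worths.groupby(attribute).agg(lambda s: s.max() - s.min())
print(part_worths, ranges / ranges.sum(), sep="\n\n")
```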
Recent advances in neural networks and deep reinforcement learning (e.g., for image/video classification, natural language processing, autonomy, and other applications) have begun to produce AI systems that are highly capable but often fail in unexpected ways that are hard to understand. Because of their complexity and opaqueness, an Explainable AI community has re-emerged with the goal of developing algorithms that help developers, users, and other stakeholders understand how these systems work. However, the explanations produced by these systems are generally not guided by psychological theory, but rather by unprincipled notions of what might be effective at helping a user understand a complex system. To address this, we have developed a psychological theory of explanation implemented as a mathematical/computational model. The model describes how users engage in sensemaking and learning to develop a mental model of a complex process, with two levels of learning that map onto System 1 (intuitive, feedback-based tuning of a mental model) and System 2 (construction, reconfiguration, and hypothesis testing of a mental model) processes. These elements of explanatory reasoning map onto two important areas of research within the mathematical psychology community: feedback-based cue/category learning (e.g., Gluck & Bower, 1988) and knowledge-space descriptions of learning (Doignon & Falmagne, 1985). We will describe a mathematical/computational model that integrates these two levels, and discuss how this model enables better understanding of the explanation needed for various AI systems. This work was done in collaboration with Lamia Alam, Tauseef Mamun, Robert R. Hoffman, and Gary L. Klein.
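To make the System 1 (feedback-based tuning) level concrete, the sketch below implements a simple delta-rule (LMS) cue-learning network in the spirit of Gluck & Bower (1988). It illustrates only that component, not the authors' integrated model; the cue patterns, outcome rule, and learning rate are invented.

```python
import numpy as np

def lms_training(cues, outcomes, lr=0.05, n_passes=20):
    """Delta-rule (LMS) learning of cue-outcome association weights,
    in the spirit of Gluck & Bower's (1988) adaptive network model."""
    w = np.zeros(cues.shape[1])
    for _ in range(n_passes):
        for x, t in zip(cues, outcomes):
            w += lr * (t - w @ x) * x    # error-driven update toward the feedback
    return w

def choice_probability(w, x, scale=2.0):
    """Map the network's summed output to a response probability via a logistic rule."""
    return 1.0 / (1.0 + np.exp(-scale * (w @ x)))

# Hypothetical trials: binary cue patterns paired with binary outcomes,
# where the outcome is driven mostly by the first cue.
rng = np.random.default_rng(0)
cues = rng.integers(0, 2, size=(40, 4)).astype(float)
outcomes = (cues[:, 0] == 1).astype(float)
w = lms_training(cues, outcomes)
print(w, choice_probability(w, cues[0]))
```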