Algorithm aversion, the tendency to favor human judgment over algorithmic recommendations despite evidence of superior algorithmic performance, poses significant challenges in corporate decision-making. This research explores the factors underlying this phenomenon in management and accounting, with a particular focus on cost forecasting. Grounded in a decision-theoretic perspective, this work introduces an extended framework that accounts for: (1) the varying performance requirements across business contexts and their implications for algorithm acceptance; (2) the differentiation between decision-support and delegation systems, highlighting that in corporate decision-making the boundaries between these alternatives are often fluid, making a strict classification challenging; (3) the integration of multiple influencing factors by distinguishing individual error terms instead of aggregating them, allowing for a more precise analysis of different sources of uncertainty; and (4) the role of individual capabilities, incorporating perceived competence as a key determinant in the adoption process. Through a structured mathematical analysis of the psychological decision-making process, this research investigates how human preferences, individual capabilities, and organizational as well as technological constraints jointly shape decisions to adopt or reject algorithms in managerial contexts. By examining these dynamics, the study not only deepens the understanding of algorithm aversion but also identifies practical ways to reduce its impact and encourage broader adoption. By aligning theoretical insights with practical business needs, the research aims to equip companies with the knowledge needed to integrate algorithmic tools into their decision-making processes while addressing potential barriers to acceptance.
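The abstract does not reproduce the formal model, but the kind of adoption criterion such a framework suggests can be sketched as follows. This is a minimal illustration only: the separate error terms, the tolerance τ, and the competence-dependent subjective error are assumed notation, not the authors' formulation.

```latex
% Illustrative adoption criterion (assumed notation, not the paper's):
%   \varepsilon_{\mathrm{model}}, \varepsilon_{\mathrm{data}} : separate algorithmic error sources
%   \varepsilon_{\mathrm{human}} : error of unaided judgment
%   \tau : context-specific performance requirement (tolerance)
%   c \in [0,1] : perceived own competence
\[
\text{adopt algorithm} \iff
\mathbb{E}\big[\lvert \varepsilon_{\mathrm{model}}\rvert\big]
 + \mathbb{E}\big[\lvert \varepsilon_{\mathrm{data}}\rvert\big] \;\le\; \tau
\quad\text{and}\quad
\mathbb{E}\big[\lvert \varepsilon_{\mathrm{model}}\rvert\big]
 + \mathbb{E}\big[\lvert \varepsilon_{\mathrm{data}}\rvert\big]
 \;<\; \hat{\varepsilon}_{\mathrm{human}}(c),
\]
```

where \(\hat{\varepsilon}_{\mathrm{human}}(c)\) denotes the decision maker's subjective estimate of their own error, assumed to decrease as perceived competence \(c\) rises; keeping the algorithmic error terms separate is what allows the different sources of uncertainty to be analyzed individually.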
In modern safety-critical workplaces, individuals are often required to perform concurrent tasks, including unaided tasks and tasks supported by automated decision aids. We present an integrated computational model of how people use automated decision aids when multi-tasking. We test the model using a multi-tasking paradigm involving an aided ongoing task and a concurrent unaided prospective memory (PM) task. We find that several interacting cognitive mechanisms underlie performance of the concurrent unaided PM task and use of the automated decision aid. Providing an automated decision aid slowed the rate of evidence accumulation for the concurrent unaided PM task. Automation provision increased (excited) accumulation for ongoing task responses congruent with automated advice and decreased (inhibited) accumulation for incongruent responses, which improved accuracy and reduced response times when the automation-aided task was performed alone. When multi-tasking, participants controlled the balance of excitation and inhibition to facilitate concurrent unaided PM task completion. When provided with automated advice, participants reduced their aided ongoing task and unaided PM task thresholds in a manner consistent with increased automation reliance. Our findings have implications for automation design in work settings involving multi-tasking.
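The qualitative mechanism described here (advice-congruent excitation, incongruent inhibition, and lowered thresholds under reliance) can be illustrated with a toy two-accumulator race. This is a minimal sketch assuming a simple racing-accumulator architecture; the parameter values and the size of the advice effects are illustrative, not the paper's fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(drift_correct=1.0, drift_error=0.6, threshold=1.5,
               advice=None, excitation=0.3, inhibition=0.3,
               dt=0.01, noise=0.35):
    """One trial of a two-accumulator race ('correct' vs. 'error' response).
    Advice congruent with the correct response excites that accumulator,
    inhibits the other, and lowers the threshold (greater reliance on the aid).
    All parameter values are illustrative assumptions."""
    drifts = np.array([drift_correct, drift_error])
    if advice == "correct":
        drifts += np.array([excitation, -inhibition])
        threshold -= 0.2
    evidence = np.zeros(2)
    t = 0.0
    while evidence.max() < threshold:
        evidence += drifts * dt + rng.normal(0.0, noise * np.sqrt(dt), size=2)
        evidence = np.maximum(evidence, 0.0)   # accumulators cannot go negative
        t += dt
    return int(np.argmax(evidence) == 0), t    # (correct response?, response time)

for condition in (None, "correct"):
    trials = np.array([race_trial(advice=condition) for _ in range(2000)])
    label = "unaided" if condition is None else "with advice"
    print(f"{label:12s} accuracy={trials[:, 0].mean():.3f}  mean RT={trials[:, 1].mean():.2f}s")
```

Running the simulation reproduces the single-task pattern the abstract reports: with congruent advice, the accuracy of the race improves and mean response time falls.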
In human-AI decision-making, understanding the factors that maximize overall accuracy remains a critical challenge. This study highlights the role of metacognitive sensitivity—the agent's ability to assign confidence scores that reliably distinguish between correct and incorrect predictions. We propose a theoretical framework to evaluate the impact of accuracy and metacognitive sensitivity in hybrid decision-making contexts. Our analytical results establish conditions under which an agent with lower accuracy but higher metacognitive sensitivity can enhance overall decision accuracy when paired with another agent. Empirical analyses on a real-world image classification dataset confirm that stronger metacognitive sensitivity—whether in AI or human agents—can improve joint decision outcomes. These findings advocate for a more comprehensive approach to evaluating AI and human collaborators, emphasizing the joint optimization of accuracy and metacognitive sensitivity for enhanced decision-making.
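A small simulation makes the central claim tangible, purely as an illustration and not the paper's framework or dataset: pair a more accurate agent whose confidence barely tracks correctness with a less accurate agent whose confidence does, let the pair defer to whichever agent reports higher confidence, and joint accuracy exceeds either agent alone. The accuracies, the sensitivity parameter, and the confidence-arbitration rule below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
truth = rng.integers(0, 2, N)          # ground-truth binary labels

def simulate_agent(truth, accuracy, sensitivity, rng):
    """Predictions at a fixed accuracy, with confidence scores whose separation
    between correct and incorrect trials grows with `sensitivity` (a simple
    stand-in for metacognitive sensitivity; all values are illustrative)."""
    correct = rng.random(truth.size) < accuracy
    preds = np.where(correct, truth, 1 - truth)
    conf = np.clip(0.5 + sensitivity * (correct - 0.5)
                   + rng.normal(0.0, 0.15, truth.size), 0.0, 1.0)
    return preds, conf

# Agent A: more accurate, but confidence barely tracks correctness.
# Agent B: less accurate, but with sharp metacognitive sensitivity.
preds_a, conf_a = simulate_agent(truth, accuracy=0.80, sensitivity=0.05, rng=rng)
preds_b, conf_b = simulate_agent(truth, accuracy=0.72, sensitivity=0.60, rng=rng)

# Joint rule: defer to whichever agent reports the higher confidence.
joint = np.where(conf_a >= conf_b, preds_a, preds_b)

print("A alone :", (preds_a == truth).mean())
print("B alone :", (preds_b == truth).mean())
print("joint   :", (joint == truth).mean())
```

Because agent B's confidence reliably flags its own errors, the pair follows B when B is right and falls back on A when B is not, which is the mechanism behind the analytical conditions described in the abstract.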
Fundamental choice axioms, such as transitivity of preference and other rationality axioms, provide testable conditions for determining whether human decision making is rational, i.e., consistent with a utility representation. Recent work has demonstrated that AI systems trained on human data can exhibit reasoning biases similar to those of humans and that AI can, in turn, bias human judgments through AI recommendation systems. We evaluate the rationality of AI responses via a series of choice experiments designed to test rationality axioms. We considered ten versions of Meta's Llama 2 and 3 LLM models. We applied Bayesian model selection to evaluate whether these AI-generated choices violated two prominent models of transitivity. We found that the Llama 2 and 3 models generally satisfied transitivity, but when violations did occur, they occurred only in the Chat/Instruct versions of the LLMs. We argue that rationality axioms, such as transitivity of preference, can be useful for evaluating and benchmarking the quality of AI-generated responses and provide a foundation for understanding computational rationality in AI systems more generally.
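For concreteness, one such axiom, weak stochastic transitivity, can be checked descriptively on pairwise choice proportions estimated from repeated binary-choice prompts. The matrix below is hypothetical, and this point check is only an illustration; the study's actual analysis uses Bayesian model selection over two transitivity models, which is not reproduced here.

```python
from itertools import permutations
import numpy as np

# Hypothetical choice proportions among three options (row chosen over column),
# as might be estimated from repeated binary-choice prompts; values are made up.
P = np.array([
    [np.nan, 0.78,   0.61],
    [0.22,   np.nan, 0.70],
    [0.39,   0.30,   np.nan],
])

def wst_violations(P):
    """Weak stochastic transitivity: for every triple, P(x>y) >= .5 and
    P(y>z) >= .5 should imply P(x>z) >= .5. Returns any violating triples.
    A descriptive check only, not the paper's Bayesian model selection."""
    return [(x, y, z)
            for x, y, z in permutations(range(P.shape[0]), 3)
            if P[x, y] >= 0.5 and P[y, z] >= 0.5 and P[x, z] < 0.5]

print(wst_violations(P) or "no weak stochastic transitivity violations")
```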