Perspectives
State-trace analysis (STA; Bamber, 1979) is a method for determining the number of underlying parameters or latent variables that are varying across two or more dependent variables. The method operates on a data space, which includes a dimension for every dependent variable. This talk will describe several recent results that considerably generalize and refine STA. First, STA is generalized to an arbitrary number of tasks by showing that, under very weak conditions, any model in which r free parameters vary across N dependent variables (where N >= r) predicts that the resulting state-trace plot will be r-dimensional. Next, the talk will address the question of whether STA can identify the number of underlying cognitive systems that produced the data. This part of the talk will begin by proposing a formal definition of a cognitive system and then show that unless a two-dimensional state-trace plot is a perfect vertical or horizontal line, STA can provide no information about the number of underlying systems. The final portion of the talk will show that the length of the state-trace plot produced by a model when a single parameter is varied provides a measure of the relative contribution of that parameter to the model’s overall flexibility or complexity. For example, this method shows that the d' parameter of signal detection theory contributes much more to the model’s complexity than the X_C parameter.
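To make the last point concrete, here is a minimal sketch (not taken from the talk) of how one might compare state-trace lengths under an equal-variance signal detection model: each parameter is varied in turn over an assumed range while the other is held fixed, and the arc length of the resulting curve in (false-alarm rate, hit rate) space is computed. The parameter ranges, the fixed values, and the use of plain Euclidean arc length are all illustrative assumptions and need not match the complexity measure developed in the talk.

```python
import numpy as np
from scipy.stats import norm

def sdt_point(d_prime, criterion):
    """(false-alarm rate, hit rate) under an equal-variance signal detection model."""
    fa = norm.sf(criterion)              # P("signal" response | noise trial)
    hit = norm.sf(criterion - d_prime)   # P("signal" response | signal trial)
    return fa, hit

def trace_length(points):
    """Euclidean arc length of a piecewise-linear state-trace curve."""
    diffs = np.diff(points, axis=0)
    return np.sqrt((diffs ** 2).sum(axis=1)).sum()

# Illustrative (assumed) parameter ranges and fixed values.
d_values = np.linspace(0.0, 3.0, 200)    # vary d' with the criterion held fixed
c_values = np.linspace(-1.5, 1.5, 200)   # vary the criterion with d' held fixed

trace_d = np.array([sdt_point(d, 0.5) for d in d_values])
trace_c = np.array([sdt_point(1.0, c) for c in c_values])

print("trace length when varying d':", trace_length(trace_d))
print("trace length when varying the criterion:", trace_length(trace_c))
```

Which parameter yields the longer trace in this sketch depends entirely on the assumed ranges; the talk's formal measure of each parameter's contribution to model complexity is what settles the comparison between d' and X_C.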
This is an in-person presentation on July 27, 2025 (15:40 ~ 16:00 EDT).
Peter Kvam
Operational definitions have long been a core approach to measuring and relating observed data to theoretical constructs in psychological science. However, many contemporary modeling approaches violate basic assumptions of operational definitions and operationalism more generally – foregoing assumptions about objectivity, repeatability, independence, and fixed elicitation procedures. Counterintuitively, these departures imbue model-based definitions of constructs with superior measurement properties, such as improved reliability and validity, when compared to their operational counterparts. Rather than relying on operational definitions of constructs, we suggest that psychology can adopt relational definitions, representing constructs as latent variables in a multilevel generative model of behavior, self-report, or neuroimaging data. These model-based metrics can better reflect measurement error at multiple levels, account for the interactions between measurement devices (tasks, scales) and measurement objects (participants, processes), provide a holistic account of latent constructs and how they manifest across different measurements, facilitate convergence or discrimination tests among different tasks seeking to measure the same construct, and improve scientific communication by clarifying core psychological concepts. Relational definitions of important constructs should naturally emerge as we apply models more regularly, and these definitions and models will improve as we discover mathematical approaches that are suited to describing psychological processes.
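As a toy illustration of the kind of multilevel generative structure described above (the construct, tasks, loadings, and noise levels below are hypothetical and are not the authors' model), a single latent variable per participant generates scores on two measurement devices, each with its own loading and error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants = 100

# Latent construct (e.g., "impulsivity") varying across participants.
theta = rng.normal(0.0, 1.0, n_participants)

# Each measurement device (task or scale) reflects the construct through its
# own loading and its own error level -- a device-by-construct interaction.
loadings = {"task_A": 0.8, "task_B": 0.5}
noise_sd = {"task_A": 0.4, "task_B": 0.9}

observed = {
    task: loadings[task] * theta + rng.normal(0.0, noise_sd[task], n_participants)
    for task in loadings
}

# Under an operational definition, each task score *is* the construct; under a
# relational definition, both scores are noisy reflections of the latent theta.
for task, scores in observed.items():
    r = np.corrcoef(scores, theta)[0, 1]
    print(f"{task}: correlation with the latent construct = {r:.2f}")
```

A relational definition identifies the construct with the latent variable rather than with either task score, so the two tasks can be compared as imperfect indicators of the same underlying quantity.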
This is an in-person presentation on July 27, 2025 (15:00 ~ 15:20 EDT).
Ms. Brittney Currie
Ms. Yu Huang
Ms. Anna Carlson
Ms. Sylvia E
Philosophers and psychologists broadly agree that humans share universal moral values, but scholars disagree on the details. We analyze two leading theories in moral psychology, Social Domain Theory (SDT) and Moral Foundations Theory (MFT), in novel ways. We embrace contemporary calls for hardening psychological research that have emerged from the replication crisis. (1) We apply a Popperian philosophy of science ideal of formulating and testing parsimonious falsifiable theories. To that end, we represent SDT and MFT, as well as three sub-theories of MFT, through restrictive mathematical constraints. (2) We avoid over- or mis-specifying hypothetical constructs by treating moral preference as an ordinal scale and probability as an absolute scale. (3) We translate the mathematical characterizations into formally precise, paradigm-cross-cutting, probabilistic choice models. (4) Rather than follow "statistical rituals," such as hunting for effects or leaning on statistical models poorly grounded in substantive theory, we employ custom-designed order-constrained models and data analytics. We evaluate empirical performance at several levels of analysis, through both model fitting and quantitative model competition. Our findings reveal rich, detailed, and nuanced insights into SDT and MFT as either individual-level or universal theories. While we make some judgment calls in our formalizations, other scholars can readily carry out similar mathematical and statistical analyses using their own understanding of SDT, MFT, or other theories, with the same or different data. Our approach provides a path for scholars who aspire to translate other verbal theories into parsimonious, falsifiable choice models, with minimal extraneous assumptions.
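For a flavor of what an order-constrained probabilistic choice model looks like in code (the specific constraint and the counts below are invented for illustration and are not the authors' formalization of SDT or MFT), consider a toy requirement that the probability of judging an act impermissible be ordered across three item types:

```python
# Hypothetical counts of "impermissible" judgments out of 100 trials per item type.
counts = {"harm": 92, "convention": 71, "personal": 38}
n_trials = 100

# Point estimates of the choice probabilities; the theory is stated only
# through the order of these probabilities, not their exact values.
p_hat = {item: c / n_trials for item, c in counts.items()}

# A toy order constraint: P(harm) >= P(convention) >= P(personal).
satisfies_order = p_hat["harm"] >= p_hat["convention"] >= p_hat["personal"]

print("estimated probabilities:", p_hat)
print("order constraint satisfied:", satisfies_order)
```

A full order-constrained analysis would evaluate such constraints against the likelihood of the data rather than against point estimates; the basic idea is that a theory is represented by the restricted region of probability space it permits.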
This is an in-person presentation on July 27, 2025 (15:20 ~ 15:40 EDT).