Memory I
Mr. Ani Doppalapudi
Retrieval-Induced Forgetting (RIF) is defined as reduced recall of a studied item when one is cued to recall some other studied item. Inhibition of the retrieved trace has been proposed as a cause: Anderson, Bjork, and Bjork, for example, hypothesized that a first-letter cue for recall of a word starting with that letter might sometimes produce retrieval of another word not starting with that letter, and that this incorrectly retrieved trace would be inhibited (degraded). Other theories suggest the opposite: that a wrongly retrieved trace would be stored and/or strengthened. It is not known whether such incorrect retrievals occur often enough to produce either inhibition or its opposite. The present studies included conditions with non-diagnostic picture primes designed to induce retrieval of the wrong word. All conditions compared final recall of a studied word that might have been inhibited (or strengthened) by cued testing of a different word with final recall of a word studied after the testing. A trace can be inhibited only if it exists, so conditions posited to produce inhibition should increase the difference between recall of early versus late words, whereas storage or strengthening should decrease that difference. In either case, picture priming should amplify the effect. Every test in two experiments produced the opposite of inhibition: incorrectly retrieved words were better recalled later. We model the results with well-established memory processes, and we suggest further that RIF found in other studies can be explained by such processes as well, namely competition and context change.
Caren Rotello
Prof. Yonatan Goshen-Gottstein
The field of recognition memory is in the midst of a measurement crisis. Simulations (Rotello et al., 2008; Levi et al., 2024) show that standard sensitivity measures, including d′ and Pr, confound sensitivity with response bias. Consequently, experimental conditions with similar sensitivity but different levels of bias exhibit spurious significant differences (Type-I errors) at rates far exceeding 5%. The bias confound arises because standard measures rely on assumptions inconsistent with the data-generating model of recognition memory (e.g., unequal-variance Gaussian lure and target distributions). Moreover, participants differ in the extent to which the target distribution has a larger variance than the lure distribution, making it still more difficult for any measure to capture the precise data-generating model. Here, we promote a lesser-known signal-detection measure, da, which accommodates the unequal-variance assumption. Using simulations and empirical data, we estimated the variability of the target distribution relative to that of the lures, presumably allowing for a valid calculation of sensitivity. In a series of recognition experiments, we manipulated implied base rates of targets. Sensitivity was measured across two iso-sensitive conditions with da as well as d′, Pr, and other standard measures. Only da yielded the expected Type-I error rate of approximately 5%. Our findings illustrate how the estimation of target variability at the participant level interacts with the number of trials and sample size. These results underscore the importance of methodological precision in measuring memory, offering da as the alternative for future research.
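For readers unfamiliar with the measures being compared, the standard textbook definitions of d′, Pr, and da (as given, e.g., in Macmillan and Creelman's treatment of detection theory) can be sketched as follows. The helper names are ours, and s denotes the zROC slope (lure SD divided by target SD); nothing here reproduces the authors' analysis code.

```python
from statistics import NormalDist

def z(p):
    """Inverse standard-normal CDF (probit transform)."""
    return NormalDist().inv_cdf(p)

def d_prime(hit, fa):
    """Equal-variance Gaussian sensitivity: d' = z(H) - z(F)."""
    return z(hit) - z(fa)

def pr(hit, fa):
    """Two-high-threshold discrimination index: Pr = H - F."""
    return hit - fa

def d_a(hit, fa, s):
    """Unequal-variance Gaussian sensitivity:
    da = sqrt(2 / (1 + s^2)) * (z(H) - s * z(F)),
    where s is the zROC slope (lure SD / target SD).
    With s = 1 (equal variance), da reduces to d'."""
    return (2 / (1 + s ** 2)) ** 0.5 * (z(hit) - s * z(fa))
```

The practical difference is visible in the last function: d′ implicitly fixes s at 1, so when the target distribution is more variable than the lure distribution (s < 1), d′ absorbs criterion placement into the sensitivity estimate, whereas da does not.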
Dr. Greg Cox
We present four experiments that examine perception and memory for a novel set of auditory stimuli, using multidimensional scaling and cognitive modeling to clarify how people perceive and recognize these items. The stimuli are auditory "textures" constructed by adjusting the distribution of power across upper frequency bands. In Experiment 1, people rated similarity between pairs of stimuli; in Experiments 2 and 3, they also engaged in a recognition memory task using the same stimuli. In Experiment 4, participants completed all the tasks from the first three experiments and additionally rated stimuli for distinctiveness. Multidimensional scaling suggested the stimuli were perceived along three dimensions, a result that replicated across all four experiments. Similarity ratings, recency, and list homogeneity predicted recognition performance, but distinctiveness ratings did not. The Exemplar-Based Random Walk model (Nosofsky & Palmeri, 1997) accommodated all of these effects. Taken together, our findings extend prior work (Visscher et al., 2007) to show that memory and attention processes in the auditory domain are fundamentally like those in the visual domain, though particularly strong recency effects in the auditory domain may be due to the unique structure of echoic memory. We conclude by discussing how the stimuli introduced in these experiments can be used as "building blocks" to test hypotheses about perception and memory for complex, naturalistic sounds like speech or music while retaining tight experimental control.
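As a rough illustration of the EBRW mechanism the abstract invokes (not the authors' fitted model), the following sketch simulates a single trial: stored exemplars race to be retrieved with probability proportional to their similarity to the probe, and each retrieval drives a random walk one step toward one of two response thresholds. All parameter names (c, a, b) are illustrative.

```python
import math
import random

def ebrw_trial(probe, exemplars, labels, c=1.0, a=5, b=5, rng=random):
    """One trial of an Exemplar-Based Random Walk in the spirit of
    Nosofsky & Palmeri (1997).
    probe     -- feature tuple for the test item
    exemplars -- list of feature tuples stored in memory
    labels    -- +1 / -1 response evidence associated with each exemplar
    Returns (response, steps): +1 if the walk reaches +a, -1 if it reaches -b;
    steps is the number of retrievals, a coarse response-time proxy.
    """
    # Similarity falls off exponentially with distance (Shepard's law),
    # scaled by the sensitivity parameter c.
    sims = [math.exp(-c * math.dist(probe, ex)) for ex in exemplars]
    total = sum(sims)
    counter, steps = 0, 0
    while -b < counter < a:
        # Exemplars "race" to be retrieved; the retrieval probability of
        # each is proportional to its similarity to the probe.
        r = rng.uniform(0, total)
        acc = 0.0
        for s_i, lab in zip(sims, labels):
            acc += s_i
            if r <= acc:
                counter += lab  # the winner's label drives the walk
                break
        steps += 1
    return (1 if counter >= a else -1), steps
```

Because similar exemplars are retrieved faster and more consistently, the model naturally predicts the joint effects of similarity, recency (via stronger recent traces), and list homogeneity that the abstract reports.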
Dr. Constantin Meyer-Grant
Dr. Henrik Singmann
Samuel Harding
Signal Detection Theory (SDT) has been a cornerstone of psychological research, particularly in recognition memory. However, its conventional application relies predominantly on the Gaussian assumption, a reliance driven more by historical precedent than by theoretical necessity, and one with notable drawbacks. This talk critically examines these limitations and introduces an alternative: a principled parametric approach based on extreme-value distributions, specifically event minima (the Gumbel_min SDT model). A key feature distinguishing this model from other alternatives is its foundation in a behavioral axiom of invariance under choice-set expansions, akin to Yellott's (1977) seminal work on Luce's Choice Theory. We present a novel recognition-memory experiment that directly supports this behavioral axiom and, by extension, the Gumbel_min model. Furthermore, we benchmark its performance against traditional Gaussian SDT across various recognition-memory tasks, including ranking, forced-choice, and simultaneous detection-identification paradigms. Our findings underscore the advantages of Gumbel_min-based modeling, particularly its robust sensitivity index, g′, which can be computed from a single pair of hit and false-alarm rates.
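The claim that g′ is computable from a single pair of hit and false-alarm rates can be made concrete under simple illustrative assumptions: unit-scale Gumbel_min evidence distributions, the lure distribution located at 0, the target distribution shifted by g′, and an "old" response whenever evidence exceeds a criterion c. Under those assumptions F = exp(-e^c) and H = exp(-e^(c - g')), so a complementary log-log transform of each rate recovers g′ directly. The exact parameterization in the talk may differ; this is our sketch, not the authors' definition.

```python
import math

def g_prime(hit, fa):
    """Gumbel_min sensitivity from a single (H, F) pair, assuming
    unit-scale Gumbel_min evidence, lure at 0, target shifted by g'.
    From F = exp(-e^c) we get ln(-ln F) = c, and from
    H = exp(-e^(c - g')) we get ln(-ln H) = c - g', hence
    g' = ln(-ln F) - ln(-ln H)."""
    return math.log(-math.log(fa)) - math.log(-math.log(hit))
```

Note the contrast with Gaussian d′: no inverse-normal transform is needed, and because the transform is exact under the model, g′ does not vary with criterion placement when the Gumbel_min assumptions hold.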