Measurement
Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how scale direction, scale labels, and the number of response options affect ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using different scales. We find that people tend to give positive ratings, especially when using letter scales rather than number scales, and that they use the end-points less often when a scale is presented in reverse. Our model-based analysis allows us to infer how the features of scales shift responses toward higher or lower ratings and how they compress scale use to make end-point responses more or less likely. The model also makes predictions about equivalent ratings across scales, which we demonstrate using real-world evaluation data. Our study has implications for the design of scales and for their use in assessment.
The continuous performance task (CPT) is widely used to assess deficits in sustained attention among people with psychotic disorders. People with psychotic disorders perform more poorly on the CPT than individuals without psychosis, but it is not clear which specific factors contribute to these between-group differences in sustained attention. To investigate them, we propose a theory-based hierarchical Bayesian model for the CPT and apply it to a data set comprising people with and without first-episode psychosis. The model allows us to identify potential mechanisms underlying performance deficits on the CPT by interpreting changes in its estimated parameters. Applying the model to the data set reveals that people with first-episode psychosis may have more difficulty identifying mismatches between stimuli and using this mismatch information to guide their behavior.
The ranking procedure requires participants to rank the entries on a line-up memory test in which there is a single old item and n-1 novel foils; the ranking runs from the perceived most likely target (rank 1) to the least likely target (rank n). This assessment procedure yields a critical test of the two-high-threshold model (Chechile & Dunn, 2021). Moreover, ranking data can readily be used to construct a hazard function, which can be useful for assessing any model of recognition memory. In the current paper, the ranking procedure is employed to examine memory for order. After a series of items is presented for study, a random triplet of those items is tested to assess memory for the items' relative order. Chechile and Pintea (2021) previously developed an Event Order (EO) model for measuring four states of triplet order, and provided evidence that order knowledge is an attribute of memory separate from item content. They estimated the four states of order knowledge from data obtained using a series of forced-choice tests. In the current paper, it is shown that the ranking procedure can also be used to estimate the parameters of the EO model. The ranking test method also provides a way to generate an empirical hazard function for memory order, which can be useful for comparing rival models of order memory.
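The empirical hazard function mentioned above can be estimated directly from observed target ranks. As a minimal sketch (not the authors' specific construction), the discrete hazard at rank k is the probability that the target receives rank k given that it was not ranked higher; the function name and example data below are hypothetical:

```python
from collections import Counter

def empirical_hazard(ranks, n):
    """Estimate the discrete hazard h(k) = P(target ranked k | not ranked above k)
    from the observed ranks (1..n) assigned to the old item across trials."""
    counts = Counter(ranks)
    total = len(ranks)
    hazard = []
    remaining = total  # trials still "at risk" at rank k
    for k in range(1, n + 1):
        c = counts.get(k, 0)
        hazard.append(c / remaining if remaining else 0.0)
        remaining -= c
    return hazard

# Hypothetical ranks from 20 trials of a 4-alternative ranking test
ranks = [1] * 10 + [2] * 5 + [3] * 3 + [4] * 2
print(empirical_hazard(ranks, 4))  # [0.5, 0.5, 0.6, 1.0]
```

A flat hazard across ranks would suggest guessing among the remaining alternatives, whereas a front-loaded hazard reflects genuine discrimination, which is why the shape of this function can discriminate rival memory models.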
The Intra-Extra-dimensional set shift task (IEDS) is a widely used test of learning and attention, believed to be sensitive to aspects of executive function. The task proceeds through a number of stages, and it is generally claimed that patterns of errors across stages can discriminate between reduced attention switching and more general reductions in rates of learning. A number of papers have used the IEDS task to argue for specific attention-shifting difficulties in Autism Spectrum Disorder (ASD) and Schizophrenia; however, it remains unclear how well the IEDS really differentiates between reduced attention shifting and other causes of impaired performance. To address this issue, we introduce a simple computational model of performance in the IEDS task, designed to separate the competing effects of attention shifting and general learning rate. We fit the model to data from ASD and comparison individuals matched on age and IQ, as well as to data from four previous studies that used the IEDS task, using a combination of MCMC and Approximate Bayesian Computation techniques. The model fits show no consistent evidence for reduced attention-shifting rates in ASD or Schizophrenia. Instead, we find that performance is better explained by differences in learning rate, particularly learning from punishment, which we show correlates with IQ. We therefore argue that the IEDS task is not a good measure of attention shifting in clinical groups.
Eyewitness identifications play a key role in many criminal investigations. Investigators have a wide range of options for how they conduct an identification attempt, and eyewitness researchers have explored many of the relevant variables, such as whether a suspect is shown to a witness individually (a “showup”) or together with a number of fillers (a “lineup”). Unfortunately, different measures of lineup effectiveness often support different research conclusions and policy recommendations. We show that existing measures are incomplete in the sense that they do not use all of the information from the reference population defining witness performance. We introduce a complete measure, Expected Information Gain (EIG), by applying information-theory principles to identification data. EIG identifies the procedure that produces the most information about suspect guilt or innocence across all of the possible witness responses. Thus, EIG is a useful measure for policy-focused research.
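The information-theoretic idea behind EIG can be sketched concretely: it is the mutual information between suspect guilt and the witness's response, i.e., the prior uncertainty about guilt minus the expected posterior uncertainty after observing a response. The sketch below is a generic illustration of that computation, assuming a simple two-hypothesis setup; the function name and the response probabilities are hypothetical, not the paper's data:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_information_gain(p_guilty, resp_given_guilty, resp_given_innocent):
    """Mutual information I(Guilt; Response) in bits.

    resp_given_* map each witness response (e.g. 'suspect', 'filler',
    'reject') to its probability under that hypothesis.
    """
    prior = [p_guilty, 1 - p_guilty]
    eig = entropy(prior)
    for r in resp_given_guilty:
        # marginal probability of response r
        p_r = (p_guilty * resp_given_guilty[r]
               + (1 - p_guilty) * resp_given_innocent[r])
        if p_r == 0:
            continue
        # posterior over guilt after observing response r (Bayes' rule)
        posterior = [p_guilty * resp_given_guilty[r] / p_r,
                     (1 - p_guilty) * resp_given_innocent[r] / p_r]
        eig -= p_r * entropy(posterior)
    return eig

# Hypothetical response probabilities for a six-person lineup
guilty = {"suspect": 0.55, "filler": 0.15, "reject": 0.30}
innocent = {"suspect": 0.05, "filler": 0.35, "reject": 0.60}
print(round(expected_information_gain(0.5, guilty, innocent), 3))
```

Because the expectation runs over all possible responses, a procedure scores well only if every response category, including rejections and filler picks, carries diagnostic value, which is the sense in which EIG is a "complete" measure.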