Several statistics have been proposed to detect Differential Item Functioning (DIF) and quantify its magnitude. However, different DIF statistics do not all quantify the same population parameters and therefore cannot be interpreted interchangeably. We focus on deriving principled guidelines for the interpretation of the non-compensatory differential item functioning (NCDIF) parameter as a DIF effect size measure. This parameter quantifies, for the focal population of examinees, the expected (average) squared difference in the item score induced by possible differences between the groups' item parameters. We first investigate in which situations the Delta Mantel-Haenszel (∆MH) statistic is comparable to NCDIF, so that the ETS cutoff points can serve meaningfully as a benchmark. Then, we examine the parameter's behavior under various conditions of uniform and non-uniform DIF, as well as the effect that the distribution of the focal group exerts on its magnitude. Lastly, using one of the estimators of the NCDIF parameter, we evaluate the accuracy of the derived classification rules for NCDIF and identify an approximate bias correction for this estimator. Overall, these results provide useful guidelines for interpreting the magnitude of NCDIF that are consistent with its specific nature and improve the alignment of DIF classifications with the magnitude of the parameter.
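As a concrete illustration of this definition only (the 2PL item response function, the normal focal distribution, the parameter values, and the quadrature grid below are assumptions for the sketch, not taken from the study), NCDIF can be approximated as the focal-group average of the squared difference between the item response functions implied by the two sets of item parameters:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """2PL item response function: probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ncdif(a_ref, b_ref, a_foc, b_foc, focal_mean=0.0, focal_sd=1.0, n_points=201):
    """Approximate NCDIF = E_F[(P_foc(theta) - P_ref(theta))^2]: the focal-group
    expectation of the squared difference between the two item response functions,
    here computed with normal weights on an equally spaced theta grid."""
    theta = np.linspace(focal_mean - 5 * focal_sd, focal_mean + 5 * focal_sd, n_points)
    w = np.exp(-0.5 * ((theta - focal_mean) / focal_sd) ** 2)
    w /= w.sum()  # normalized focal-group weights
    d = irf_2pl(theta, a_foc, b_foc) - irf_2pl(theta, a_ref, b_ref)
    return float(np.sum(w * d ** 2))

# Illustrative uniform DIF: the difficulty parameter is shifted by 0.4 for the focal group
print(ncdif(a_ref=1.2, b_ref=0.0, a_foc=1.2, b_foc=0.4))
```

Because the squared difference is averaged over the focal ability distribution, shifting that distribution changes the value even when the item parameters are held fixed, which is why the abstract examines the effect of the focal-group distribution on the magnitude of NCDIF.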
Since the onset of the pandemic, a plethora of explanations have been proposed for the origins of SARS-CoV-2, ranging from zoonotic transmission and a laboratory accident to deliberate genetic manipulation and divine intervention. We use a new methodology that applies Thurstone modeling to understand people's attitudes toward the origins of SARS-CoV-2 as expressed by rankings. A Thurstone model provides an aggregate ranking of items across all respondents and allows us to identify how much an individual deviates from the aggregate ranking. We apply the methodology to two sample populations: an undergraduate subject pool and a U.S. representative sample. Undergraduate respondents ranked zoonotic transmission as the most plausible explanation, whereas respondents in the U.S. representative sample ranked zoonotic transmission and a laboratory accident as the most plausible explanations. We discuss the advantages that rankings have over Likert-type ratings for understanding people's attitudes.
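For readers unfamiliar with ranking aggregation, the sketch below illustrates the general idea with a classical Thurstone Case V scaling of hypothetical rank data; the respondent rankings and the Case V assumptions are illustrative and are not the authors' exact model. Pairwise preference proportions are derived from the rankings, transformed by the inverse normal CDF, and averaged into scale values whose ordering gives an aggregate ranking:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data: each row is one respondent's ranking, best first, given as item indices
# (e.g., items 0..3 could stand for zoonotic transmission, laboratory accident, ...).
rankings = np.array([
    [0, 1, 2, 3],
    [0, 2, 1, 3],
    [1, 0, 2, 3],
    [0, 1, 3, 2],
])
n_items = rankings.shape[1]

# Pairwise preference proportions implied by the rankings
positions = np.argsort(rankings, axis=1)  # positions[r, i] = rank position of item i for respondent r
p = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        if i != j:
            p[i, j] = np.mean(positions[:, i] < positions[:, j])

# Thurstone Case V scale values: average inverse-normal transform of the proportions
z = norm.ppf(np.clip(p, 0.01, 0.99))      # clip to keep the transform finite
np.fill_diagonal(z, 0.0)
scale = z.mean(axis=1)
print("aggregate order (most plausible first):", np.argsort(-scale))
```

An individual's deviation from the aggregate could then be summarized by, for example, a rank correlation between that respondent's ranking and the aggregate order, though the paper's model-based measure of deviation is not reproduced here.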
The problem of representing a system of choice probabilities through a probability distribution over rankings, as formulated by Falmagne (1978), is a central issue in mathematical psychology, particularly in stochastic choice modeling. His approach relies on Block-Marschak polynomials, which impose consistency conditions on choice probabilities, leading to a solution that is algorithmic rather than explicit. We provide a streamlined proof of this theorem and derive an explicit solution, in contrast to Falmagne’s approach. Using Gaussian elimination, we transform the original constraints on choice probabilities into a system of linear constraints on Block-Marschak polynomials. This reformulation enables entropy maximization under linear constraints, a well-established technique in information theory, leading to a Gibbs-type solution. The resulting ranking probabilities are expressed as a normalized product of Block-Marschak polynomials, offering a more structured representation of stochastic choice behavior. Applying this framework to the Luce model (logit), we recover the Exploded Logit Model. Unlike standard derivations that rely on multiple integrals, our approach does not use the random utility representation, providing a novel and more direct justification. This method also extends to Generalized Extreme Value (GEV) models, which previously lacked an explicit ranking probability representation. Furthermore, it suggests potential extensions to other probabilistic choice models where existing representations remain incomplete. This framework bridges the gap between stochastic choice modeling and optimization principles from information theory, offering a unified perspective on the structure of ranking probabilities and their rationalizability.
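The recovery of the Exploded Logit Model in the Luce case admits a direct numerical check; in the sketch below the Luce weights v are illustrative, not taken from the paper. Each complete ranking receives the product, over positions, of the current item's weight divided by the total weight of the items not yet ranked; these ranking probabilities sum to one and reproduce the Luce choice probabilities on every menu, which is the rationalizability property at stake:

```python
import itertools
import numpy as np

# Illustrative Luce weights (in logit form these would be exponentials of utilities)
v = {"a": 3.0, "b": 2.0, "c": 1.0}
items = list(v)

def exploded_logit(order):
    """Exploded (rank-ordered) logit probability of a complete ranking:
    product over positions of v[current item] / sum of v over the not-yet-ranked items."""
    prob, remaining = 1.0, list(order)
    for x in order:
        prob *= v[x] / sum(v[y] for y in remaining)
        remaining.remove(x)
    return prob

orders = list(itertools.permutations(items))
# The ranking probabilities form a distribution over rankings ...
assert np.isclose(sum(exploded_logit(o) for o in orders), 1.0)

# ... and they rationalize the Luce choice probabilities: for every menu and every item in it,
# the probability of being ranked above the rest of the menu equals v[item] / sum of v over the menu.
for size in range(2, len(items) + 1):
    for menu in itertools.combinations(items, size):
        for a in menu:
            induced = sum(exploded_logit(o) for o in orders
                          if min(o.index(x) for x in menu) == o.index(a))
            assert np.isclose(induced, v[a] / sum(v[x] for x in menu))
print("exploded logit ranking probabilities reproduce the Luce choice probabilities")
```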