Scalable Bayesian methods for cognitive psychometrics
Bayesian inference provides a principled framework for modeling cognitive and psychometric data, but scalability remains a challenge. Traditional MCMC methods become computationally impractical on very large datasets, often requiring long runs of continuous computation and, in many cases, producing non-convergent chains. Divide-and-conquer (DC) methods offer a scalable alternative by partitioning the data into disjoint subsets, performing computations separately on each, and then recombining the resulting "subposterior" MCMC samples to approximate the full posterior distribution. Crucially, the choice of recombination strategy directly affects posterior accuracy and predictive performance. Here, we evaluate different recombination strategies across full and partitioned datasets in cognitive and psychometric models, including Item Response Theory models. Comparing DC recombination against full-data inference allows us to assess the degree to which each strategy accurately tracks the target posterior. Using this approach, we systematically compare trade-offs between computational cost and posterior accuracy, highlighting the conditions under which different recombination rules succeed or fail. Our analysis considers factors such as dataset size and model type, providing insights into the practical limitations of DC methods for scalable Bayesian inference.
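To make the DC workflow concrete, the following is a minimal sketch of one widely used recombination rule, consensus Monte Carlo (Scott et al., 2016), which combines subposterior draws by a precision-weighted average. The toy Gaussian mean model, the number of shards, and all variable names are illustrative assumptions, not the models or recombination rules evaluated in this paper; the conjugate setup is chosen so each subposterior can be sampled exactly, standing in for per-shard MCMC.

```python
# Sketch of divide-and-conquer recombination via consensus Monte Carlo:
# run inference on disjoint shards, then average draws across shards
# with weights proportional to each shard's subposterior precision.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y ~ Normal(theta, sigma^2), flat prior on theta
# (a flat prior fractionates trivially across shards).
theta_true, sigma = 1.5, 2.0
y = rng.normal(theta_true, sigma, size=100_000)

K = 10                                   # number of disjoint shards
shards = np.array_split(y, K)
n_draws = 5_000

# Subposterior "sampling": for this conjugate toy model each shard's
# subposterior is Normal(mean(y_k), sigma^2 / n_k), so we draw exactly
# rather than running MCMC on each shard.
sub_draws = np.stack([
    rng.normal(shard.mean(), sigma / np.sqrt(len(shard)), size=n_draws)
    for shard in shards
])                                        # shape (K, n_draws)

# Consensus recombination: weight each shard by the inverse of its
# subposterior variance, then average across shards draw-by-draw.
w = 1.0 / sub_draws.var(axis=1, ddof=1)   # shape (K,)
consensus = (w[:, None] * sub_draws).sum(axis=0) / w.sum()

# Exact full-data posterior for comparison.
full_mean, full_sd = y.mean(), sigma / np.sqrt(len(y))
print(f"consensus: {consensus.mean():.4f} sd {consensus.std(ddof=1):.4f}")
print(f"full data: {full_mean:.4f} sd {full_sd:.4f}")
```

In this Gaussian case the precision-weighted average recovers the full posterior essentially exactly; for non-Gaussian subposteriors, such as those arising in IRT models, the rule is only approximate, which is precisely the kind of condition under which the recombination strategies compared here can diverge.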