A Bayesian Approach to Pre-Post Comparison of Inter-Rater Agreement in Ordinal Ratings
Pub. online: 16 December 2025
Type: Statistical Data Science
Open Access
Received
2 September 2025
Accepted
8 December 2025
Published
16 December 2025
Abstract
Inter-rater agreement is fundamental to decision making in medicine, psychology, and the social sciences, as it reflects the quality and reliability of rating systems. The intraclass correlation coefficient (ICC) has been widely used as a measure of inter-rater agreement. To date, there has been no methodological development that properly assesses improvement in ICC for pre–post studies with ordinal ratings, and it remains uninvestigated whether and how correlations between pre- and post-intervention scores affect the estimation and comparison of ICCs. We present a Bayesian hierarchical probit framework for evaluating changes in ICCs in such settings. The model incorporates rater- and item-level correlations and compares two parameterizations: an “individual components” prior that separately models variances and correlations, and an inverse Wishart prior. Simulation studies show that accounting for pre–post correlation substantially improves estimation accuracy and power to detect changes in agreement, whereas ignoring it reduces efficiency. Application to a multicenter study on conjunctival inflammation demonstrates that a novel grading scale markedly increased inter-rater agreement. This framework underscores the importance of modeling ordinal outcomes appropriately and provides a flexible Bayesian tool for evaluating the effectiveness of interventions on inter-rater agreement in pre–post studies.
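To make the quantity being compared concrete, the ICC in a one-way random-effects model is the share of total variance attributable to the rated items: with latent ratings y_ij = mu + b_i + e_ij, ICC = sigma_b^2 / (sigma_b^2 + sigma_e^2). The sketch below is illustrative only (all variance components and sample sizes are hypothetical, and it uses the classical ANOVA estimator ICC(1,1) of Shrout and Fleiss rather than the paper's Bayesian probit model):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-way random-effects model on the latent scale:
#   y_ij = mu + b_i + e_ij,  b_i ~ N(0, s_b2),  e_ij ~ N(0, s_e2)
# ICC = s_b2 / (s_b2 + s_e2)
s_b2, s_e2 = 1.0, 1.0               # hypothetical variance components
icc_true = s_b2 / (s_b2 + s_e2)      # here 0.5

n_items, n_raters = 200, 4           # hypothetical study size
b = rng.normal(0.0, np.sqrt(s_b2), size=(n_items, 1))
y = b + rng.normal(0.0, np.sqrt(s_e2), size=(n_items, n_raters))

# ANOVA-based estimator ICC(1,1): contrast between-item and
# within-item mean squares
ms_between = n_raters * y.mean(axis=1).var(ddof=1)
ms_within = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (
    n_items * (n_raters - 1)
)
icc_hat = (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)
print(icc_true, round(icc_hat, 3))
```

A pre–post comparison of agreement amounts to contrasting two such ICCs; the paper's contribution is to do this jointly, on the latent scale of an ordinal probit model, while modeling the correlation between pre- and post-intervention ratings.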
Supplementary material
The supplementary material includes supplementary tables and R code.
References
Albert JH, Chib S (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422): 669–679. https://doi.org/10.1080/01621459.1993.10476321
Atenafu EG, Hamid JS, To T, Willan AR, Feldman BM, Beyene J (2012). Bias-corrected estimator for intraclass correlation coefficient in the balanced one-way random effects model. BMC Medical Research Methodology, 12(126): 1–8. https://doi.org/10.1186/1471-2288-12-126
Calle-Alonso F, Perez Sanchez CJ (2015). A Monte Carlo–based Bayesian approach for measuring agreement in a qualitative scale. Applied Psychological Measurement, 39(3): 189–207. https://doi.org/10.1177/0146621614554080
Cohen J (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1): 37–46. https://doi.org/10.1177/001316446002000104
Cohen J (1968). Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4): 213–220. https://doi.org/10.1037/h0026256
Eziama E, Nguyen C, Foster CS, Heydinger S, Cao JH (2025). Novel grading scale for conjunctival inflammation in cicatrizing conjunctivitis associated with pemphigoid. Ocular Immunology and Inflammation, 33(4): 649–653. https://doi.org/10.1080/09273948.2024.2434128
Fanshawe TR, Lynch AG, Ellis IO, Green AR, Hanka R (2008). Assessing agreement between multiple raters with missing rating information, applied to breast cancer tumour grading. PLoS ONE, 3(8): e2925–e2936. https://doi.org/10.1371/journal.pone.0002925
Gajewski BJ, Hart S, Bergquist-Beringer S, Dunton N (2007). Inter-rater reliability of pressure ulcer staging: Ordinal probit Bayesian hierarchical model that allows for uncertain rater response. Statistics in Medicine, 26(25): 4602–4618. https://doi.org/10.1002/sim.2877
Giraudeau B, Mary J (2001). Planning a reproducibility study: How many subjects and how many replicates per subject for an expected width of the 95 per cent confidence interval of the intraclass correlation coefficient. Statistics in Medicine, 20(21): 3205–3214. https://doi.org/10.1002/sim.935
Hallgren KA (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1): 23–34. https://doi.org/10.20982/tqmp.08.1.p023
Konishi S (1985). Normalizing and variance stabilizing transformations for intraclass correlations. Annals of the Institute of Statistical Mathematics, 37(1): 87–94. https://doi.org/10.1007/BF02481082
Müller R, Büttner P (1994). A critical discussion of intraclass correlation coefficients. Statistics in Medicine, 13(23–24): 2465–2476. https://doi.org/10.1002/sim.4780132310
Nelson KP, Edwards D (2015). Measures of agreement between many raters for ordinal classifications. Statistics in Medicine, 34(23): 3116–3132. https://doi.org/10.1002/sim.6546
Olkin I, Lou Y, Stokes L, Cao J (2015). Analyses of wine-tasting data: A tutorial. Journal of Wine Economics, 10(1): 4–30. https://doi.org/10.1017/jwe.2014.26
Shrout PE, Fleiss JL (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2): 420–428. https://doi.org/10.1037/0033-2909.86.2.420
Tran QD, Demirhan H, Dolgun A (2021). Bayesian approaches to the weighted kappa-like inter-rater agreement measures. Statistical Methods in Medical Research, 30(10): 2329–2351. https://doi.org/10.1177/09622802211037068
Wang C, Yandell B, Rutledge J (1991). Bias of maximum likelihood estimator of intraclass correlation. Theoretical and Applied Genetics, 82(4): 421–424. https://doi.org/10.1007/BF00588594
Yue C, Chen S, Sair HI, Airan R, Caffo BS (2015). Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models. Computational Statistics & Data Analysis, 89: 126–133. https://doi.org/10.1016/j.csda.2015.02.012
Zhang S, Cao J, Ahn C (2018). Sample size calculation for before–after experiments with partially overlapping cohorts. Contemporary Clinical Trials, 64: 274–280. https://doi.org/10.1016/j.cct.2015.09.015
Zhang Z (2021). A note on Wishart and inverse Wishart priors for covariance matrix. Journal of Behavioral Data Science, 1(2): 119–126. https://doi.org/10.35566/jbds/v1n2/p2