A primer of inter-rater reliability in clinical measurement studies: Pros and pitfalls (Alavi, Journal of Clinical Nursing)
Exact one-sided confidence limits for Cohen's kappa as a measurement of agreement
Assessment of polytraumatized patients according to the Berlin Definition: Does the addition of physiological data really improve interobserver reliability? (PLOS ONE)
Confidence Intervals for Kappa
Interpretation of Kappa Values (Yingting Sherry Chen, Towards Data Science)
Intra-Rater and Inter-Rater Reliability of a Medical Record Abstraction Study on Transition of Care after Childhood Cancer (PLOS ONE)
Kappa coefficient (95% confidence interval) for the intra- and... (table)
Cohen's Kappa and Fleiss' Kappa: How to Measure the Agreement Between Raters (Audhi Aprilliant, Medium)
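The common thread across these sources is Cohen's kappa and the confidence interval around it. As a rough illustration only (not drawn from any of the listed papers), below is a minimal Python sketch that computes Cohen's kappa for two raters and a simple large-sample 95% confidence interval; the standard error uses the basic approximation sqrt(p_o(1 - p_o) / (n(1 - p_e)^2)) rather than the exact or small-sample methods some of these sources discuss.

```python
# Minimal sketch: Cohen's kappa for two raters plus a simple
# large-sample 95% confidence interval (kappa +/- 1.96 * SE).
# Illustrative only; not taken from any of the sources listed above.

import numpy as np

def cohens_kappa_with_ci(rater_a, rater_b, z=1.96):
    """Return (kappa, ci_lower, ci_upper) for two equal-length label sequences."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    n = a.size

    # Confusion matrix: rows = rater A's labels, columns = rater B's labels.
    conf = np.zeros((categories.size, categories.size))
    for i, ca in enumerate(categories):
        for j, cb in enumerate(categories):
            conf[i, j] = np.sum((a == ca) & (b == cb))

    p_o = np.trace(conf) / n                                    # observed agreement
    p_e = np.sum(conf.sum(axis=1) * conf.sum(axis=0)) / n ** 2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)

    # Simple asymptotic standard error; adequate for a quick interval,
    # not a substitute for exact or bootstrap confidence limits.
    se = np.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    return kappa, kappa - z * se, kappa + z * se

if __name__ == "__main__":
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
    rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
    k, lo, hi = cohens_kappa_with_ci(rater_a, rater_b)
    print(f"kappa = {k:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```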