Cohen's Kappa is a measure of inter-rater or intra-rater reliability for qualitative (categorical) variables scoring the results of a test procedure. Inter-rater reliability is the agreement between examiners; intra-rater reliability is the stability of a measure over time.
This test is used when one wants to compare the agreement between assessments made by different examiners, or between assessments made at different times.
For example, one could be interested in knowing whether coronavirus test procedure X gives the same results 1) when the test is carried out by two different nurses, or 2) when it is repeated 15 minutes apart. Point 1) assesses the inter-rater reliability of the test; point 2) assesses its intra-rater reliability.
To run this test, go to "Test 2 variables" and choose two categorical variables.
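The test itself is run from the interface, but for readers who want to reproduce the computation in code, here is a minimal sketch using Python and scikit-learn's `cohen_kappa_score`. The ratings below are made-up illustrative data, not output from the tool:

```python
# Minimal sketch: Cohen's kappa between two raters (inter-rater reliability).
# Illustrative data only: two nurses reading the same 10 coronavirus tests.
from sklearn.metrics import cohen_kappa_score

nurse_a = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "pos", "neg"]
nurse_b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]

kappa = cohen_kappa_score(nurse_a, nurse_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

The same call works for intra-rater reliability: simply pass the two series of assessments made by the same examiner at different times.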
How to interpret Cohen's Kappa?
The agreement index is commonly interpreted as follows (Landis & Koch, 1977):
- < 0: No agreement
- 0.00 - 0.20: Very low agreement
- 0.21 - 0.40: Low agreement
- 0.41 - 0.60: Moderate agreement
- 0.61 - 0.80: High agreement
- 0.81 - 1.00: Quasi perfect agreement
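As a rough illustration, this scale could be applied in code with a small helper function (hypothetical, using the same thresholds and labels as the list above):

```python
def interpret_kappa(kappa: float) -> str:
    """Map a kappa value to the agreement categories listed above (Landis & Koch, 1977)."""
    if kappa < 0:
        return "No agreement"
    if kappa <= 0.20:
        return "Very low agreement"
    if kappa <= 0.40:
        return "Low agreement"
    if kappa <= 0.60:
        return "Moderate agreement"
    if kappa <= 0.80:
        return "High agreement"
    return "Quasi perfect agreement"

print(interpret_kappa(0.72))  # "High agreement"
```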