Cohen's kappa and weighted kappa for two-rater agreement analysis: essential for assessing agreement between two pathologists, evaluating diagnostic reproducibility, and supporting quality assurance in clinical pathology practice.

Usage

cohenskappa(
  data,
  rater1,
  rater2,
  kappa_type = "cohen",
  confidence_level = 0.95,
  ci_method = "asymptotic",
  bootstrap_samples = 1000,
  exact_agreement = TRUE,
  marginal_homogeneity = TRUE,
  agreement_plot = TRUE,
  confusion_matrix = TRUE,
  category_analysis = FALSE,
  interpretation_guide = TRUE,
  missing_treatment = "listwise",
  minimum_categories = 2
)

Arguments

data

The data as a data frame.

rater1

The variable containing the ratings assigned by the first rater.

rater2

The variable containing the ratings assigned by the second rater.

kappa_type

Type of kappa statistic to calculate ("cohen", the default, gives unweighted Cohen's kappa); a minimal computation sketch follows this argument list

confidence_level

Confidence level for confidence intervals

ci_method

Method for calculating confidence intervals

bootstrap_samples

Number of bootstrap samples for CI estimation

exact_agreement

Calculate overall and category-specific agreement percentages

marginal_homogeneity

Test whether marginal distributions are equal (Stuart-Maxwell test)

agreement_plot

Create agreement plot showing observed vs expected agreement

confusion_matrix

Display confusion matrix with agreement patterns

category_analysis

Detailed analysis of agreement for each category

interpretation_guide

Provide clinical interpretation of kappa values

missing_treatment

How to handle missing data

minimum_categories

Minimum number of categories required for analysis
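
For orientation, below is a minimal sketch, in base R, of how the unweighted Cohen's kappa (the default kappa_type) and a percentile bootstrap confidence interval can be computed. This is illustrative only, not the module's internal implementation; the rating vectors are hypothetical.

# Unweighted Cohen's kappa from two rating vectors (hypothetical data)
r1 <- factor(c("benign", "benign", "malignant", "benign", "malignant", "malignant"))
r2 <- factor(c("benign", "malignant", "malignant", "benign", "malignant", "benign"))

cohen_kappa <- function(x, y) {
  tab <- table(x, y)                              # confusion matrix
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                       # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2   # chance-expected agreement
  (po - pe) / (1 - pe)
}

cohen_kappa(r1, r2)

# Percentile bootstrap CI, analogous to resampling bootstrap_samples times
set.seed(42)
boots <- replicate(1000, {
  i <- sample(seq_along(r1), replace = TRUE)
  cohen_kappa(r1[i], r2[i])
})
quantile(boots, c(0.025, 0.975))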

Value

A results object containing:

results$todo: a html
results$summary: a html
results$kappaTable: a table
results$agreementStats: a table
results$confusionMatrix: a table
results$categoryStats: a table
results$marginalTest: a table
results$bootstrapResults: a table
results$agreementPlot: an image
results$confusionHeatmap: an image
results$interpretationGuide: a html
results$technicalNotes: a html

Tables can be converted to data frames with asDF or as.data.frame. For example:

results$kappaTable$asDF

as.data.frame(results$kappaTable)
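
A hypothetical end-to-end call (the data frame ratings and its column names are illustrative, not part of the package):

results <- cohenskappa(
  data = ratings,
  rater1 = "pathologist1",
  rater2 = "pathologist2"
)

# Kappa estimates and confidence intervals as a data frame
results$kappaTable$asDF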