Power Analysis for Inter-rater Agreement Studies
Source: R/kappasizepower.b.R
Performs power analysis to determine the required sample size for detecting a specified improvement in inter-rater agreement (kappa coefficient). This function helps researchers design adequately powered studies to validate training programs, standardized protocols, or other interventions aimed at improving agreement between raters.
Details
The function calculates the sample size needed to detect a difference between two kappa values (kappa0 vs. kappa1) at a specified power and significance level. It supports 2-5 outcome categories and 2-5 raters, delegating the power calculations for each scenario to the kappaSize package.
Key requirements:
kappa1 must be greater than kappa0 (the alternative hypothesis must represent an improvement)
Proportions must sum to 1.0 and match the number of outcome categories
Power should be at least 0.50, and is typically set to 0.80 or higher (see the validation sketch after this list)
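These constraints can be verified before running the analysis. The helper below is a minimal, hypothetical sketch (check_kappa_power_inputs is not part of ClinicoPath) that simply mirrors the requirements above:

# Hypothetical pre-flight check mirroring the requirements above;
# shown for illustration only, not part of the package.
check_kappa_power_inputs <- function(kappa0, kappa1, props, power) {
  stopifnot(
    kappa1 > kappa0,            # alternative must represent improvement
    abs(sum(props) - 1) < 1e-8, # proportions must sum to 1.0
    power >= 0.50               # minimum acceptable power
  )
  invisible(TRUE)
}

check_kappa_power_inputs(0.40, 0.60, c(0.30, 0.70), 0.80)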
Super classes
jmvcore::Analysis
-> ClinicoPath::kappaSizePowerBase
-> kappaSizePowerClass
Methods
Inherited methods
jmvcore::Analysis$.createImage()
jmvcore::Analysis$.createImages()
jmvcore::Analysis$.createPlotObject()
jmvcore::Analysis$.load()
jmvcore::Analysis$.render()
jmvcore::Analysis$.save()
jmvcore::Analysis$.savePart()
jmvcore::Analysis$.setCheckpoint()
jmvcore::Analysis$.setParent()
jmvcore::Analysis$.setReadDatasetHeaderSource()
jmvcore::Analysis$.setReadDatasetSource()
jmvcore::Analysis$.setResourcesPathSource()
jmvcore::Analysis$.setStatePathSource()
jmvcore::Analysis$addAddon()
jmvcore::Analysis$asProtoBuf()
jmvcore::Analysis$asSource()
jmvcore::Analysis$check()
jmvcore::Analysis$init()
jmvcore::Analysis$optionsChangedHandler()
jmvcore::Analysis$postInit()
jmvcore::Analysis$print()
jmvcore::Analysis$readDataset()
jmvcore::Analysis$run()
jmvcore::Analysis$serialize()
jmvcore::Analysis$setError()
jmvcore::Analysis$setStatus()
jmvcore::Analysis$translate()
ClinicoPath::kappaSizePowerBase$initialize()
Examples
if (FALSE) { # \dontrun{
# Basic binary outcome power analysis
result <- kappaSizePower(
  outcome = "2",
  kappa0 = 0.40,        # Current agreement level
  kappa1 = 0.60,        # Target agreement level
  props = "0.30, 0.70", # Expected proportions
  raters = "2",         # Number of raters
  alpha = 0.05,         # Significance level
  power = 0.80          # Desired power
)

# Medical diagnosis validation study
result <- kappaSizePower(
  outcome = "2",
  kappa0 = 0.50,        # Baseline moderate agreement
  kappa1 = 0.75,        # Target good agreement
  props = "0.25, 0.75", # 25% disease prevalence
  raters = "2",
  alpha = 0.05,
  power = 0.80
)

# Multi-category severity assessment
result <- kappaSizePower(
  outcome = "3",
  kappa0 = 0.55,              # Current moderate agreement
  kappa1 = 0.75,              # Target good agreement
  props = "0.20, 0.50, 0.30", # Mild, moderate, severe
  raters = "3",
  alpha = 0.05,
  power = 0.85
)
} # }
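For orientation, the first example corresponds to a direct call to the kappaSize package, which this function wraps. The snippet below assumes kappaSize::PowerBinary with its documented argument names; whether props takes a single prevalence or a full proportion vector in the binary case should be confirmed against the kappaSize documentation:

# Direct use of kappaSize for the binary scenario above (assumed interface)
library(kappaSize)
PowerBinary(
  kappa0 = 0.40, # null (current) agreement
  kappa1 = 0.60, # alternative (target) agreement
  props = 0.30,  # anticipated prevalence; confirm expected form in kappaSize docs
  raters = 2,
  alpha = 0.05,
  power = 0.80
)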