Receiver Operating Characteristic (ROC) curve analysis with optimal cutpoint determination.
Usage
psychopdaroc(
data,
dependentVars,
classVar,
positiveClass,
subGroup,
method = "maximize_metric",
metric = "youden",
direction = ">=",
specifyCutScore = "",
tol_metric = 0.05,
break_ties = "mean",
allObserved = FALSE,
boot_runs = 0,
usePriorPrev = FALSE,
priorPrev = 0.5,
costratioFP = 1,
sensSpecTable = FALSE,
showThresholdTable = FALSE,
maxThresholds = 20,
delongTest = FALSE,
plotROC = TRUE,
combinePlots = TRUE,
cleanPlot = FALSE,
showOptimalPoint = TRUE,
displaySE = FALSE,
smoothing = FALSE,
showConfidenceBands = FALSE,
legendPosition = "right",
directLabel = FALSE,
interactiveROC = FALSE,
showCriterionPlot = FALSE,
showPrevalencePlot = FALSE,
showDotPlot = FALSE,
precisionRecallCurve = FALSE,
partialAUC = FALSE,
partialAUCfrom = 0.8,
partialAUCto = 1,
rocSmoothingMethod = "none",
bootstrapCI = FALSE,
bootstrapReps = 2000,
quantileCIs = FALSE,
quantiles = "0.1,0.25,0.5,0.75,0.9",
compareClassifiers = FALSE,
calculateIDI = FALSE,
calculateNRI = FALSE,
refVar,
nriThresholds = "",
idiNriBootRuns = 1000
)
Arguments
- data
The data as a data frame.
- dependentVars
Test variable(s) to be evaluated for classification performance. Multiple variables can be selected for comparison.
- classVar
Binary classification variable representing the true class (gold standard). Must have exactly two levels.
- positiveClass
Specifies which level of the class variable should be treated as the positive class.
- subGroup
Optional grouping variable for stratified analysis. ROC curves will be calculated separately for each group.
- method
Method for determining the optimal cutpoint. Different methods optimize different aspects of classifier performance.
- metric
Metric to optimize when determining the cutpoint. Only applies to maximize/minimize methods.
- direction
Direction of classification relative to the cutpoint. Use '>=' when higher test values indicate the positive class.
- specifyCutScore
Specific cutpoint value to use when method is set to 'Manual cutpoint'.
- tol_metric
Tolerance for the metric value when multiple cutpoints yield similar performance. Cutpoints within this tolerance are considered equivalent.
- break_ties
Method for handling ties when multiple cutpoints achieve the same metric value.
- allObserved
Display performance metrics for all observed test values as potential cutpoints, not just the optimal cutpoint.
- boot_runs
Number of bootstrap iterations for methods using bootstrapping. Set to 0 to disable bootstrapping.
- usePriorPrev
Use a specified prior prevalence instead of the sample prevalence for calculating predictive values.
- priorPrev
Population prevalence to use for predictive value calculations. Only used when usePriorPrev = TRUE.
- costratioFP
Relative cost of false positives compared to false negatives. Values > 1 penalize false positives more heavily.
- sensSpecTable
Display detailed confusion matrices at optimal cutpoints.
- showThresholdTable
Display detailed table with performance metrics at multiple thresholds.
- maxThresholds
Maximum number of threshold values to show in the threshold table.
- delongTest
Perform DeLong's test for comparing AUCs between multiple test variables. Requires at least two test variables.
- plotROC
Display ROC curves for visual assessment of classifier performance.
- combinePlots
When multiple test variables are selected, combine all ROC curves in a single plot.
- cleanPlot
Create clean ROC curves without annotations, suitable for publications.
- showOptimalPoint
Display the optimal cutpoint on the ROC curve.
- displaySE
Display standard error bands on ROC curves (when LOESS smoothing is applied).
- smoothing
Apply LOESS smoothing to ROC curves for visualization.
- showConfidenceBands
Display confidence bands around the ROC curve.
- legendPosition
Position of the legend in plots with multiple ROC curves.
- directLabel
Label curves directly on the plot instead of using a legend.
- interactiveROC
Create an interactive HTML ROC plot (requires plotROC package).
- showCriterionPlot
Plot showing how sensitivity and specificity change across different thresholds.
- showPrevalencePlot
Plot showing how PPV and NPV change with disease prevalence.
- showDotPlot
Dot plot showing the distribution of test values by class.
- precisionRecallCurve
Display precision-recall curves alongside ROC curves.
- partialAUC
Calculate AUC for a specific region of the ROC curve.
- partialAUCfrom
Lower bound of specificity range for partial AUC calculation.
- partialAUCto
Upper bound of specificity range for partial AUC calculation.
- rocSmoothingMethod
Method for smoothing the ROC curve (requires pROC package).
- bootstrapCI
Calculate bootstrap confidence intervals for AUC and optimal cutpoints.
- bootstrapReps
Number of bootstrap replications for confidence interval calculation.
- quantileCIs
Display confidence intervals at specific quantiles of the test variable.
- quantiles
Comma-separated list of quantiles (0-1) at which to display confidence intervals.
- compareClassifiers
Perform comprehensive comparison of classifier performance metrics.
- calculateIDI
Calculate Integrated Discrimination Improvement for model comparison.
- calculateNRI
Calculate Net Reclassification Index for model comparison.
- refVar
Reference test variable for IDI and NRI calculations. Other variables will be compared against this reference.
- nriThresholds
Comma-separated probability thresholds (0-1) defining risk categories for NRI. Leave empty for continuous NRI.
- idiNriBootRuns
Number of bootstrap iterations for IDI and NRI confidence intervals.
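A minimal sketch of a typical call, assuming the package providing psychopdaroc is installed and loaded; the data frame mydata and the variable names marker1, marker2, diagnosis, and the level "case" are hypothetical placeholders:

```r
# Hypothetical example: compare two biomarkers against a binary diagnosis.
# 'mydata' must contain numeric columns marker1/marker2 and a two-level
# factor 'diagnosis' whose positive level is "case".
results <- psychopdaroc(
  data          = mydata,
  dependentVars = c("marker1", "marker2"),
  classVar      = "diagnosis",
  positiveClass = "case",
  method        = "maximize_metric",  # default: maximize the chosen metric
  metric        = "youden",           # Youden's J = sensitivity + specificity - 1
  delongTest    = TRUE                # AUC comparison; needs >= 2 test variables
)
```

With a single test variable, delongTest has no effect; leave it FALSE (the default) in that case.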
Value
A results object containing:
- results$instructions: a html
- results$procedureNotes: a html
- results$simpleResultsTable: a table
- results$resultsTable: an array of tables
- results$sensSpecTable: an array of htmls
- results$thresholdTable: a table
- results$aucSummaryTable: a table
- results$delongComparisonTable: a table
- results$delongTest: a preformatted
- results$plotROC: an array of images
- results$interactivePlot: an image
- results$criterionPlot: an array of images
- results$prevalencePlot: an array of images
- results$dotPlot: an array of images
- results$dotPlotMessage: a html
- results$precisionRecallPlot: an array of images
- results$idiTable: a table
- results$nriTable: a table
- results$partialAUCTable: a table
- results$bootstrapCITable: a table
- results$rocComparisonTable: a table
Tables can be converted to data frames with asDF or as.data.frame(). For example:

results$simpleResultsTable$asDF
as.data.frame(results$simpleResultsTable)
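For instance, to pull the cutpoint summary into a plain data frame for further processing (a sketch; results is assumed to come from an earlier psychopdaroc() call, and the file name is a placeholder):

```r
# Extract the summary table as an ordinary data frame and save it.
cutpoints <- results$simpleResultsTable$asDF
head(cutpoints)
write.csv(cutpoints, "cutpoints.csv", row.names = FALSE)
```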