Introduction to Decision Panel Optimization
meddecide Development Team
2025-06-30
Introduction
The Decision Panel Optimization module in the meddecide
package provides a comprehensive framework for optimizing diagnostic
test combinations in medical decision-making. This vignette introduces
the basic concepts and demonstrates core functionality.
Key Concepts
Testing Strategies
When multiple diagnostic tests are available, they can be combined in different ways:
- Single Testing: Use individual tests independently
- Parallel Testing: Perform multiple tests simultaneously (illustrated in the sketch after this list)
  - ANY rule (OR): Positive if any test is positive
  - ALL rule (AND): Positive only if all tests are positive
  - MAJORITY rule: Positive if the majority of tests are positive
- Sequential Testing: Perform tests in sequence based on previous results
  - Stop on first positive
  - Confirmatory (require multiple positives)
  - Exclusion (require multiple negatives)
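As a quick illustration of how the parallel combination rules behave, the following base-R sketch applies the ANY, ALL, and MAJORITY rules to hypothetical results from three tests. The vectors below are made up for illustration and are not package output.
# Hypothetical positive/negative calls for three tests (TRUE = positive)
test_a <- c(TRUE,  TRUE, FALSE, FALSE)
test_b <- c(TRUE, FALSE,  TRUE, FALSE)
test_c <- c(FALSE, TRUE,  TRUE, FALSE)

any_rule      <- test_a | test_b | test_c        # positive if any test is positive
all_rule      <- test_a & test_b & test_c        # positive only if all tests are positive
majority_rule <- (test_a + test_b + test_c) >= 2 # positive if at least 2 of 3 are positive

data.frame(test_a, test_b, test_c, any_rule, all_rule, majority_rule)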
Optimization Criteria
The module can optimize test panels based on various criteria:
- Accuracy: Overall correct classification rate
- Sensitivity: Ability to detect disease (minimize false negatives)
- Specificity: Ability to rule out disease (minimize false positives)
- Predictive Values: PPV and NPV
- Cost-Effectiveness: Balance performance with resource utilization
- Utility: Custom utility functions incorporating the costs of errors (a minimal sketch follows this list)
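All of the criteria above can be derived from the counts of a 2x2 confusion matrix. The helper below is a minimal sketch of those formulas; it is not a package function, and the counts and cost values are illustrative (they match the parallel-testing example shown later in this vignette).
# Minimal sketch: compute the criteria listed above from 2x2 confusion-matrix
# counts. Not a package function; the cost values are illustrative only.
panel_metrics <- function(tp, fp, fn, tn, fp_cost = 1, fn_cost = 1) {
  n <- tp + fp + fn + tn
  list(
    accuracy    = (tp + tn) / n,
    sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp),
    ppv         = tp / (tp + fp),
    npv         = tn / (tn + fn),
    # a simple cost-weighted criterion: expected misclassification cost per patient
    expected_error_cost = (fp * fp_cost + fn * fn_cost) / n
  )
}

panel_metrics(tp = 134, fp = 25, fn = 1, tn = 573, fp_cost = 500, fn_cost = 5000)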
Installation and Loading
# Install meddecide package
install.packages("meddecide")
# Or install from GitHub
devtools::install_github("ClinicoPath/meddecide")
# Load required packages; ClinicoPath provides the meddecide functions used below
library(ClinicoPath)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(ggplot2)
library(rpart)
library(rpart.plot)
library(knitr)
library(forcats)
Basic Example: COVID-19 Screening
Let’s start with a simple example using COVID-19 screening data:
# Examine the data structure
str(covid_screening_data)
#> 'data.frame': 1000 obs. of 8 variables:
#> $ patient_id : int 1 2 3 4 5 6 7 8 9 10 ...
#> $ rapid_antigen: Factor w/ 2 levels "Negative","Positive": 1 2 1 1 1 1 1 1 1 1 ...
#> $ pcr : Factor w/ 2 levels "Negative","Positive": 1 2 NA NA 1 1 NA 1 1 NA ...
#> $ chest_ct : Factor w/ 2 levels "Normal","Abnormal": 2 1 1 1 1 1 1 1 1 1 ...
#> $ symptom_score: num 8 6 1 1 5 5 5 4 2 5 ...
#> $ covid_status : Factor w/ 2 levels "Negative","Positive": 2 2 1 1 1 1 1 1 1 1 ...
#> $ age : num 35 33 39 28 62 32 64 23 58 36 ...
#> $ risk_group : Factor w/ 3 levels "High","Low","Medium": 2 2 2 2 2 3 3 3 2 2 ...
# Check disease prevalence
table(covid_screening_data$covid_status)
#>
#> Negative Positive
#> 851 149
prop.table(table(covid_screening_data$covid_status))
#>
#> Negative Positive
#> 0.851 0.149
Running a Basic Analysis
# Basic decision panel analysis
covid_panel <- decisionpanel(
data = covid_screening_data,
tests = c("rapid_antigen", "pcr", "chest_ct"),
testLevels = c("Positive", "Positive", "Abnormal"),
gold = "covid_status",
goldPositive = "Positive",
strategies = "all",
optimizationCriteria = "accuracy"
)
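Before looking at combinations, it is worth knowing how each test performs on its own (see Best Practices below). The following base-R sketch computes standalone sensitivity and specificity per test; it is independent of decisionpanel(), and patients with missing PCR results are dropped by table().
# Standalone sensitivity/specificity of each test (base R, not decisionpanel() output).
# Missing PCR results are excluded by table() by default.
single_test_metrics <- function(test_positive, disease_positive) {
  tab <- table(Predicted = test_positive, Actual = disease_positive)
  c(sensitivity = tab["TRUE", "TRUE"] / sum(tab[, "TRUE"]),
    specificity = tab["FALSE", "FALSE"] / sum(tab[, "FALSE"]))
}

actual <- covid_screening_data$covid_status == "Positive"

rbind(
  rapid_antigen = single_test_metrics(covid_screening_data$rapid_antigen == "Positive", actual),
  pcr           = single_test_metrics(covid_screening_data$pcr == "Positive", actual),
  chest_ct      = single_test_metrics(covid_screening_data$chest_ct == "Abnormal", actual)
)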
Understanding Testing Strategies
Parallel Testing Example
# Simulate parallel testing with ANY rule
# Positive if rapid_antigen OR pcr is positive
parallel_any <- with(covid_screening_data,
rapid_antigen == "Positive" | pcr == "Positive"
)
# Create confusion matrix
conf_matrix_any <- table(
Predicted = parallel_any,
Actual = covid_screening_data$covid_status == "Positive"
)
print(conf_matrix_any)
#> Actual
#> Predicted FALSE TRUE
#> FALSE 573 1
#> TRUE 25 134
# Calculate metrics
sensitivity_any <- conf_matrix_any[2,2] / sum(conf_matrix_any[,2])
specificity_any <- conf_matrix_any[1,1] / sum(conf_matrix_any[,1])
cat("Parallel ANY Rule:\n")
#> Parallel ANY Rule:
cat(sprintf("Sensitivity: %.1f%%\n", sensitivity_any * 100))
#> Sensitivity: 99.3%
cat(sprintf("Specificity: %.1f%%\n", specificity_any * 100))
#> Specificity: 95.8%
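Note that the confusion matrix above covers only patients with a non-missing PCR result, because table() drops NAs. For comparison, the ALL (AND) rule on the same two tests trades sensitivity for specificity; the sketch below mirrors the code above.
# Simulate parallel testing with the ALL rule
# Positive only if rapid_antigen AND pcr are both positive
parallel_all <- with(covid_screening_data,
  rapid_antigen == "Positive" & pcr == "Positive"
)

conf_matrix_all <- table(
  Predicted = parallel_all,
  Actual = covid_screening_data$covid_status == "Positive"
)

sensitivity_all <- conf_matrix_all[2, 2] / sum(conf_matrix_all[, 2])
specificity_all <- conf_matrix_all[1, 1] / sum(conf_matrix_all[, 1])
cat(sprintf("Parallel ALL Rule:\nSensitivity: %.1f%%\nSpecificity: %.1f%%\n",
            sensitivity_all * 100, specificity_all * 100))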
Sequential Testing Example
# Simulate sequential testing
# Start with rapid test, only do PCR if rapid is positive
sequential_result <- rep("Negative", nrow(covid_screening_data))
# Those with positive rapid test
rapid_pos_idx <- which(covid_screening_data$rapid_antigen == "Positive")
# Among those, check PCR
sequential_result[rapid_pos_idx] <-
ifelse(covid_screening_data$pcr[rapid_pos_idx] == "Positive",
"Positive", "Negative")
# Create confusion matrix
conf_matrix_seq <- table(
Predicted = sequential_result == "Positive",
Actual = covid_screening_data$covid_status == "Positive"
)
print(conf_matrix_seq)
#> Actual
#> Predicted FALSE TRUE
#> FALSE 851 51
#> TRUE 0 98
# Calculate metrics
sensitivity_seq <- conf_matrix_seq[2,2] / sum(conf_matrix_seq[,2])
specificity_seq <- conf_matrix_seq[1,1] / sum(conf_matrix_seq[,1])
cat("\nSequential Testing:\n")
#>
#> Sequential Testing:
cat(sprintf("Sensitivity: %.1f%%\n", sensitivity_seq * 100))
#> Sensitivity: 65.8%
cat(sprintf("Specificity: %.1f%%\n", specificity_seq * 100))
#> Specificity: 100.0%
# Calculate cost savings
pcr_tests_saved <- sum(covid_screening_data$rapid_antigen == "Negative")
cat(sprintf("PCR tests saved: %d (%.1f%%)\n",
pcr_tests_saved,
pcr_tests_saved/nrow(covid_screening_data) * 100))
#> PCR tests saved: 876 (87.6%)
Cost-Effectiveness Analysis
When costs are considered, the optimal strategy may change:
# Analysis with costs
covid_panel_cost <- decisionpanel(
data = covid_screening_data,
tests = c("rapid_antigen", "pcr", "chest_ct"),
testLevels = c("Positive", "Positive", "Abnormal"),
gold = "covid_status",
goldPositive = "Positive",
strategies = "all",
optimizationCriteria = "utility",
useCosts = TRUE,
testCosts = "5,50,200", # Costs for each test
fpCost = 500, # Cost of false positive
fnCost = 5000 # Cost of false negative
)
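As a rough, hand-calculated check of how these cost inputs play out, the sketch below combines the confusion matrices computed earlier with the per-test and misclassification costs above. This is a back-of-the-envelope calculation for intuition only, not decisionpanel() output.
# Back-of-the-envelope expected cost per patient for two strategies,
# using the cost assumptions above (rapid = $5, PCR = $50,
# false positive = $500, false negative = $5000). Not decisionpanel() output.

# Parallel ANY: every patient receives both tests
# (denominator restricted to patients with a non-missing PCR result)
n_any    <- sum(conf_matrix_any)
fp_any   <- conf_matrix_any["TRUE", "FALSE"]
fn_any   <- conf_matrix_any["FALSE", "TRUE"]
cost_any <- (5 + 50) + (fp_any * 500 + fn_any * 5000) / n_any

# Sequential: rapid test for everyone, PCR only after a positive rapid test
n_all    <- nrow(covid_screening_data)
n_pcr    <- sum(covid_screening_data$rapid_antigen == "Positive")
fp_seq   <- conf_matrix_seq["TRUE", "FALSE"]
fn_seq   <- conf_matrix_seq["FALSE", "TRUE"]
cost_seq <- 5 + (50 * n_pcr) / n_all + (fp_seq * 500 + fn_seq * 5000) / n_all

cat(sprintf("Expected cost per patient - Parallel ANY: $%.1f, Sequential: $%.1f\n",
            cost_any, cost_seq))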
Visualization
Performance Comparison Plot
# Create performance comparison data (illustrative values)
strategies <- data.frame(
Strategy = c("Rapid Only", "PCR Only", "Parallel ANY", "Sequential"),
Sensitivity = c(65, 95, 98, 62),
Specificity = c(98, 99, 97, 99.9),
Cost = c(5, 50, 55, 15)
)
# Plot sensitivity vs specificity
ggplot(strategies, aes(x = 100 - Specificity, y = Sensitivity)) +
geom_point(aes(size = Cost), alpha = 0.6) +
geom_text(aes(label = Strategy), vjust = -1) +
scale_size_continuous(range = c(3, 10)) +
xlim(0, 5) + ylim(60, 100) +
labs(
title = "Testing Strategy Comparison",
x = "False Positive Rate (%)",
y = "Sensitivity (%)",
size = "Cost ($)"
) +
theme_minimal()
Decision Trees
Decision trees provide clear algorithms for clinical use:
# Generate decision tree
covid_tree <- decisionpanel(
data = covid_screening_data,
tests = c("rapid_antigen", "pcr", "chest_ct", "symptom_score"),
testLevels = c("Positive", "Positive", "Abnormal", ">5"),
gold = "covid_status",
goldPositive = "Positive",
createTree = TRUE,
treeMethod = "cart",
maxDepth = 3
)
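The tree requested above is built inside decisionpanel(). Because rpart and rpart.plot are already loaded, you can also fit and plot a comparable CART tree directly on the raw variables; the sketch below is an illustrative alternative, not the module's own tree.
# Illustrative CART tree fit directly with rpart (not the decisionpanel() tree)
manual_tree <- rpart(
  covid_status ~ rapid_antigen + pcr + chest_ct + symptom_score,
  data = covid_screening_data,
  method = "class",
  control = rpart.control(maxdepth = 3, cp = 0.01)
)

# Plot the fitted tree with class probabilities and node percentages
rpart.plot(manual_tree, type = 4, extra = 104)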
Interpreting the Tree
A typical decision tree output might look like:
1. Start with Rapid Antigen Test
├─ If Positive (2% of patients)
│ └─ Confirm with PCR
│ ├─ If Positive → COVID Positive (PPV: 95%)
│ └─ If Negative → COVID Negative (NPV: 98%)
└─ If Negative (98% of patients)
├─ If Symptoms > 5
│ └─ Perform Chest CT
│ ├─ If Abnormal → Perform PCR
│ └─ If Normal → COVID Negative
└─ If Symptoms ≤ 5 → COVID Negative
Advanced Features
Cross-Validation
Validate panel performance using k-fold cross-validation:
# Run with cross-validation
covid_panel_cv <- decisionpanel(
data = covid_screening_data,
tests = c("rapid_antigen", "pcr", "chest_ct"),
testLevels = c("Positive", "Positive", "Abnormal"),
gold = "covid_status",
goldPositive = "Positive",
crossValidate = TRUE,
nFolds = 5,
seed = 123
)
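To see what a cross-validated estimate represents, the base-R sketch below computes the parallel ANY rule's sensitivity separately in five folds. Because this fixed rule has nothing to tune, the folds only show sampling variability; decisionpanel() re-evaluates the candidate panels within each training fold.
# Manual 5-fold estimate of the parallel ANY rule's sensitivity (base R sketch;
# decisionpanel() performs its own cross-validation internally)
set.seed(123)
folds <- sample(rep(1:5, length.out = nrow(covid_screening_data)))

fold_sens <- sapply(1:5, function(k) {
  fold_data <- covid_screening_data[folds == k, ]
  predicted <- fold_data$rapid_antigen == "Positive" | fold_data$pcr == "Positive"
  actual    <- fold_data$covid_status == "Positive"
  # sensitivity among diseased patients with a defined (non-missing) prediction
  sum(predicted & actual, na.rm = TRUE) / sum(actual & !is.na(predicted))
})

round(fold_sens, 3)
mean(fold_sens)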
Bootstrap Confidence Intervals
Get uncertainty estimates for performance metrics:
# Run with bootstrap
covid_panel_boot <- decisionpanel(
data = covid_screening_data,
tests = c("rapid_antigen", "pcr", "chest_ct"),
testLevels = c("Positive", "Positive", "Abnormal"),
gold = "covid_status",
goldPositive = "Positive",
bootstrap = TRUE,
bootReps = 1000,
seed = 123
)
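The idea behind the bootstrap option can be illustrated with a short base-R percentile bootstrap of the ANY rule's sensitivity. decisionpanel() computes its intervals internally, so this sketch is only for intuition.
# Percentile bootstrap CI for the parallel ANY rule's sensitivity
# (base R sketch, not decisionpanel() output)
set.seed(123)
boot_sens <- replicate(1000, {
  idx <- sample(nrow(covid_screening_data), replace = TRUE)
  d   <- covid_screening_data[idx, ]
  predicted <- d$rapid_antigen == "Positive" | d$pcr == "Positive"
  actual    <- d$covid_status == "Positive"
  sum(predicted & actual, na.rm = TRUE) / sum(actual & !is.na(predicted))
})

quantile(boot_sens, c(0.025, 0.975))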
Best Practices
- Start Simple: Begin with individual test performance before combinations
- Consider Context: Screening vs. diagnosis requires different strategies
- Validate Results: Use cross-validation or separate test sets
- Include Costs: Real-world decisions must consider resources
- Think Sequentially: Often more efficient than parallel testing
- Set Constraints: Define minimum acceptable performance
- Interpret Clinically: Statistical optimality isn’t everything
Conclusion
The Decision Panel Optimization module provides a systematic approach to combining diagnostic tests. By considering various strategies, costs, and constraints, it helps identify practical testing algorithms that balance performance with resource utilization.
Next Steps
- See the “Clinical Applications” vignette for disease-specific examples
- Review “Advanced Optimization” for complex scenarios
- Check “Implementation Guide” for deploying algorithms in practice
Session Information
sessionInfo()
#> R version 4.3.2 (2023-10-31)
#> Platform: aarch64-apple-darwin20 (64-bit)
#> Running under: macOS 15.5
#>
#> Matrix products: default
#> BLAS: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> time zone: Europe/Istanbul
#> tzcode source: internal
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] forcats_1.0.0 knitr_1.50 rpart.plot_3.1.2
#> [4] rpart_4.1.24 ggplot2_3.5.2 dplyr_1.1.4
#> [7] ClinicoPath_0.0.3.33
#>
#> loaded via a namespace (and not attached):
#> [1] igraph_2.1.4 plotly_4.11.0 Formula_1.2-5
#> [4] cutpointr_1.2.1 rematch2_2.1.2 tidyselect_1.2.1
#> [7] vtree_5.1.9 lattice_0.22-7 stringr_1.5.1
#> [10] parallel_4.3.2 caret_7.0-1 dichromat_2.0-0.1
#> [13] png_0.1-8 cli_3.6.5 bayestestR_0.16.0
#> [16] askpass_1.2.1 arsenal_3.6.3 openssl_2.3.3
#> [19] ggeconodist_0.1.0 countrycode_1.6.1 pkgdown_2.1.3
#> [22] textshaping_1.0.1 purrr_1.0.4 officer_0.6.10
#> [25] stars_0.6-8 ggflowchart_1.0.0 broom.mixed_0.2.9.6
#> [28] curl_6.4.0 strucchange_1.5-4 mime_0.13
#> [31] evaluate_1.0.4 coin_1.4-3 V8_6.0.4
#> [34] stringi_1.8.7 pROC_1.18.5 backports_1.5.0
#> [37] desc_1.4.3 lmerTest_3.1-3 XML_3.99-0.18
#> [40] Exact_3.3 tinytable_0.7.0 lubridate_1.9.4
#> [43] httpuv_1.6.16 paletteer_1.6.0 magrittr_2.0.3
#> [46] rappdirs_0.3.3 splines_4.3.2 prodlim_2025.04.28
#> [49] KMsurv_0.1-6 r2rtf_1.1.4 BiasedUrn_2.0.12
#> [52] survminer_0.5.0 logger_0.4.0 epiR_2.0.84
#> [55] wk_0.9.4 networkD3_0.4.1 DT_0.33
#> [58] lpSolve_5.6.23 rootSolve_1.8.2.4 DBI_1.2.3
#> [61] terra_1.8-54 jquerylib_0.1.4 withr_3.0.2
#> [64] reformulas_0.4.1 class_7.3-23 systemfonts_1.2.3
#> [67] rprojroot_2.0.4 leaflegend_1.2.1 lmtest_0.9-40
#> [70] RefManageR_1.4.0 htmlwidgets_1.6.4 fs_1.6.6
#> [73] waffle_1.0.2 ggvenn_0.1.10 labeling_0.4.3
#> [76] gtsummary_2.2.0 cellranger_1.1.0 summarytools_1.1.4
#> [79] extrafont_0.19 lmom_3.2 zoo_1.8-14
#> [82] raster_3.6-32 ggcharts_0.2.1 gt_1.0.0
#> [85] timechange_0.3.0 foreach_1.5.2 patchwork_1.3.1
#> [88] visNetwork_2.1.2 grid_4.3.2 data.table_1.17.6
#> [91] timeDate_4041.110 gsDesign_3.6.8 pan_1.9
#> [94] psych_2.5.6 extrafontdb_1.0 DiagrammeR_1.0.11
#> [97] clintools_0.9.10.1 DescTools_0.99.60 lazyeval_0.2.2
#> [100] yaml_2.3.10 leaflet_2.2.2 useful_1.2.6.1
#> [103] easyalluvial_0.3.2 survival_3.8-3 crosstable_0.8.1
#> [106] lwgeom_0.2-14 RColorBrewer_1.1-3 tidyr_1.3.1
#> [109] progressr_0.15.1 tweenr_2.0.3 later_1.4.2
#> [112] microbenchmark_1.5.0 ggridges_0.5.6 codetools_0.2-20
#> [115] base64enc_0.1-3 jtools_2.3.0 labelled_2.14.1
#> [118] shape_1.4.6.1 estimability_1.5.1 gdtools_0.4.2
#> [121] data.tree_1.1.0 foreign_0.8-90 pkgconfig_2.0.3
#> [124] grafify_5.0.0.1 ggpubr_0.6.0 xml2_1.3.8
#> [127] performance_0.14.0 viridisLite_0.4.2 xtable_1.8-4
#> [130] bibtex_0.5.1 car_3.1-3 plyr_1.8.9
#> [133] httr_1.4.7 rbibutils_2.3 tools_4.3.2
#> [136] globals_0.17.0 hardhat_1.4.1 cols4all_0.8
#> [139] htmlTable_2.4.3 broom_1.0.8 checkmate_2.3.2
#> [142] nlme_3.1-168 survMisc_0.5.6 regions_0.1.8
#> [145] maptiles_0.10.0 crosstalk_1.2.1 assertthat_0.2.1
#> [148] lme4_1.1-37 digest_0.6.37 numDeriv_2016.8-1.1
#> [151] Matrix_1.6-1.1 tmap_4.1 furrr_0.3.1
#> [154] farver_2.1.2 tzdb_0.5.0 reshape2_1.4.4
#> [157] viridis_0.6.5 rapportools_1.2 ModelMetrics_1.2.2.2
#> [160] gghalves_0.1.4 glue_1.8.0 mice_3.18.0
#> [163] cachem_1.1.0 ggswim_0.1.0 polyclip_1.10-7
#> [166] UpSetR_1.4.0 Hmisc_5.2-3 generics_0.1.4
#> [169] visdat_0.6.0 classInt_0.4-11 stats4_4.3.2
#> [172] ggalluvial_0.12.5 mvtnorm_1.3-3 survey_4.4-2
#> [175] parallelly_1.45.0 ISOweek_0.6-2 mnormt_2.1.1
#> [178] here_1.0.1 ggmice_0.1.0 ragg_1.4.0
#> [181] fontBitstreamVera_0.1.1 carData_3.0-5 minqa_1.2.8
#> [184] httr2_1.1.2 giscoR_0.6.1 tcltk_4.3.2
#> [187] coefplot_1.2.8 eurostat_4.0.0 glmnet_4.1-9
#> [190] jmvcore_2.6.3 spacesXYZ_1.6-0 gower_1.0.2
#> [193] mitools_2.4 readxl_1.4.5 datawizard_1.1.0
#> [196] fontawesome_0.5.3 ggsignif_0.6.4 party_1.3-18
#> [199] gridExtra_2.3 shiny_1.10.0 lava_1.8.1
#> [202] tmaptools_3.2 parameters_0.26.0 arcdiagram_0.1.12
#> [205] rmarkdown_2.29 TidyDensity_1.5.0 pander_0.6.6
#> [208] scales_1.4.0 gld_2.6.7 future_1.40.0
#> [211] svglite_2.2.1 fontLiberation_0.1.0 DiagrammeRsvg_0.1
#> [214] ggpp_0.5.8-1 km.ci_0.5-6 rstudioapi_0.17.1
#> [217] cluster_2.1.8.1 janitor_2.2.1 hms_1.1.3
#> [220] anytime_0.3.11 colorspace_2.1-1 rlang_1.1.6
#> [223] jomo_2.7-6 s2_1.1.9 pivottabler_1.5.6
#> [226] ipred_0.9-15 ggforce_0.5.0 mgcv_1.9-1
#> [229] xfun_0.52 coda_0.19-4.1 e1071_1.7-16
#> [232] TH.data_1.1-3 modeltools_0.2-24 matrixStats_1.5.0
#> [235] benford.analysis_0.1.5 recipes_1.3.1 iterators_1.0.14
#> [238] emmeans_1.11.1 randomForest_4.7-1.2 abind_1.4-8
#> [241] tibble_3.3.0 libcoin_1.0-10 ggrain_0.0.4
#> [244] readr_2.1.5 Rdpack_2.6.4 promises_1.3.3
#> [247] sandwich_3.1-1 proxy_0.4-27 compiler_4.3.2
#> [250] leaflet.providers_2.0.0 boot_1.3-31 distributional_0.5.0
#> [253] tableone_0.13.2 polynom_1.4-1 listenv_0.9.1
#> [256] Rcpp_1.0.14 Rttf2pt1_1.3.12 fontquiver_0.2.1
#> [259] DataExplorer_0.8.3 datefixR_1.7.0 units_0.8-7
#> [262] MASS_7.3-60 uuid_1.2-1 insight_1.3.0
#> [265] R6_2.6.1 rstatix_0.7.2 fastmap_1.2.0
#> [268] multcomp_1.4-28 ROCR_1.0-11 vcd_1.4-13
#> [271] mitml_0.4-5 ggdist_3.3.3 nnet_7.3-20
#> [274] gtable_0.3.6 leafem_0.2.4 KernSmooth_2.23-26
#> [277] irr_0.84.1 gtExtras_0.6.0 htmltools_0.5.8.1
#> [280] tidyplots_0.2.2.9000 leafsync_0.1.0 lifecycle_1.0.4
#> [283] sf_1.0-21 zip_2.3.3 kableExtra_1.4.0
#> [286] pryr_0.1.6 nloptr_2.2.1 sass_0.4.10
#> [289] vctrs_0.6.5 flextable_0.9.9 snakecase_0.11.1
#> [292] haven_2.5.5 sp_2.2-0 future.apply_1.11.3
#> [295] bslib_0.9.0 pillar_1.10.2 magick_2.8.7
#> [298] moments_0.14.1 jsonlite_2.0.0 expm_1.0-0