
Comprehensive explainable AI toolkit for interpreting machine learning models in medical and clinical applications. Provides attention maps, feature importance, SHAP values, and other interpretability methods.

Usage

explainableai(
  data,
  analysis_type = "feature_importance",
  features,
  target_var,
  model_predictions,
  image_paths,
  attention_maps,
  shap_method = "kernel_explainer",
  lime_method = "tabular",
  n_samples = 100,
  n_features = 20,
  plot_type = "summary",
  overlay_original = TRUE,
  confidence_level = 0.95,
  background_samples = 100,
  perturbation_method = "random",
  clustering_method = "none",
  interaction_analysis = FALSE,
  local_explanations = TRUE,
  global_explanations = TRUE,
  save_explanations = FALSE,
  explanation_path = "",
  attention_threshold = 0.1,
  colormap = "viridis"
)

Arguments

data

the data as a data frame

analysis_type

type of explainability analysis to perform

features

features/variables for importance analysis

target_var

target variable or model predictions

model_predictions

variable containing model predictions or probabilities

image_paths

variable containing paths to image files

attention_maps

variable containing attention map data or file paths

shap_method

SHAP explainer method based on model type

lime_method

LIME explanation method for different data types

n_samples

number of samples to use for explanation analysis

n_features

number of top important features to display

plot_type

type of visualization for explanations

overlay_original

whether to overlay attention maps on the original images

confidence_level

confidence level for statistical intervals

background_samples

number of background samples for SHAP baseline

perturbation_method

method for perturbing features in permutation importance

clustering_method

method for grouping similar features

interaction_analysis

whether to analyze pairwise feature interactions

local_explanations

whether to create explanations for individual samples

global_explanations

whether to create overall (global) model explanations

save_explanations

whether to save explanation data for external use

explanation_path

file path to save explanation results

attention_threshold

minimum attention value to display in visualizations

colormap

color palette for explanation visualizations

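The sketch below shows how several of these arguments combine in a single call. It assumes a data frame clinical_data with the named predictor, outcome, and prediction columns; the column names and the "shap" analysis type are illustrative only.

# Illustrative SHAP-style call combining several of the arguments above.
# clinical_data, the column names, and analysis_type = "shap" are assumptions.
explainableai(
  data = clinical_data,
  analysis_type = "shap",
  features = c("age", "tumor_size", "grade"),
  target_var = "outcome",
  model_predictions = "predicted_prob",
  shap_method = "kernel_explainer",
  background_samples = 100,  # baseline samples for the SHAP explainer
  n_features = 10,           # display the 10 most important features
  confidence_level = 0.95,
  plot_type = "summary",
  colormap = "viridis"
)
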
Value

A results object containing:

results$overview: a html
results$featureimportance$importancetable: Ranked list of feature importance scores
results$featureimportance$importanceplot: Bar plot of feature importance scores
results$shapanalysis$shapvaluestable: Mean absolute SHAP values per feature
results$shapanalysis$shapwaterfalltable: Individual prediction explanations
results$shapanalysis$shapinteractiontable: Feature interaction effects
results$shapanalysis$shapsummaryplot: Overview of SHAP values for all features
results$shapanalysis$shapwaterfallplot: Individual prediction explanation
results$shapanalysis$shapinteractionplot: Feature interaction heatmap
results$limeanalysis$limeexplanationtable: Local explanations for individual predictions
results$limeanalysis$limeplot: Local explanation visualization
results$attentionanalysis$attentionstatstable: Summary statistics of attention patterns
results$attentionanalysis$attentionpeakstable: Highest attention regions across samples
results$attentionanalysis$attentionheatmapplot: Attention map overlays on original images
results$attentionanalysis$attentiondistributionplot: Distribution of attention values
results$partialdependence$pdptable: Partial dependence effect sizes
results$partialdependence$pdpplot: Effect of individual features on predictions
results$partialdependence$iceplot: Individual conditional expectation curves
results$globalexplanations$modelinsightstable: Overall model behavior insights
results$globalexplanations$featureclusteringtable: Groups of similar features
results$globalexplanations$globalinsightplot: Overall model behavior visualization
results$globalexplanations$featureclusterplot: Dendrogram or cluster plot of features
results$localexplanations$samplewiseexplanationtable: Detailed explanations for individual samples
results$localexplanations$localexplanationplot: Individual sample explanation examples
results$validationmetrics$validationtable: Quality metrics for explanations
results$validationmetrics$stabilityanalysistable: Consistency of explanations across perturbations
results$validationmetrics$validationplot: Validation metrics visualization
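
A minimal sketch of retrieving individual components from the returned object, assuming the result is assigned to a variable and its elements are accessed with $ exactly as listed above:

# Run the default feature importance analysis and pull out components.
res <- explainableai(
  data = model_data,
  analysis_type = "feature_importance",
  features = c("feature1", "feature2", "feature3"),
  model_predictions = "predictions_var"
)

res$featureimportance$importancetable   # ranked feature importance scores
res$featureimportance$importanceplot    # bar plot of those scores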

Examples

# SHAP analysis for feature importance
explainableai(
    data = model_data,
    model_predictions = "predictions_var",
    features = c("feature1", "feature2", "feature3"),
    method = "shap",
    plot_type = "summary"
)

# Attention map analysis for image models
explainableai(
    data = image_data,
    image_paths = "image_path_var",
    attention_maps = "attention_var",
    method = "attention_analysis",
    overlay_original = TRUE
)
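
# Additional sketches: LIME local explanations and partial dependence.
# The analysis_type values "lime" and "partial_dependence" are assumed
# for illustration; check them against the values the function accepts.

# LIME local explanations for tabular data
explainableai(
    data = model_data,
    analysis_type = "lime",
    features = c("feature1", "feature2", "feature3"),
    model_predictions = "predictions_var",
    lime_method = "tabular",
    local_explanations = TRUE,
    n_samples = 100
)

# Partial dependence of predictions on selected features
explainableai(
    data = model_data,
    analysis_type = "partial_dependence",
    features = c("feature1", "feature2"),
    model_predictions = "predictions_var",
    global_explanations = TRUE
)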