Prediction Explanation with Dependence-Aware Shapley Values
AICc formula for several sets, alternative definition
Temporary function for computing the full AICc with several X's, etc.
Apply dummy variables - this is an internal function intended only to ...
Checks that two extracted feature lists have exactly the same properties
Correction term with trace_input in AICc formula
Make all conditional inference trees
Explain the output of machine learning models with more accurately estimated Shapley values
Define feature combinations, and fetch additional information about each unique combination
Get feature matrix
Transforms a sample to standardized normal distribution
Transforms new data to standardized normal (dimension 1) based on other data transformations
Fetches feature information from a given data set
Helper function used in explain.combined
Fetches feature information from a given model object
Provides a data.table with the supported models
Computing single H matrix in AICc-function using the Mahalanobis distance
Transforms new data to a standardized normal distribution
(Generalized) Mahalanobis distance
Initiate the making of dummy variables
Check that the type of model is supported by the explanation method
Generate permutations of training data using test observations
Get imputed data
Plot of the Shapley value explanations
Generate predictions for different model classes
Calculate Shapley weights for test data
Generate data used for predictions
Process (check and update) data according to specified feature list
Function for computing sigma_hat_sq
Helper function to sample a combination of training and testing rows, ...
Sample conditional variables using the Gaussian copula approach
Sample ctree variables from a given conditional inference tree
Sample conditional Gaussian variables
Calculate Shapley weight
Create an explainer object with Shapley weights for test data.
Updates data by reference according to the updater argument.
Calculate weighted matrix
Calculate weight matrix
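To make the entries above on the (generalized) Mahalanobis distance and the Shapley weights more concrete, here is a brief sketch of the two quantities. The notation is an assumption on my part, following the standard generalized Mahalanobis distance and the Kernel SHAP weighting that the method builds on; it is not copied from the package documentation.

```latex
% Generalized Mahalanobis distance between a feature vector x and a
% reference point mu, scaled by a covariance matrix Sigma (restricted to
% the conditioning feature subset when used for conditional weighting):
D_{\Sigma}(x, \mu) = \sqrt{(x - \mu)^\top \Sigma^{-1} (x - \mu)}

% Kernel SHAP weight for a feature subset S out of M features
% (0 < |S| < M), used when weighting the feature combinations:
k(M, |S|) = \frac{M - 1}{\binom{M}{|S|}\, |S|\, (M - |S|)}
```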
Complex machine learning models are often hard to interpret. However, in many situations it is crucial to understand and explain why a model made a specific prediction. Shapley values are the only prediction explanation framework with a solid theoretical foundation. Previously known methods for estimating Shapley values do, however, assume that the features are independent. This package implements the method described in Aas, Jullum and Løland (2019) <arXiv:1903.10464>, which accounts for any feature dependence and thereby produces more accurate estimates of the true Shapley values.
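The workflow implied by the index above can be sketched as follows. This is a minimal, hedged example assuming the classic shapr interface in which shapr() creates the explainer object and explain() estimates the dependence-aware Shapley values; the data set, the xgboost model, and the argument names (approach, prediction_zero) are illustrative assumptions and should be checked against the package help pages.

```r
# Minimal usage sketch (assumed interface; see the package help pages).
library(xgboost)
library(shapr)

# Illustrative data: a few numeric features and a numeric response.
data("Boston", package = "MASS")
x_var <- c("lstat", "rm", "dis", "indus")
y_var <- "medv"
x_train <- as.matrix(Boston[-(1:6), x_var])
y_train <- Boston[-(1:6), y_var]
x_test  <- as.matrix(Boston[1:6, x_var])

# Any supported model class can be used; xgboost is just an example.
model <- xgboost(data = x_train, label = y_train, nrounds = 20, verbose = FALSE)

# Create an explainer object with Shapley weights for the test data.
explainer <- shapr(x_train, model)

# Explain the predictions while accounting for feature dependence;
# "gaussian" is one of the conditional sampling approaches listed above.
explanation <- explain(
  x_test,
  explainer,
  approach = "gaussian",
  prediction_zero = mean(y_train)
)

# Plot the Shapley value explanations.
plot(explanation)
```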
Useful links