This function works in a similar way to the shap() function from iBreakDown, but it calculates explanations for a set of observations and then aggregates them.
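To illustrate the idea, the sketch below computes a per-observation SHAP explanation with iBreakDown::shap() and then averages the contributions per variable by hand. This is a conceptual illustration only, not the package's internal code; it assumes iBreakDown's output format, where rows with B == 0 hold the contributions averaged over random paths.

library("DALEX")
library("iBreakDown")
set.seed(1313)
model <- glm(survived ~ gender + age + fare,
             data = titanic_imputed, family = "binomial")
exp_glm <- explain(model, data = titanic_imputed,
                   y = titanic_imputed$survived, label = "glm")
# one SHAP explanation per observation
shaps <- lapply(1:5, function(i)
  shap(exp_glm, titanic_imputed[i, ], B = 10))
# keep the averaged rows (B == 0, assumed output convention) and pool them
pooled <- do.call(rbind, lapply(shaps, function(s) s[s$B == 0, ]))
# mean contribution per variable across the set of observations
aggregate(contribution ~ variable_name, data = pooled, FUN = mean)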
Usage
shap_aggregated(explainer, new_observations, order = NULL, B = 25, ...)
Arguments
explainer: a model to be explained, preprocessed by the explain function
new_observations: a set of new observations with columns that correspond to variables used in the model.
order: if not NULL, a fixed order in which the variables are explained. It can be a numeric vector or a character vector with variable names.
B: number of random paths used to compute the SHAP values.
...: other parameters, such as label, predict_function, data, x.
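For example, order and B can be set together. The call below is a sketch that reuses the explain_titanic_glm explainer built in the Examples section: it fixes the order of variables and reduces the number of random paths for a faster, rougher estimate.

# assumes explain_titanic_glm from the Examples section below
shap_fixed <- shap_aggregated(explain_titanic_glm,
                              titanic_imputed[1:10, ],
                              order = c("fare", "age", "gender"),
                              B = 10)
plot(shap_fixed)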
Returns
an object of the shap_aggregated class.
Examples
library("DALEX")set.seed(1313)model_titanic_glm <- glm(survived ~ gender + age + fare, data = titanic_imputed, family ="binomial")explain_titanic_glm <- explain(model_titanic_glm, data = titanic_imputed, y = titanic_imputed$survived, label ="glm")bd_glm <- shap_aggregated(explain_titanic_glm, titanic_imputed[1:10,])bd_glm
plot(bd_glm, max_features =3)
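As with other DALEX explanation plots, plot() here typically returns a ggplot2 object, so it can be customized with standard ggplot2 layers; a small sketch under that assumption:

library("ggplot2")
# assuming plot() returns a ggplot2 object, as is usual for DALEX plots
plot(bd_glm, max_features = 3) +
  ggtitle("Aggregated SHAP for the first 10 Titanic passengers")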
References
Biecek P., Burzykowski T. (2021). Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models. Chapman and Hall/CRC. https://ema.drwhy.ai