Interpretable Machine Learning
Plot Shapley
Plot Tree Surrogate
Predict LocalModel
Predict Tree Surrogate
Predictor object
Turn class probabilities into class labels
Prediction explanations with game theory
Decision tree surrogate model
Compute ALE for 1 categorical feature
Compute ALE for 2 features, one numerical and one categorical
Compute ALE for 2 numerical features
Compute ALE for 1 numerical feature
Extract glmnet effects
Effect of a feature on predictions
Effect of a feature on predictions
Feature importance
Returns TRUE if the object has a predict function
Make machine learning models and predictions interpretable
Impute missing cells of grid
Feature interactions
Interpretation Method
LocalModel
Order levels of a categorical feature
Effect of one or two feature(s) on the model predictions (deprecated)
Plot FeatureEffect
Plot FeatureEffect
Plot Feature Importance
Plot Interaction
Plot Local Model
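The entries above map onto a common workflow: wrap a fitted model in a Predictor object, fit an interpretation method on it, and then call plot() or predict() on the result. Below is a minimal sketch of that workflow, assuming the randomForest and MASS packages are installed (the Boston housing data and its column names come from MASS; LocalModel additionally requires glmnet):

library(iml)
library(randomForest)

data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Predictor object: bundles the model with the data used to analyze it
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Decision tree surrogate model, with its plot and predict methods
tree <- TreeSurrogate$new(predictor, maxdepth = 2)
plot(tree)
head(predict(tree, newdata = X))

# Local model (a 'lime' variant) for one observation, also with plot and predict
local_mod <- LocalModel$new(predictor, x.interest = X[1, ])
plot(local_mod)
predict(local_mod, newdata = X[1:2, ])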
Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <doi:10.48550/arXiv.1801.01489>, accumulated local effects plots described by Apley (2018) <doi:10.48550/arXiv.1612.08468>, partial dependence plots described by Friedman (2001) <https://www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman and Popescu (2008) <doi:10.1214/07-AOAS148>, and tree surrogate models.
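A short sketch of some of the methods named in this description, under the same assumptions as above (randomForest and the MASS Boston data); the chosen feature and loss are illustrative, not defaults:

library(iml)
library(randomForest)

data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Accumulated local effects of one numerical feature (Apley 2018)
ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
plot(ale)

# Shapley values explaining a single prediction (Strumbelj et al. 2014)
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)

# Strength of pairwise feature interactions (Friedman's H-statistic)
ia <- Interaction$new(predictor)
plot(ia)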
Useful links