Local Interpretable Model-Agnostic Explanations
Indicate model type to lime
Default function to tokenize text
Load an example image explanation
Load an example text explanation
Explain model predictions
Interactive explanations
lime: Local Interpretable Model-Agnostic Explanations
Create a model explanation function based on training data
Methods for extending lime's model support
Plot a condensed overview of all explanations
Plot the features in an explanation
Display image explanations as superpixel areas
Test superpixel segmentation
Segment image into superpixels
Plot text explanations
When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of that point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.
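As an illustration of the typical workflow, the sketch below trains a classifier on the built-in iris data, builds an explainer with lime(), explains a few held-out observations with explain(), and plots the result with plot_features(). The use of caret's train() with method = "rf" and the particular n_labels/n_features values are assumptions made for this example, not requirements of the package.

library(caret)   # used here only to supply a trained classifier
library(lime)

# Hold out a few rows to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Train any model lime supports (a caret model is used here)
model <- train(iris_train, iris_lab, method = "rf")

# Create an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Explain the held-out observations: a local model is fitted around
# each point using perturbations; keep the top 2 features per case
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualise the weight of each selected feature per prediction
plot_features(explanation)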
Useful links