Interpretable Neural Network Based on Generalized Additive Models
Autoplot method for neuralGAM objects (epistemic-only)
Build and compile a neural network feature model
Deviance of the model
Diagnostic plots to evaluate a fitted neuralGAM model
Derivative of the link function
Internal helper: combine epistemic and aleatoric uncertainties via mix...
Internal helper: combine epistemic and aleatoric via variance decomposition
Internal helper: compute uncertainty decomposition (epistemic / aleatoric)
Internal helper: joint predictive interval (both) via variance combine...
Internal helper: joint epistemic SE on link scale
Internal helper: MC Dropout forward sampling
Extract structured elements from a model formula
Install neuralGAM python requirements
Inverse of the link function
Link function
Derivative of the Inverse Link Function
neuralGAM: Interpretable Neural Network Based on Generalized Additive Models
Fit a neuralGAM model
Plot training loss history for a neuralGAM model
Visualization of a neuralGAM object with base graphics
Produces predictions from a fitted neuralGAM object
Short neuralGAM summary
Objects exported from other packages
Simulate Example Data for NeuralGAM
Summary of a neuralGAM model
Validate/resolve a Keras activation
Validate/resolve a Keras loss
Weights
Neural Additive Model framework based on the Generalized Additive Models of Hastie & Tibshirani (1990, ISBN:9780412343902), which trains a separate neural network to estimate the contribution of each feature to the response variable. The networks are trained with the local scoring and backfitting algorithms, which ensure that the fitted Generalized Additive Model converges and remains additive. The result is a highly accurate yet interpretable deep learning model, suitable for high-risk AI applications where decision-making must rest on accountable and interpretable algorithms.
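The backfitting idea described above can be sketched outside the package itself: each additive component f_j is repeatedly refit to the partial residuals left by the other components. This minimal Python sketch uses a polynomial least-squares smoother as a stand-in for the per-feature neural networks that neuralGAM trains; all names and the simulated data are illustrative, not part of the package API.

```python
import numpy as np

# Simulated additive ground truth: y = f1(x1) + f2(x2) + noise
rng = np.random.default_rng(0)
n = 2000
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = x1**2 + np.sin(x2) + rng.normal(0, 0.1, n)

def smooth(x, r, degree=5):
    """Stand-in smoother: fit partial residuals r with a polynomial in x.
    In neuralGAM this role is played by a feature-specific neural network."""
    coefs = np.polyfit(x, r, degree)
    return np.polyval(coefs, x)

alpha = y.mean()          # intercept
f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(10):       # backfitting iterations
    f1 = smooth(x1, y - alpha - f2)
    f1 -= f1.mean()       # center each component for identifiability
    f2 = smooth(x2, y - alpha - f1)
    f2 -= f2.mean()

fitted = alpha + f1 + f2
print("residual MSE:", float(np.mean((y - fitted) ** 2)))
```

Because the true function is additive and the noise standard deviation is 0.1, the residual mean squared error should settle near the noise variance (about 0.01); for non-Gaussian responses the package wraps this loop in local scoring on the link scale.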
Useful links