Building and Training Neural Networks
Accumulated Local Effect Plot (ALE)
Visualize training of Neural Network
Average pooling layer
'cito': Building and training neural networks
CNN
Returns list of parameters the neural network model currently has in u...
Returns list of parameters the neural network model currently has in u...
Calculate average conditional effects
Creation of customized learning rate scheduler objects
Creation of customized optimizer objects
Configure hyperparameter tuning
Continues training of a model generated with dnn or cnn for additi...
Convolutional layer
CNN architecture
DNN
Embeddings
List of specials -- taken from enum.R
Linear layer
Maximum pooling layer
Partial Dependence Plot (PDP)
Plot the CNN architecture
Plot the CNN architecture
Creates graph plot which gives an overview of the network architecture...
Predict from a fitted cnn model
Predict from a fitted dnn model
Print pooling layer
Print class citoarchitecture
Print class citocnn
Print class citodnn
Print average conditional effects
Print conv layer
Print linear layer
Print pooling layer
Print method for class summary.citodnn
Print transfer model
Extract Model Residuals
Data Simulation for CNN
Summary citocnn
Summarize Neural Network of class citodnn
Combine a list of formula terms as a sum
Transfer learning
Tune hyperparameters
The 'cito' package provides a user-friendly interface for training and interpreting deep neural networks (DNNs). 'cito' simplifies fitting DNNs by supporting the familiar formula syntax and hyperparameter tuning under cross-validation, and it helps detect and handle convergence problems. DNNs can be trained on the CPU, on GPUs, and on macOS GPUs. In addition, 'cito' offers many downstream functionalities, such as explainable AI (xAI) metrics (e.g. variable importance, partial dependence plots, accumulated local effect plots, and effect estimates) for interpreting trained DNNs. Optionally, 'cito' provides confidence intervals (and p-values) for all xAI metrics and predictions. At the same time, 'cito' is computationally efficient because it builds on the deep learning framework 'torch'. The 'torch' package is native to R, so no Python installation or other API is required for this package.
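A minimal sketch of the workflow described above, using the formula interface and the xAI functions listed in this index (`dnn()`, `summary()`, `PDP()`, `ALE()`, `predict()`). The hidden-layer sizes, epoch count, and the `variable` argument names are illustrative assumptions and may differ between package versions:

```r
library(cito)

# Fit a small DNN with the familiar formula syntax
# (hidden layer sizes and epochs are illustrative choices)
nn <- dnn(Sepal.Length ~ ., data = iris, hidden = c(10L, 10L), epochs = 50L)

# Summarize the fitted network, including effect estimates
summary(nn)

# Interpret the trained DNN with partial dependence and
# accumulated local effect plots for a single predictor
PDP(nn, variable = "Petal.Length")
ALE(nn, variable = "Petal.Length")

# Predict from the fitted model
predict(nn, newdata = iris[1:5, ])
```

Because 'torch' runs natively in R, this example requires no Python installation; the backend is downloaded once when 'torch' is first loaded.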