compares error values among different calibration models. A boxplot is created from the n error values that were obtained during the n-times repeated cross-validation procedure. Several error values are implemented and can be compared:
discrimination error = sensitivity, specificity, accuracy, AUC (when discrimination=TRUE); calibration error = ECE, MCE, RMSE, class 0 CLE, class 1 CLE (when discrimination=FALSE). For the calculation of the errors, see the respective methods listed in the "See Also" section.
list_models: list object that contains all error values for all trained calibration models. For the specific format, see the calling function visualize_calibratR.
discrimination: boolean (TRUE or FALSE). If TRUE, discrimination errors are compared between models; if FALSE, calibration errors are compared. Default: TRUE
Returns
An object of class list, with the following components:
if discrimination=TRUE - sens: ggplot2 boxplot that compares all evaluated calibration models with regard to sensitivity.
spec: ggplot2 boxplot that compares all evaluated calibration models with regard to specificity
acc: ggplot2 boxplot that compares all evaluated calibration models with regard to accuracy
auc: ggplot2 boxplot that compares all evaluated calibration models with regard to AUC
list_errors: list object that contains all discrimination error values that were used to construct the boxplots
if discrimination=FALSE - ece: ggplot2 boxplot that compares all evaluated calibration models with regard to expected calibration error (ECE)
mce: ggplot2 boxplot that compares all evaluated calibration models with regard to maximum calibration error (MCE)
rmse: ggplot2 boxplot that compares all evaluated calibration models with regard to root mean square error (RMSE)
cle_0: ggplot2 boxplot that compares all evaluated calibration models with regard to class 0 classification error (CLE)
cle_1: ggplot2 boxplot that compares all evaluated calibration models with regard to class 1 classification error (CLE)
list_errors: list object that contains all calibration error values that were used to construct the boxplots
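As a minimal sketch (not the package's actual implementation), one of the calibration-error boxplots described above could be built from a list of per-model error vectors as follows; the model names and ECE values are purely illustrative:

```r
library(ggplot2)

# Hypothetical ECE values from an n = 10 times repeated cross-validation,
# one numeric vector per calibration model (illustrative data only)
list_errors <- list(
  original = runif(10, 0.10, 0.20),
  platt    = runif(10, 0.05, 0.12),
  GUESS    = runif(10, 0.04, 0.10)
)

# Reshape into long format: one row per (model, error value) pair
df <- data.frame(
  model = rep(names(list_errors), lengths(list_errors)),
  ece   = unlist(list_errors, use.names = FALSE)
)

# One boxplot per calibration model, shown side by side for comparison
p <- ggplot(df, aes(x = model, y = ece)) +
  geom_boxplot() +
  labs(x = "calibration model", y = "expected calibration error (ECE)")
print(p)
```

The same reshaping pattern would apply to the other returned plots (mce, rmse, cle_0, cle_1, or the discrimination errors), only with a different error vector per model.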