metric_set() allows you to combine multiple metric functions together into a new function that calculates all of them at once.
metric_set(...)
Arguments
...: The bare names of the functions to be included in the metric set.
Details
All functions must belong to one of the following groups:
Only numeric metrics
A mix of class metrics or class prob metrics
A mix of dynamic, integrated, and static survival metrics
For instance, rmse() can be used with mae() because they are both numeric metrics, but not with accuracy() because accuracy() is a classification metric. However, accuracy() (a class metric) can be used with roc_auc() (a class probability metric).
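A minimal sketch of this rule using metrics shipped with yardstick (the incompatible combination is expected to error when the set is created, so it is wrapped in try()):

library(yardstick)

# Numeric metrics combine cleanly
numeric_set <- metric_set(rmse, mae)

# Numeric and class metrics cannot be mixed; this errors at creation time
try(metric_set(rmse, accuracy))

# Class and class probability metrics can be mixed
class_set <- metric_set(accuracy, roc_auc)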
The returned metric function will have a different argument list depending on whether numeric metrics or a mix of class/prob metrics were passed in.
When mixing class and class prob metrics, pass in the hard predictions
(the factor column) as the named argument estimate, and the soft predictions (the class probability columns) as bare column names or tidyselect selectors to ....
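A minimal sketch of this calling convention, using the two_class_example data set bundled with yardstick:

library(yardstick)

# accuracy() needs the hard class predictions, roc_auc() the class probabilities
mixed_metrics <- metric_set(accuracy, roc_auc)

# Probability columns go to `...`; the factor column is named explicitly
mixed_metrics(two_class_example, truth = truth, Class1, estimate = predicted)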
When mixing dynamic, integrated, and static survival metrics, pass in the time predictions as the named argument estimate, and the survival predictions as bare column names or tidyselect selectors to ....
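A sketch of the survival case, assuming the lung_surv example data used with yardstick's survival metrics; its surv_obj, .pred, and .pred_time column names are assumptions here:

library(yardstick)

# brier_survival() is a dynamic metric (needs survival probability predictions),
# concordance_survival() is a static metric (needs a predicted event time)
surv_metrics <- metric_set(brier_survival, concordance_survival)

# The list column of survival probabilities goes to `...`;
# the predicted time is named explicitly as `estimate`
surv_metrics(lung_surv, truth = surv_obj, .pred, estimate = .pred_time)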
If metric_tweak() has been used to "tweak" one of these arguments, like estimator or event_level, then the tweaked version wins. This allows you to set the estimator on a metric by metric basis and still use it in a metric_set().
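For instance (a sketch; "macro_weighted" is one of roc_auc()'s multiclass estimator options), a tweaked metric carries its own estimator into the set:

library(yardstick)
library(dplyr)

# Fix the multiclass estimator for roc_auc() ahead of time
roc_auc_macro_w <- metric_tweak("roc_auc_macro_w", roc_auc, estimator = "macro_weighted")

multi <- metric_set(accuracy, roc_auc_macro_w)

hpc_cv %>%
  group_by(Resample) %>%
  multi(obs, VF:L, estimate = pred)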
Examples
library(yardstick)
library(dplyr)

# Multiple regression metrics
multi_metric <- metric_set(rmse, rsq, ccc)

# The returned function has arguments:
# fn(data, truth, estimate, na_rm = TRUE, ...)
multi_metric(solubility_test, truth = solubility, estimate = prediction)

# Groups are respected on the new metric function
class_metrics <- metric_set(accuracy, kap)

hpc_cv %>%
  group_by(Resample) %>%
  class_metrics(obs, estimate = pred)

# ---------------------------------------------------------------------------
# If you need to set options for certain metrics,
# do so by wrapping the metric and setting the options inside the wrapper,
# passing along truth and estimate as quoted arguments.
# Then add on the function class of the underlying wrapped function,
# and the direction of optimization.

ccc_with_bias <- function(data, truth, estimate, na_rm = TRUE, ...) {
  ccc(
    data = data,
    truth = !!rlang::enquo(truth),
    estimate = !!rlang::enquo(estimate),
    # set bias = TRUE
    bias = TRUE,
    na_rm = na_rm,
    ...
  )
}

# Use `new_numeric_metric()` to formalize this new metric function
ccc_with_bias <- new_numeric_metric(ccc_with_bias, "maximize")

multi_metric2 <- metric_set(rmse, rsq, ccc_with_bias)

multi_metric2(solubility_test, truth = solubility, estimate = prediction)

# ---------------------------------------------------------------------------
# A class probability example:

# Note that, when given class or class prob functions,
# metric_set() returns a function with signature:
# fn(data, truth, ..., estimate)
# to be able to mix class and class prob metrics.

# You must provide the `estimate` column by explicitly naming
# the argument.
class_and_probs_metrics <- metric_set(roc_auc, pr_auc, accuracy)

hpc_cv %>%
  group_by(Resample) %>%
  class_and_probs_metrics(obs, VF:L, estimate = pred)