ppv function

Positive predictive value

These functions calculate the ppv() (positive predictive value) of a measurement system compared to a reference result (the "truth" or gold standard). Highly related functions are spec(), sens(), and npv().

ppv(data, ...)

## S3 method for class 'data.frame'
ppv(
  data,
  truth,
  estimate,
  prevalence = NULL,
  estimator = NULL,
  na_rm = TRUE,
  case_weights = NULL,
  event_level = yardstick_event_level(),
  ...
)

ppv_vec(
  truth,
  estimate,
  prevalence = NULL,
  estimator = NULL,
  na_rm = TRUE,
  case_weights = NULL,
  event_level = yardstick_event_level(),
  ...
)

Arguments

  • data: Either a data.frame containing the columns specified by the truth and estimate arguments, or a table/matrix where the true class results should be in the columns of the table.

  • ...: Not currently used.

  • truth: The column identifier for the true class results (that is a factor). This should be an unquoted column name although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a factor vector.

  • estimate: The column identifier for the predicted class results (that is also a factor). As with truth, this can be specified in different ways, but the primary method is to use an unquoted variable name. For _vec() functions, a factor vector.

  • prevalence: A numeric value for the rate of the "positive" class of the data.

  • estimator: One of: "binary", "macro", "macro_weighted", or "micro" to specify the type of averaging to be done. "binary" is only relevant for the two class case. The other three are general methods for calculating multiclass metrics. The default will automatically choose "binary" or "macro" based on estimate.

  • na_rm: A logical value indicating whether NA values should be stripped before the computation proceeds.

  • case_weights: The optional column identifier for case weights. This should be an unquoted column name that evaluates to a numeric column in data. For _vec() functions, a numeric vector, hardhat::importance_weights(), or hardhat::frequency_weights(). (See the sketch after this list.)

  • event_level: A single string. Either "first" or "second" to specify which level of truth to consider as the "event". This argument is only applicable when estimator = "binary". The default uses an internal helper that defaults to "first".
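
As a quick sketch of the case_weights argument with a _vec() function (the weight values below are invented purely for illustration):

library(yardstick)
data("two_class_example")

# Made-up frequency weights, one per row
wts <- hardhat::frequency_weights(rep(1:2, length.out = nrow(two_class_example)))

ppv_vec(
  two_class_example$truth,
  two_class_example$predicted,
  case_weights = wts
)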

Returns

A tibble with columns .metric, .estimator, and .estimate and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For ppv_vec(), a single numeric value (or NA).

Details

The positive predictive value (ppv()) is defined as the percent of predicted positives that are actually positive, while the negative predictive value (npv()) is defined as the percent of predicted negatives that are actually negative.
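
As a minimal sketch of this definition (the factor vectors are invented for illustration), three of the four predicted positives below are truly positive, so the PPV is 3/4:

library(yardstick)

truth    <- factor(c("Class1", "Class1", "Class1", "Class2", "Class2"))
estimate <- factor(c("Class1", "Class1", "Class1", "Class1", "Class2"))

# 4 samples are predicted "Class1"; 3 of them are truly "Class1"
ppv_vec(truth, estimate)  # 0.75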

Relevant Level

There is no common convention on which factor level should automatically be considered the "event" or "positive" result when computing binary classification metrics. In yardstick, the default is to use the first level. To alter this, change the argument event_level to "second" to consider the last level of the factor the level of interest. For multiclass extensions involving one-vs-all comparisons (such as macro averaging), this option is ignored and the "one" level is always the relevant result.
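
As a sketch, using the package's two_class_example data, these two calls both treat "Class2" as the event and return the same value:

library(yardstick)
data("two_class_example")

# Flip the event with the argument...
ppv_vec(
  two_class_example$truth,
  two_class_example$predicted,
  event_level = "second"
)

# ...or reorder the factor levels so "Class2" comes first
ppv_vec(
  relevel(two_class_example$truth, "Class2"),
  relevel(two_class_example$predicted, "Class2")
)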

Multiclass

Macro, micro, and macro-weighted averaging are available for this metric. The default is to select macro averaging if a truth factor with more than 2 levels is provided. Otherwise, a standard binary calculation is done. See vignette("multiclass", "yardstick") for more information.
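
For instance, a brief sketch of the estimator choices on the package's four-class hpc_cv data:

library(yardstick)
data("hpc_cv")

# obs has more than 2 levels, so "macro" is selected automatically...
ppv(hpc_cv, obs, pred)

# ...but another averaging scheme can be requested explicitly
ppv(hpc_cv, obs, pred, estimator = "micro")
ppv(hpc_cv, obs, pred, estimator = "macro_weighted")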

Implementation

Suppose a 2x2 table with notation:

          Reference
Predicted Positive Negative
Positive  A        B
Negative  C        D

The formulas used here are:

Sensitivity = A / (A + C)
Specificity = D / (B + D)
Prevalence = (A + C) / (A + B + C + D)
PPV = (Sensitivity * Prevalence) / ((Sensitivity * Prevalence) + ((1 - Specificity) * (1 - Prevalence)))
NPV = (Specificity * (1 - Prevalence)) / (((1 - Sensitivity) * Prevalence) + (Specificity * (1 - Prevalence)))
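
A hand computation with invented counts shows that, when the prevalence is estimated from the same table, this formula reduces to the direct A / (A + B) calculation:

# Invented 2x2 counts, purely for illustration
A <- 40; B <- 10; C <- 20; D <- 30

sens <- A / (A + C)                 # 2/3
spec <- D / (B + D)                 # 3/4
prev <- (A + C) / (A + B + C + D)   # 3/5

# PPV from the formula: 0.8, identical to A / (A + B)
(sens * prev) / ((sens * prev) + ((1 - spec) * (1 - prev)))

# Supplying a different prevalence (as the prevalence argument does)
# shifts the estimate
prev <- 0.40
(sens * prev) / ((sens * prev) + ((1 - spec) * (1 - prev)))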

See the references for discussions of the statistics.

Examples

# Two class
data("two_class_example")
ppv(two_class_example, truth, predicted)

# Multiclass
library(dplyr)
data(hpc_cv)

hpc_cv %>%
  filter(Resample == "Fold01") %>%
  ppv(obs, pred)

# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  ppv(obs, pred)

# Weighted macro averaging
hpc_cv %>%
  group_by(Resample) %>%
  ppv(obs, pred, estimator = "macro_weighted")

# Vector version
ppv_vec(
  two_class_example$truth,
  two_class_example$predicted
)

# Making Class2 the "relevant" level
ppv_vec(
  two_class_example$truth,
  two_class_example$predicted,
  event_level = "second"
)

# But what if we think that Class 1 only occurs 40% of the time?
ppv(two_class_example, truth, predicted, prevalence = 0.40)

References

Altman, D.G., Bland, J.M. (1994) "Diagnostic tests 2: predictive values," British Medical Journal, vol 309, 102.

See Also

Other class metrics: accuracy(), bal_accuracy(), detection_prevalence(), f_meas(), j_index(), kap(), mcc(), npv(), precision(), recall(), sens(), spec()

Other sensitivity metrics: npv(), sens(), spec()

Author(s)

Max Kuhn