specificity estimates the specificity (a.k.a. selectivity, or true negative rate, TNR) for a nominal/categorical predicted-observed dataset.
selectivity is an alternative to specificity().
TNR is an alternative to specificity().
FPR estimates the false positive rate (a.k.a. fall-out or false alarm rate) for a nominal/categorical predicted-observed dataset.
specificity(data = NULL, obs, pred, atom = FALSE, pos_level = 2, tidy = FALSE, na.rm = TRUE)

selectivity(data = NULL, obs, pred, atom = FALSE, pos_level = 2, tidy = FALSE, na.rm = TRUE)

TNR(data = NULL, obs, pred, atom = FALSE, pos_level = 2, tidy = FALSE, na.rm = TRUE)

FPR(data = NULL, obs, pred, atom = FALSE, pos_level = 2, tidy = FALSE, na.rm = TRUE)
Arguments
data: (Optional) argument to call an existing data frame containing the data.
obs: Vector with observed values (character | factor).
pred: Vector with predicted values (character | factor).
atom: Logical operator (TRUE/FALSE) to decide if the estimate is made for each class (atom = TRUE) or at a global level (atom = FALSE). Default: FALSE. When the dataset is binomial, atom does not apply.
pos_level: Integer, for binary cases, indicating the order (1|2) of the level corresponding to the positive class. Generally, the positive level is the second (2) since, following alphanumeric order, the most common pairs are (Negative | Positive), (0 | 1), and (FALSE | TRUE). Default: 2.
tidy: Logical operator (TRUE/FALSE) to decide the type of return. TRUE returns a data.frame, FALSE returns a list. Default: FALSE.
na.rm: Logical argument to remove rows with missing values (NA). Default: na.rm = TRUE.
Returns
An object of class numeric within a list (if tidy = FALSE) or within a data frame (if tidy = TRUE).
Details
The specificity (or selectivity, or true negative rate, TNR) is a non-normalized coefficient that represents the ratio of correctly predicted negative cases (true negatives, TN, for binary cases) to the total number of actual observations not belonging to a given class (actual negatives, N, for binary cases).
For binomial cases, specificity = TN / (TN + FP) = TN / N
The specificity metric is bounded between 0 and 1. The closer to 1 the better. Values towards zero indicate low performance. For multinomial cases, it can be either estimated for each particular class or at a global level.
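The binary formula above can be illustrated with a minimal base-R sketch (independent of the package itself), computing specificity by hand from a confusion matrix. The level names "neg" and "pos" are illustrative; "positive" is taken as the second factor level, matching the pos_level = 2 default:

```r
# Minimal base-R sketch: specificity from a two-class confusion matrix.
# Level order c("neg", "pos") makes "pos" the second level (pos_level = 2).
obs  <- factor(c("neg", "neg", "pos", "pos", "neg", "pos"),
               levels = c("neg", "pos"))
pred <- factor(c("neg", "pos", "pos", "neg", "neg", "pos"),
               levels = c("neg", "pos"))

cm <- table(pred, obs)          # rows = predicted, columns = observed
TN <- cm["neg", "neg"]          # correctly predicted negatives
FP <- cm["pos", "neg"]          # actual negatives predicted as positive
spec <- TN / (TN + FP)          # specificity = TN / N
spec                            # 2 of the 3 actual negatives were caught
```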
The metrica package offers three identical alternative functions that do the same job: i) specificity, ii) selectivity, and iii) TNR. However, note that when using metrics_summary, only the specificity alternative is used.
The false positive rate (or false alarm rate, or fall-out) is the complement of the specificity, representing the ratio of the number of false positives (FP) to the actual number of negatives (N). The FPR formula is:
FPR = 1 − specificity = 1 − TNR = FP / N
The FPR is bounded between 0 and 1. The closer to 0 the better. Low performance is indicated with FPR > 0.5.
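The complement relationship can be checked directly in base R with the same hand-built confusion matrix idea (level names are illustrative):

```r
# FPR as the complement of specificity, from a two-class confusion matrix.
obs  <- factor(c("neg", "neg", "pos", "pos", "neg", "pos"),
               levels = c("neg", "pos"))
pred <- factor(c("neg", "pos", "pos", "neg", "neg", "pos"),
               levels = c("neg", "pos"))

cm  <- table(pred, obs)
FP  <- cm["pos", "neg"]         # actual negatives predicted as positive
N   <- sum(cm[, "neg"])         # all actual negatives
fpr <- FP / N                   # FPR = FP / N
spec <- cm["neg", "neg"] / N    # specificity = TN / N

fpr                             # equals 1 - spec
```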
set.seed(123)
# Two-class
binomial_case <- data.frame(labels = sample(c("True", "False"), 100, replace = TRUE),
                            predictions = sample(c("True", "False"), 100, replace = TRUE))
# Multi-class
multinomial_case <- data.frame(labels = sample(c("Red", "Blue", "Green"), 100, replace = TRUE),
                               predictions = sample(c("Red", "Blue", "Green"), 100, replace = TRUE))

# Get specificity and FPR estimates for the two-class case
specificity(data = binomial_case, obs = labels, pred = predictions, tidy = TRUE)
FPR(data = binomial_case, obs = labels, pred = predictions, tidy = TRUE)

# Get specificity estimate for each class of the multi-class case
specificity(data = multinomial_case, obs = labels, pred = predictions, atom = TRUE, tidy = TRUE)

# Get specificity estimate for the multi-class case at a global level
specificity(data = multinomial_case, obs = labels, pred = predictions, tidy = TRUE)
References
Ting K.M. (2017) Sensitivity and Specificity. In: Sammut C., Webb G.I. (eds) Encyclopedia of Machine Learning and Data Mining.