Utilities for Scoring and Assessing Predictions
Absolute Error
Add coverage of central prediction intervals
Absolute Error of the Median (Quantile-based Version)
Absolute Error of the Median (Sample-based Version)
Display Number of Forecasts Available
Available metrics in scoringutils
Determines Bias of Quantile Forecasts
Determines Bias of Quantile Forecasts based on the range of the predic...
Determines bias of forecasts
Brier Score
Check Length
Check forecasts
Check whether the desired metrics are available in scoringutils
Check Variable is not NULL
Check Prediction Input For Lower-level Scoring Functions
Check that quantiles are valid
Check input parameters for summarise_scores()
Check Observed Value Input For Lower-level Scoring Functions
Collapse several messages to one
Compare Two Models Based on Subset of Common Forecasts
Correlation Between Metrics
Ranked Probability Score
Delete Columns From a Data.table
Dawid-Sebastiani Score
Find duplicate forecasts
Calculate Geometric Mean
Get unit of a single forecast
Get prediction type of a forecast
Get protected columns from a data frame
Get type of the target true values of a forecast
Infer metric for pairwise comparisons
Interval Score
Check whether object has been checked with check_forecasts()
Log transformation with an additive shift
Log Score for Binary outcomes
Logarithmic score
Determine dispersion of a probabilistic forecast
Make Rows NA in Data for Plotting
Merge Forecast Data And Observations
Do Pairwise Comparisons of Scores
Do Pairwise Comparison for one Set of Forecasts
Simple permutation test
Probability Integral Transformation (data.frame Format)
Probability Integral Transformation (sample-based version)
Visualise Where Forecasts Are Available
Plot Correlation Between Metrics
Create a Heatmap of a Scoring Metric
Plot Interval Coverage
Plot Heatmap of Pairwise Comparisons
PIT Histogram
Plot Predictions vs True Values
Plot Quantile Coverage
Plot Metrics by Range of the Prediction Interval
Plot Coloured Score Table
Plot Contributions to the Weighted Interval Score
Check if predictions are quantile forecasts
Print output from check_forecasts()
Quantile Score
Change Data from a Plain Quantile Format to a Long Range Format
Change Data from a Range Format to a Quantile Format
Change Data from a Sample Based Format to a Quantile Format
Change Data from a Sample Based Format to a Long Interval Range Format
Evaluate forecasts
Evaluate forecasts in a Binary Format
Evaluate forecasts in a Quantile-Based Format
Evaluate forecasts in a Sample-Based Format (Integer or Continuous)
scoringutils: Utilities for Scoring and Assessing Predictions
Squared Error of the Mean (Sample-based Version)
Set unit of a single forecast manually
Squared Error
Summarise scores as produced by score()
Scoringutils ggplot2 theme
Transform forecasts and observed values
Provides a collection of metrics and proper scoring rules (Tilmann Gneiting & Adrian E Raftery (2007) <doi:10.1198/016214506000001437>; Jordan, A., Krüger, F., & Lerch, S. (2019) <doi:10.18637/jss.v090.i12>) within a consistent framework for the evaluation, comparison and visualisation of forecasts. In addition to proper scoring rules, functions are provided to assess the bias, sharpness and calibration (Sebastian Funk, Anton Camacho, Adam J. Kucharski, Rachel Lowe, Rosalind M. Eggo, W. John Edmunds (2019) <doi:10.1371/journal.pcbi.1006785>) of forecasts. Several types of predictions (e.g. binary, discrete, continuous), which may come in different formats (e.g. forecasts represented by predictive samples or by quantiles of the predictive distribution), can be evaluated. Scoring metrics can either be used through a convenient data.frame format or be applied as individual functions in a vector/matrix format. All functionality has been implemented with a focus on performance and is robustly tested. Find more information about the package in the accompanying paper (<doi:10.48550/arXiv.2205.07090>).
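The two usage modes mentioned above (scoring via a data.frame versus calling individual metric functions on vectors/matrices) can be sketched as follows; this is a minimal example assuming the pre-2.0 scoringutils interface, in which `score()`, `summarise_scores()`, `crps_sample()` and the bundled `example_quantile` data set are exported:

```r
library(scoringutils)

# data.frame workflow: score the bundled quantile-based example forecasts,
# then aggregate the resulting scores by model
scores <- score(example_quantile)
summarise_scores(scores, by = "model")

# vector/matrix workflow: apply an individual metric function directly.
# Rows are forecast targets, columns are predictive samples.
true_values <- rnorm(30)
predictions <- matrix(rnorm(30 * 200), nrow = 30)
crps_sample(true_values, predictions)
```

Both routes use the same underlying metric implementations; the data.frame route additionally handles matching forecasts to observations and grouping, while the vector/matrix route is convenient inside other pipelines.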
Useful links