scoringutils 1.2.2 package

Utilities for Scoring and Assessing Predictions

abs_error

Absolute Error

add_coverage

Add coverage of central prediction intervals

ae_median_quantile

Absolute Error of the Median (Quantile-based Version)

ae_median_sample

Absolute Error of the Median (Sample-based Version)

avail_forecasts

Display Number of Forecasts Available

available_metrics

Available metrics in scoringutils

bias_quantile

Determines Bias of Quantile Forecasts

bias_range

Determines Bias of Quantile Forecasts based on the range of the prediction intervals

bias_sample

Determines Bias of Forecasts

brier_score

Brier Score

check_equal_length

Check Length

check_forecasts

Check forecasts

check_metrics

Check whether the desired metrics are available in scoringutils

check_not_null

Check Variable is not NULL

check_predictions

Check Prediction Input For Lower-level Scoring Functions

check_quantiles

Check that quantiles are valid

check_summary_params

Check input parameters for summarise_scores()

check_true_values

Check Observed Value Input For Lower-level Scoring Functions

collapse_messages

Collapse several messages to one

compare_two_models

Compare Two Models Based on Subset of Common Forecasts

correlation

Correlation Between Metrics

crps_sample

(Continuous) Ranked Probability Score

delete_columns

Delete Columns From a Data.table

dss_sample

Dawid-Sebastiani Score

find_duplicates

Find duplicate forecasts

geom_mean_helper

Calculate Geometric Mean

get_forecast_unit

Get unit of a single forecast

get_prediction_type

Get prediction type of a forecast

get_protected_columns

Get protected columns from a data frame

get_target_type

Get type of the target true values of a forecast

infer_rel_skill_metric

Infer metric for pairwise comparisons

interval_score

Interval Score

is_scoringutils_check

Check whether object has been checked with check_forecasts()

log_shift

Log transformation with an additive shift

logs_binary

Log Score for Binary Outcomes

logs_sample

Logarithmic Score

mad_sample

Determine dispersion of a probabilistic forecast

make_NA

Make Rows NA in Data for Plotting

merge_pred_and_obs

Merge Forecast Data And Observations

pairwise_comparison

Do Pairwise Comparisons of Scores

pairwise_comparison_one_group

Do Pairwise Comparison for one Set of Forecasts

permutation_test

Simple permutation test

pit

Probability Integral Transformation (data.frame Format)

pit_sample

Probability Integral Transformation (sample-based version)

plot_avail_forecasts

Visualise Where Forecasts Are Available

plot_correlation

Plot Correlation Between Metrics

plot_heatmap

Create a Heatmap of a Scoring Metric

plot_interval_coverage

Plot Interval Coverage

plot_pairwise_comparison

Plot Heatmap of Pairwise Comparisons

plot_pit

PIT Histogram

plot_predictions

Plot Predictions vs True Values

plot_quantile_coverage

Plot Quantile Coverage

plot_ranges

Plot Metrics by Range of the Prediction Interval

plot_score_table

Plot Coloured Score Table

plot_wis

Plot Contributions to the Weighted Interval Score

prediction_is_quantile

Check if predictions are quantile forecasts

print.scoringutils_check

Print output from check_forecasts()

quantile_score

Quantile Score

quantile_to_range_long

Change Data from a Plain Quantile Format to a Long Range Format

range_long_to_quantile

Change Data from a Range Format to a Quantile Format

sample_to_quantile

Change Data from a Sample Based Format to a Quantile Format

sample_to_range_long

Change Data from a Sample Based Format to a Long Interval Range Format

score

Evaluate forecasts

score_binary

Evaluate forecasts in a Binary Format

score_quantile

Evaluate forecasts in a Quantile-Based Format

score_sample

Evaluate forecasts in a Sample-Based Format (Integer or Continuous)

scoringutils-package

scoringutils: Utilities for Scoring and Assessing Predictions

se_mean_sample

Squared Error of the Mean (Sample-based Version)

set_forecast_unit

Set unit of a single forecast manually

squared_error

Squared Error

summarise_scores

Summarise scores as produced by score()

theme_scoringutils

Scoringutils ggplot2 theme

transform_forecasts

Transform forecasts and observed values

Provides a collection of metrics and proper scoring rules (Tilmann Gneiting & Adrian E Raftery (2007) <doi:10.1198/016214506000001437>; Jordan, A., Krüger, F., & Lerch, S. (2019) <doi:10.18637/jss.v090.i12>) within a consistent framework for the evaluation, comparison and visualisation of forecasts. In addition to proper scoring rules, functions are provided to assess bias, sharpness and calibration (Sebastian Funk, Anton Camacho, Adam J. Kucharski, Rachel Lowe, Rosalind M. Eggo, W. John Edmunds (2019) <doi:10.1371/journal.pcbi.1006785>) of forecasts. Several types of predictions (e.g. binary, discrete, continuous), which may come in different formats (e.g. forecasts represented by predictive samples or by quantiles of the predictive distribution), can be evaluated. Scoring metrics can be used either through a convenient data.frame format or applied as individual functions to vectors and matrices. All functionality has been implemented with a focus on performance and is robustly tested. Find more information about the package in the accompanying paper (<doi:10.48550/arXiv.2205.07090>).
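As a minimal sketch of the two usage modes described above, the data.frame workflow chains the index's check_forecasts(), score() and summarise_scores() functions, while lower-level metrics such as interval_score() operate directly on vectors. This assumes the example_quantile dataset shipped with the package and its column layout (true_value, prediction, quantile, model); details may differ between package versions:

```r
library(scoringutils)

# Data.frame workflow: validate, score, then aggregate by chosen columns
check_forecasts(example_quantile)   # prints a summary of the input format
scores <- score(example_quantile)   # computes all applicable metrics
summarise_scores(scores, by = c("model", "target_type"))

# Vector-based use of an individual metric: the interval score for
# two observations and their 90% prediction intervals
interval_score(
  true_values    = c(48, 52),
  lower          = c(40, 41),
  upper          = c(55, 57),
  interval_range = 90
)
```

The `by` argument of summarise_scores() controls the grouping level; passing the full forecast unit instead returns one row of scores per individual forecast.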

  • Maintainer: Nikos Bosse
  • License: MIT + file LICENSE
  • Last published: 2023-11-29