evaluate function

Evaluating prediction/modeling quality

evaluate is a generic function for evaluating the quality of time series prediction or modeling fitness based on a particular metric defined in an evaluating object. The function invokes particular methods which depend on the class of the first argument.

evaluate(obj, ...)

## S3 method for class 'evaluating'
evaluate(obj, test, pred, ...)

## S3 method for class 'fitness'
evaluate(obj, mdl, test = NULL, pred = NULL, ...)

## S3 method for class 'error'
evaluate(obj, mdl = NULL, test = NULL, pred = NULL, ..., fitness = FALSE)

Arguments

  • obj: An object of class evaluating defining a particular metric.

  • ...: Other parameters passed to eval_func of obj.

  • test: A vector or univariate time series containing actual values for a time series that are to be compared against pred.

  • pred: A vector or univariate time series containing time series predictions that are to be compared against the values in test.

  • mdl: A time series model object for which fitness is to be evaluated.

  • fitness: Should the function compute the fitness quality? If TRUE, the function uses mdl to compute the fitness error; otherwise it uses test and pred to compute the prediction error.

    For evaluate.fitness, test and pred are ignored and can be set to NULL. For evaluate.error, mdl is ignored if fitness is FALSE; otherwise, test and pred are ignored and can be set to NULL.
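The fitness argument thus selects which inputs evaluate.error consumes. A minimal sketch of the two call patterns, reusing the MSE_eval metric, the CATS data, and the ARIMA model from the Examples section (function and dataset names as provided by the TSPred and forecast packages):

```r
library(TSPred)
library(forecast)

data(CATS, CATS.cont)
mdl <- auto.arima(CATS[, 1])

# fitness = FALSE (the default): prediction error computed from test and pred
pred <- forecast(mdl, h = length(CATS.cont[, 1]))
evaluate(MSE_eval(), test = CATS.cont[, 1], pred = pred$mean)

# fitness = TRUE: fitness error computed from mdl; test and pred may stay NULL
evaluate(MSE_eval(), mdl, fitness = TRUE)
```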

Returns

A list containing obj and the computed metric values.

Examples

data(CATS, CATS.cont)

mdl <- forecast::auto.arima(CATS[, 1])
pred <- forecast::forecast(mdl, h = length(CATS.cont[, 1]))

evaluate(MSE_eval(), test = CATS.cont[, 1], pred = pred$mean)
evaluate(MSE_eval(), mdl, fitness = TRUE)
evaluate(AIC_eval(), mdl)

See Also

Other evaluate: evaluate.tspred()

Author(s)

Rebecca Pontes Salles

  • Maintainer: Rebecca Pontes Salles
  • License: GPL (>= 2)
  • Last published: 2021-01-21