forecast: The vector or matrix of forecast values.
na.rm: Logical, determining whether NAs should be removed from the provided data.
lower: The lower bound of the prediction interval.
upper: The upper bound of the prediction interval.
level: The confidence level of the constructed interval.
scale: The value to be used in the denominator of MASE. It can be any positive number, but the advised options are the mean absolute deviation of the in-sample one step ahead Naive error or the mean absolute value of the in-sample actuals (see the sketch after this list).
benchmark: The vector or matrix of the forecasts of the benchmark model.
benchmarkLower: The lower bound of the prediction interval of the benchmark model.
benchmarkUpper: The upper bound of the prediction interval of the benchmark model.
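As a quick sketch, the two advised scale values can be computed as follows (the split of the data at observation 90 is an arbitrary choice for illustration):

y <- rnorm(100, 10, 2)
yInSample <- y[1:90]
# Mean absolute deviation of the in-sample one step ahead Naive error
scaleNaive <- mean(abs(diff(yInSample)))
# Mean absolute value of the in-sample actuals
scaleMean <- mean(abs(yInSample))
# For sMSE, the scale should be a squared value, e.g. mean(yInSample)^2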
Returns
All the functions return a scalar value.
Details
In the case of sMSE, scale needs to be a squared value. A typical choice is the squared mean of the in-sample actuals.
If all the measures are needed, then the measures function can help.
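For example, a minimal sketch, assuming that measures() accepts the holdout actuals, the forecasts and the in-sample actuals (the latter used for the scaled measures):

y <- rnorm(100, 10, 2)
testForecast <- rep(mean(y[1:90]), 10)
# Assumed call: holdout actuals, forecasts, in-sample actuals
measures(y[91:100], testForecast, y[1:90])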
There are several other measures; see the details of pinball and hm.
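To give an idea of what pinball evaluates, here is the pinball (quantile) loss computed by hand for a single quantile. This only sketches the underlying formula, not the function's exact interface:

# Pinball loss for quantile level alpha:
# (1-alpha)*(q-y) when y < q, alpha*(y-q) when y >= q
y <- rnorm(100, 10, 2)
alpha <- 0.95
q <- quantile(y, alpha)
pinballLoss <- sum((1 - alpha) * (q - y[y < q])) + sum(alpha * (y[y >= q] - q))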
Examples
y <- rnorm(100, 10, 2)
testForecast <- rep(mean(y[1:90]), 10)
MAE(y[91:100], testForecast)
MSE(y[91:100], testForecast)
MPE(y[91:100], testForecast)
MAPE(y[91:100], testForecast)

# Measures from Petropoulos & Kourentzes (2015)
MASE(y[91:100], testForecast, mean(abs(y[1:90])))
sMSE(y[91:100], testForecast, mean(abs(y[1:90]))^2)
sPIS(y[91:100], testForecast, mean(abs(y[1:90])))
sCE(y[91:100], testForecast, mean(abs(y[1:90])))

# Original MASE from Hyndman & Koehler (2006)
MASE(y[91:100], testForecast, mean(abs(diff(y[1:90]))))

testForecast2 <- rep(y[91], 10)
# Relative measures, from and inspired by Davydenko & Fildes (2013)
rMAE(y[91:100], testForecast2, testForecast)
rRMSE(y[91:100], testForecast2, testForecast)
rAME(y[91:100], testForecast2, testForecast)
GMRAE(y[91:100], testForecast2, testForecast)

#### Measures for the prediction intervals
# An example with mtcars data
ourModel <- alm(mpg~., mtcars[1:30,], distribution="dnorm")
ourBenchmark <- alm(mpg~1, mtcars[1:30,], distribution="dnorm")

# Produce predictions with the interval
ourForecast <- predict(ourModel, mtcars[-c(1:30),], interval="p")
ourBenchmarkForecast <- predict(ourBenchmark, mtcars[-c(1:30),], interval="p")

MIS(mtcars$mpg[-c(1:30)], ourForecast$lower, ourForecast$upper, 0.95)
sMIS(mtcars$mpg[-c(1:30)], ourForecast$lower, ourForecast$upper, mean(mtcars$mpg[1:30]), 0.95)
rMIS(mtcars$mpg[-c(1:30)], ourForecast$lower, ourForecast$upper,
     ourBenchmarkForecast$lower, ourBenchmarkForecast$upper, 0.95)

### Also, see the pinball function for other measures for the intervals
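The interval score behind MIS comes from Gneiting & Raftery (2007) and can be sketched by hand, reusing the objects from the example above (whether MIS averages or sums the scores is an assumption here):

# Interval score: width plus 2/alpha penalties for exceedances
actual <- mtcars$mpg[-c(1:30)]
alpha <- 1 - 0.95
misManual <- mean((ourForecast$upper - ourForecast$lower) +
                  2 / alpha * (ourForecast$lower - actual) * (actual < ourForecast$lower) +
                  2 / alpha * (actual - ourForecast$upper) * (actual > ourForecast$upper))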
References

Fildes R. (1992). The evaluation of extrapolative forecasting methods. International Journal of Forecasting, 8, pp.81-98.

Hyndman R.J., Koehler A.B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22, pp.679-688.

Petropoulos F., Kourentzes N. (2015). Forecast combinations for intermittent demand. Journal of the Operational Research Society, 66, pp.914-924.

Wallstrom P., Segerstedt A. (2010). Evaluation of forecasting error measurements and techniques for intermittent demand. International Journal of Production Economics, 128, pp.625-636.

Davydenko A., Fildes R. (2013). Measuring forecasting accuracy: The case of judgmental adjustments to SKU-level demand forecasts. International Journal of Forecasting, 29(3), pp.510-522. doi:10.1016/j.ijforecast.2012.09.002

Gneiting T., Raftery A.E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477), pp.359-378. doi:10.1198/016214506000001437