Algorithms for Testing Models under Stress
Score Function for Regression
Create Train Index Set
Score Function for Binary Classification
Compare Machine Learning Models
Create Groups for CV
Create Python Virtual Environment
Cross Validation
Spatial Cluster-Based Partitions for Cross-Validation
Cross Validation Function
Data Generation for Asymptotic Regression
Data Generation for Linear Regression
Data Generation for Sinusoidal Regression
Distance to Center
Kappa Function
Fit Machine Learning Classification Models
Refit Machine Learning Models
Fit Machine Learning Regressor Models
Prediction Methods for Various Models
Check if Python is Available
Asymptotic Regression
Sinusoidal Regression
Root Mean Squared Error (RMSE)
Score Function for Metrics
Thinning Algorithm for Models with Predict Function
Traditional model evaluation metrics fail to capture model performance under less-than-ideal conditions. This package employs techniques to evaluate models "under stress", including testing a model's extrapolation ability and testing its accuracy on specific sub-samples of the overall model space. Details of the stress-testing methods in this package are provided in Haycock (2023) <doi:10.26076/2am5-9f67>. The other primary contribution of this package is giving R users access to the 'Python' library 'PyCaret' <https://pycaret.org/> for quick and easy access to auto-tuned machine learning models.
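The extrapolation test described above can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (NumPy only; the data, model, and region boundaries are invented for this sketch and are not the package's API): fit a model on one region of the input space, then compare its in-sample error against its error on a region outside the training range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: train on x in [0, 5]; "stress" the model by
# evaluating on the extrapolation region x in [5, 8].
x_train = rng.uniform(0, 5, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 200)
x_test = rng.uniform(5, 8, 100)
y_test = np.sin(x_test) + rng.normal(0, 0.1, 100)

# Fit a degree-2 polynomial -- a deliberately simple surrogate model.
coefs = np.polyfit(x_train, y_train, 2)

def rmse(y, y_hat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

rmse_in = rmse(y_train, np.polyval(coefs, x_train))   # error on training region
rmse_out = rmse(y_test, np.polyval(coefs, x_test))    # error under extrapolation
print(rmse_in, rmse_out)
```

Under this kind of stress test, the extrapolation error is typically much larger than the in-sample error, which is exactly the gap that traditional single-split metrics can hide.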