Extract Most "Important" Predictors (Experimental)
Extract the most "important" predictors for regression and classification models.
```r
topPredictors(object, n = 1L, ...)

## Default S3 method:
topPredictors(object, n = 1L, ...)

## S3 method for class 'train':
topPredictors(object, n = 1L, ...)
```
Arguments
object: A fitted model object of appropriate class (e.g., "gbm", "lm", "randomForest", etc.).
n: Integer specifying the number of predictors to return. Default is 1, meaning only the single most important predictor is returned.
...: Additional optional arguments to be passed on to varImp.
Details
This function uses the generic function varImp to calculate variable importance scores for each predictor. The scores are then sorted in decreasing order, and the names of the n highest-scoring predictors are returned.
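The steps above can be sketched in a few lines of R. This is a minimal illustration, not the package's actual implementation; it assumes varImp (as in caret) returns a data frame with an "Overall" column of importance scores:

```r
# Hypothetical sketch of the topPredictors() logic, assuming varImp()
# returns a data frame with an "Overall" importance column.
topPredictors_sketch <- function(object, n = 1L, ...) {
  imp <- caret::varImp(object, ...)                            # importance scores
  imp <- imp[order(imp$Overall, decreasing = TRUE), , drop = FALSE]  # sort scores
  rownames(imp)[seq_len(n)]                                    # top n predictor names
}
```

For a "train" object, the same idea applies, except varImp returns a list whose `importance` component holds the scores.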
Examples
```r
## Not run:
## Regression example (requires randomForest package to run)

# Load required packages
library(ggplot2)
library(randomForest)

# Fit a random forest to the mtcars data set
data(mtcars, package = "datasets")
set.seed(101)
mtcars.rf <- randomForest(mpg ~ ., data = mtcars, mtry = 5, importance = TRUE)

# Top four predictors
top4 <- topPredictors(mtcars.rf, n = 4)

# Construct partial dependence functions for top four predictors
pd <- NULL
for (i in top4) {
  tmp <- partial(mtcars.rf, pred.var = i)
  names(tmp) <- c("x", "y")
  pd <- rbind(pd, cbind(tmp, predictor = i))
}

# Display partial dependence functions
ggplot(pd, aes(x, y)) +
  geom_line() +
  facet_wrap(~ predictor, scales = "free") +
  theme_bw() +
  ylab("mpg")

## End(Not run)
```