Light Gradient Boosting Machine
Dimensions of an lgb.Dataset
Handling of column names of lgb.Dataset
Get one attribute of a lgb.Dataset
Save lgb.Dataset to a binary file
Get default number of threads used by LightGBM
Shared Dataset parameter docs
Shared parameter docs
Configure Fast Single-Row Predictions
Data preparator for LightGBM datasets with rules (integer)
Main CV logic for LightGBM
Construct Dataset explicitly
Construct validation data
Construct lgb.Dataset object
Set categorical feature of lgb.Dataset
Set reference of lgb.Dataset
Drop serialized raw bytes in a LightGBM model object
Dump LightGBM model to JSON
Get record evaluation result from booster
Compute feature importance in a model
Compute feature contribution of prediction
Load LightGBM model
Make a LightGBM object serializable by keeping raw bytes
Parse a LightGBM model JSON dump
Plot feature importance as a bar graph
Plot feature contribution as a bar graph
Restore the C++ component of a de-serialized LightGBM model
Save LightGBM model
Slice a dataset
Main training logic for LightGBM
Train a LightGBM model
Predict method for LightGBM model
Print method for LightGBM model
Set one attribute of a lgb.Dataset object
Set maximum number of threads used by LightGBM
Summary method for LightGBM model
Tree-based algorithms can be improved by introducing boosting frameworks. 'LightGBM' is one such framework, based on Ke, Guolin et al. (2017) <https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree>. This package offers an R interface to work with it. It is designed to be distributed and efficient with the following advantages: 1. Faster training speed and higher efficiency. 2. Lower memory usage. 3. Better accuracy. 4. Parallel learning supported. 5. Capable of handling large-scale data. In recognition of these advantages, 'LightGBM' has been widely used in many winning solutions of machine learning competitions. Comparison experiments on public datasets suggest that 'LightGBM' can outperform existing boosting frameworks in both efficiency and accuracy, with significantly lower memory consumption. In addition, parallel experiments suggest that in certain circumstances, 'LightGBM' can achieve a linear speed-up in training time by using multiple machines.
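
A minimal sketch of the typical workflow, assuming the package is installed and using the bundled 'agaricus' example data; the parameter values below are illustrative only:

    library(lightgbm)

    data(agaricus.train, package = "lightgbm")
    data(agaricus.test, package = "lightgbm")

    # Construct an lgb.Dataset from a feature matrix and a label vector
    dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

    # Train a binary classifier for a small number of boosting rounds
    params <- list(objective = "binary", metric = "auc", learning_rate = 0.1)
    model <- lgb.train(params = params, data = dtrain, nrounds = 10L)

    # Predict on new data and inspect feature importance
    preds <- predict(model, agaricus.test$data)
    lgb.importance(model)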