A fully connected feedforward network with dropout after each activation function. The features can either be a single lazy_tensor or one or more numeric columns (but not both).
Dictionary
This Learner can be instantiated using the sugar function lrn():
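For example, using the "classif.mlp" key from the Examples section below:

learner = lrn("classif.mlp")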
Parameters
activation :: nn_module
The activation function. Is initialized to nn_relu.
activation_args :: named list()
A named list with initialization arguments for the activation function. This is initialized to an empty list.
neurons :: integer()
The number of neurons per hidden layer. By default there is no hidden layer. Setting this to c(10, 20) creates a first hidden layer with 10 neurons and a second with 20.
n_layers :: integer()
The number of layers. This parameter must only be set when neurons has length 1; see the sketch after this parameter list.
p :: numeric(1)
The dropout probability. Is initialized to 0.5.
shape :: integer() or NULL
The input shape of length 2, e.g. c(NA, 5). Only needs to be provided when there is a lazy tensor input with unknown shape (NULL). Otherwise, the input shape is inferred from the number of numeric features.
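As a minimal sketch of how neurons and n_layers interact (the parameter values here are illustrative, not defaults):

# Two hidden layers with 10 and 20 neurons, listed explicitly
lrn("classif.mlp", neurons = c(10, 20))

# Two equivalent ways to request two hidden layers of 10 neurons each
lrn("classif.mlp", neurons = c(10, 10))
lrn("classif.mlp", neurons = 10, n_layers = 2)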
Examples
# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
  epochs = 1, batch_size = 16, device = "cpu", neurons = 10
)

# Define a Task
task = tsk("iris")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
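A sketch of the lazy_tensor input path mentioned in the description, assuming the "lazy_iris" task that ships with mlr3torch (iris with its four measurements stored as a single lazy_tensor column):

library(mlr3torch)

# Same learner, but trained on a task whose features are one lazy_tensor column;
# its shape is known, so the shape parameter does not need to be set
learner = lrn("classif.mlp",
  epochs = 1, batch_size = 16, device = "cpu", neurons = 10
)
task = tsk("lazy_iris")
learner$train(task)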
References
Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). "Revisiting Deep Learning Models for Tabular Data." arXiv, 2106.11959.
See Also
Other Learner: mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model