optim_ignite_adagrad function

LibTorch implementation of Adagrad

Proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

optim_ignite_adagrad(
  params,
  lr = 0.01,
  lr_decay = 0,
  weight_decay = 0,
  initial_accumulator_value = 0,
  eps = 1e-10
)

Arguments

  • params: (iterable): list of parameters to optimize or lists defining parameter groups

  • lr: (float, optional): learning rate (default: 1e-2)

  • lr_decay: (float, optional): learning rate decay (default: 0)

  • weight_decay: (float, optional): weight decay (L2 penalty) (default: 0)

  • initial_accumulator_value: the initial value for the accumulator (default: 0)

    Adagrad is particularly well suited to sparse data. It adapts the learning rate for each parameter individually, dividing the base learning rate by the square root of the accumulated sum of that parameter's squared gradients, so rarely occurring features receive larger effective learning rates. The main downside is that the effective learning rate can shrink too quickly, at which point the model can no longer learn (see the update-rule sketch after this argument list).

  • eps: (float, optional): term added to the denominator to improve numerical stability (default: 1e-10)
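
The per-parameter scaling described above can be spelled out as a plain-R sketch of one Adagrad update, assuming the standard update rule (decayed learning rate, optional L2 weight decay, accumulator of squared gradients); adagrad_step and its variable names are illustrative only and not part of the package API.

# Illustrative plain-R version of one Adagrad update (not the LibTorch code path).
# `param` and `grad` are numeric vectors; `state_sum` is the running accumulator.
adagrad_step <- function(param, grad, state_sum, step,
                         lr = 0.01, lr_decay = 0, weight_decay = 0, eps = 1e-10) {
  if (weight_decay != 0) {
    grad <- grad + weight_decay * param          # L2 penalty folded into the gradient
  }
  clr <- lr / (1 + (step - 1) * lr_decay)        # learning-rate decay over steps
  state_sum <- state_sum + grad^2                # accumulate squared gradients
  param <- param - clr * grad / (sqrt(state_sum) + eps)
  list(param = param, state_sum = state_sum)
}

# With an empty accumulator, the first update is roughly lr * sign(grad).
st <- adagrad_step(param = c(1, 1), grad = c(0.1, 2), state_sum = c(0, 0), step = 1)
st$param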

Fields and Methods

See OptimizerIgnite.

Examples

if (torch_is_installed()) {
## Not run:
optimizer <- optim_ignite_adagrad(model$parameters, lr = 0.1)
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()
## End(Not run)
}
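
For a runnable variant of the example above, the sketch below builds a throwaway linear model and runs a few optimizer steps; the model, data, and loss function (nn_linear, torch_randn, nnf_mse_loss) are illustrative choices, not something prescribed by optim_ignite_adagrad.

if (torch_is_installed()) {
  library(torch)

  # Toy regression data and a single linear layer (illustrative only)
  x <- torch_randn(64, 3)
  y <- torch_randn(64, 1)
  model <- nn_linear(3, 1)

  optimizer <- optim_ignite_adagrad(model$parameters, lr = 0.1)

  for (step in 1:5) {
    optimizer$zero_grad()                 # clear gradients from the previous step
    loss <- nnf_mse_loss(model(x), y)     # forward pass + loss
    loss$backward()                       # backpropagate
    optimizer$step()                      # Adagrad update
  }
}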