optim_ignite_adam function

LibTorch implementation of Adam

It has been proposed in Adam: A Method for Stochastic Optimization.
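For reference, the standard Adam update from that paper is sketched below, with weight_decay entering as an L2 term on the gradient; the exact LibTorch kernel may differ in minor numerical details.

\begin{aligned}
g_t &= \nabla_\theta f_t(\theta_{t-1}) + \lambda\, \theta_{t-1} \\
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \mathrm{eps})
\end{aligned}

Here lambda denotes weight_decay; with amsgrad = TRUE the denominator uses the running maximum of the second-moment estimate instead of the current one.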

optim_ignite_adam(
  params,
  lr = 0.001,
  betas = c(0.9, 0.999),
  eps = 1e-08,
  weight_decay = 0,
  amsgrad = FALSE
)

Arguments

  • params: (iterable): iterable of parameters to optimize or dicts defining parameter groups

  • lr: (float, optional): learning rate (default: 1e-3)

  • betas: (Tuple[float, float], optional): coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

  • eps: (float, optional): term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay: (float, optional): weight decay (L2 penalty) (default: 0)

  • amsgrad: (boolean, optional): whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond (default: FALSE)

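As a minimal sketch of how these arguments fit together (the model and the hyperparameter values below are illustrative placeholders, not recommendations):

library(torch)

# A small toy model; any nn_module with parameters works here.
model <- nn_linear(10, 1)

# All hyperparameters spelled out; values mirror the defaults except for
# weight_decay and amsgrad, which are switched on purely for illustration.
opt <- optim_ignite_adam(
  model$parameters,
  lr = 0.001,
  betas = c(0.9, 0.999),
  eps = 1e-08,
  weight_decay = 0.01,
  amsgrad = TRUE
)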
Fields and Methods

See OptimizerIgnite.
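For instance, assuming OptimizerIgnite exposes the usual param_groups field and the state_dict()/load_state_dict() methods of torch optimizers, the optimizer can be inspected and checkpointed roughly as follows (a sketch, not a definitive API reference):

library(torch)

model <- nn_linear(10, 1)
opt <- optim_ignite_adam(model$parameters, lr = 0.1)

# Read or adjust hyperparameters per parameter group.
opt$param_groups[[1]]$lr <- 0.01

# Serialize and restore the optimizer state, e.g. for checkpointing.
state <- opt$state_dict()
opt$load_state_dict(state)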

Examples

if (torch_is_installed()) {
## Not run:
# `model`, `loss_fn`, `input`, and `target` are assumed to be defined elsewhere.
optimizer <- optim_ignite_adam(model$parameters, lr = 0.1)

# One training step: clear gradients, backpropagate the loss, update the weights.
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()
## End(Not run)
}