According to the original paper, the decaying average of the squared gradients is computed as follows:
$$E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho) g_t^2$$
The RMS of the gradients up to time t:
$$\mathrm{RMS}[g]_t = \sqrt{E[g^2]_t + \epsilon}$$
Adadelta update rule:
$$\Delta\theta_t = -\frac{\mathrm{RMS}[\Delta\theta]_{t-1}}{\mathrm{RMS}[g]_t} g_t$$
$$\theta_{t+1} = \theta_t + \Delta\theta_t$$
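As a minimal sketch, the three formulas above can be transcribed into a few lines of base R. This is an illustration only, not the package's internal implementation; the function name adadelta_step, the state list, and the defaults rho = 0.9 and eps = 1e-6 are assumptions that mirror the paper's notation.

# Illustrative sketch of one Adadelta step for a single numeric parameter.
# Not the package's internals; rho and eps follow the paper's conventions.
adadelta_step <- function(theta, grad, state, rho = 0.9, eps = 1e-6) {
  # E[g^2]_t = rho * E[g^2]_{t-1} + (1 - rho) * g_t^2
  state$Eg2 <- rho * state$Eg2 + (1 - rho) * grad^2
  # Delta theta_t = -(RMS[Delta theta]_{t-1} / RMS[g]_t) * g_t
  delta <- -(sqrt(state$Edx2 + eps) / sqrt(state$Eg2 + eps)) * grad
  # Accumulate squared updates for the next step's RMS[Delta theta]
  state$Edx2 <- rho * state$Edx2 + (1 - rho) * delta^2
  # theta_{t+1} = theta_t + Delta theta_t
  list(theta = theta + delta, state = state)
}

# Usage: start with zero-initialized accumulators.
state <- list(Eg2 = 0, Edx2 = 0)
res <- adadelta_step(theta = 1, grad = 0.5, state = state)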
Warning
If you need to move a model to GPU via $cuda(), please do so before constructing optimizers for it. Parameters of a model after $cuda()
will be different objects from those before the call. In general, you should make sure that the objects pointed to by model parameters subject to optimization remain the same over the whole lifecycle of optimizer creation and usage.
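For example (a sketch assuming model is an nn_module, as in the Examples below; only the ordering matters here):

model$cuda()                                  # move parameters to the GPU first
optimizer <- optim_adadelta(model$parameters) # then construct the optimizer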
Examples
if (torch_is_installed()) {
## Not run:
optimizer <- optim_adadelta(model$parameters, lr = 0.1)
optimizer$zero_grad()
loss_fn(model(input), target)$backward()
optimizer$step()
## End(Not run)
}