chainer.optimizers.AdaGrad

class chainer.optimizers.AdaGrad(lr=0.001, eps=1e-08)[source]

AdaGrad optimizer.

See: http://jmlr.org/papers/v12/duchi11a.html

Parameters:
  • lr (float) – Learning rate.
  • eps (float) – Small value for numerical stability.
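As a rough sketch (not Chainer's actual kernel code), the per-parameter rule that lr and eps feed into accumulates squared gradients and scales each step by their square root:

    import numpy as np

    def adagrad_step(param, grad, h, lr=0.001, eps=1e-08):
        # h is the running sum of squared gradients kept per parameter.
        h += grad * grad
        # eps keeps the denominator away from zero while h is still tiny.
        param -= lr * grad / (np.sqrt(h) + eps)
        return param, h

A minimal usage sketch, assuming a small chainer.links.Linear model chosen purely for illustration: construct the optimizer with the desired hyperparameters, then attach it to the target link with setup() (documented below).

    import chainer.links as L
    from chainer import optimizers

    model = L.Linear(3, 2)                   # toy model; sizes are arbitrary
    optimizer = optimizers.AdaGrad(lr=0.01)  # override the default learning rate
    optimizer.setup(model)                   # attach the target link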

Methods

add_hook(hook, name=None)[source]

Registers a hook function.

The hook function is typically called right after the gradient computation, though the exact timing depends on the optimization method.

Parameters:
  • hook (function) – Hook function. If hook.call_for_each_param is true, this hook function is called once per parameter, receiving the update rule and the parameter. Otherwise, it is called only once per iteration, receiving the optimizer itself.
  • name (str) – Name of the registration. If omitted, hook.name is used by default.
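For example, continuing the setup sketch above, a built-in regularization hook such as WeightDecay (found in chainer.optimizer in this generation of the API; the rate value is only illustrative) can be registered under an explicit name:

    from chainer.optimizer import WeightDecay

    # Register an L2 weight-decay hook; it runs right after the gradients
    # are computed on every call to update().
    optimizer.add_hook(WeightDecay(rate=0.0005), name='weight_decay')

The same name can later be passed to remove_hook() to unregister it.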

call_hooks()[source]

Invokes hook functions in registration order.

create_update_rule()[source]

new_epoch()[source]

Starts a new epoch.

This method increments the epoch count. Note that if the optimizer depends on the epoch count, the user should call this method at the beginning of each epoch.
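A hypothetical placement in a hand-written training loop, continuing the sketch above:

    for epoch in range(10):
        optimizer.new_epoch()
        # ... iterate over minibatches and call optimizer.update(...) here ...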

reallocate_cleared_grads()[source]

Reallocate gradients cleared by cleargrad().

This method allocates arrays for all gradients that are None. It is called before and after every optimizer hook. If an inheriting optimizer does not require this allocation, it can override this method with a blank function.

remove_hook(name)[source]

Removes a hook function.

Parameters: name (str) – Registered name of the hook function to remove.

serialize(serializer)[source]

Serializes or deserializes the optimizer.

It only saves or loads the following things:

  • Optimizer states
  • Global states (t and epoch)

It does not save or load the parameters of the target link. They should be saved or loaded separately.

Parameters: serializer (AbstractSerializer) – Serializer or deserializer object.
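For example, the NPZ serializers can drive this method to save and restore the optimizer state; the file names below are only placeholders, and the model must be saved and loaded separately:

    from chainer import serializers

    # Save the AdaGrad accumulators plus the global counters t and epoch.
    serializers.save_npz('optimizer_state.npz', optimizer)
    serializers.save_npz('model.npz', model)

    # Restore later, after setup() has been called on the new optimizer.
    serializers.load_npz('optimizer_state.npz', optimizer)
    serializers.load_npz('model.npz', model)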

setup(link)[source]

Sets a target link and initializes the optimizer states.

update(lossfun=None, *args, **kwds)[source]

Updates parameters based on a loss function or computed gradients.

This method runs in two ways.

  • If lossfun is given, then it is used as a loss function to compute gradients.
  • Otherwise, this method assumes that the gradients are already computed.

In both cases, the computed gradients are used to update parameters. The actual update routines are defined by the update rule of each parameter.
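A sketch of both call styles, reusing the toy model and optimizer from the setup example above, with random data chosen only for illustration:

    import numpy as np
    import chainer.functions as F

    x = np.random.rand(5, 3).astype(np.float32)
    t = np.random.rand(5, 2).astype(np.float32)

    # 1) Pass a loss function; update() computes the gradients itself.
    optimizer.update(lambda: F.mean_squared_error(model(x), t))

    # 2) Compute the gradients manually, then call update() with no arguments.
    model.cleargrads()
    loss = F.mean_squared_error(model(x), t)
    loss.backward()
    optimizer.update()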

use_cleargrads(use=True)[source]

Enables or disables use of cleargrads() in update.

Parameters: use (bool) – If True, this method enables use of cleargrads(); if False, it disables it (zerograds() is used instead).

Deprecated since version v2.0: Note that update() calls cleargrads() by default. cleargrads() is more efficient than zerograds(), so one does not have to call use_cleargrads(). This method remains for backward compatibility.

Attributes

eps

Alias to self.hyperparam.eps

lr

Alias to self.hyperparam.lr
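Because these attributes are read/write aliases, a simple manual schedule can be expressed by assigning to them directly, e.g. continuing the sketch above:

    print(optimizer.lr, optimizer.eps)  # current hyperparameter values
    optimizer.lr *= 0.5                 # halve the learning rate in place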