chainer.training.extensions.MicroAverage

class chainer.training.extensions.MicroAverage(numerator_key, denominator_key, result_key, trigger=(1, 'epoch'))[source]

Calculates micro-average ratio.

Given \(N\) batches with values \(\{n_1, \dots, n_N\}\) and \(\{d_1, \dots, d_N\}\), this extension calculates the micro-average of these ratios, defined as:

\[\frac{\sum_i^N n_i}{\sum_i^N d_i}.\]

A user typically uses the number of examples the system predicts correctly as \(n_i\) and the total number of examples in the \(i\)-th batch as \(d_i\). The resulting value is the micro-averaged accuracy.

Note that the macro-average is defined as:

\[\frac{1}{N}\sum_i^N (n_i / d_i),\]

which is equal to the micro-average when every mini-batch has the same \(d_i\).
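To make the distinction concrete, here is a plain-Python sketch (independent of Chainer, with made-up batch values) that computes both averages for batches of unequal size:

```python
def micro_average(numerators, denominators):
    # Sum numerators and denominators over all batches, then divide once.
    return sum(numerators) / sum(denominators)

def macro_average(numerators, denominators):
    # Average the per-batch ratios instead.
    ratios = [n / d for n, d in zip(numerators, denominators)]
    return sum(ratios) / len(ratios)

# Two batches of different sizes: 9/10 correct and 40/100 correct.
correct, total = [9, 40], [10, 100]
print(micro_average(correct, total))  # 49/110, about 0.445
print(macro_average(correct, total))  # (0.9 + 0.4) / 2 = 0.65

# With equal batch sizes the two averages coincide.
correct, total = [9, 4], [10, 10]
print(micro_average(correct, total))  # 0.65
print(macro_average(correct, total))  # 0.65
```

The macro-average weights every batch equally, so a small batch with a high ratio can dominate; the micro-average weights every example equally.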

You need to report the numerator value (the number of correct examples) and the denominator value (the total number of examples) in your model.

>>> class MyModel(chainer.Link):
...     def __call__(self, x, y):
...         loss = F.softmax_cross_entropy(x, y)
...         correct = (x.data.argmax(axis=1) == y.data).sum()
...         total = len(y.data)
...         reporter.report({'correct': correct, 'total': total}, self)
...         return loss

Then create an extension with the corresponding report keys and register it.

>>> ext = extensions.MicroAverage(
...     'main/correct', 'main/total', 'main/accuracy')
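As a rough sketch of what the extension does between triggers (an assumed model for illustration, not Chainer's actual implementation), it accumulates the two reported values every iteration and, when the trigger fires, reports their ratio under the result key:

```python
# Schematic model of MicroAverage's accumulation behavior.
# Key names mirror the example above; the class itself is hypothetical.
class MicroAverageSketch:
    def __init__(self, numerator_key, denominator_key, result_key):
        self.nkey, self.dkey, self.rkey = numerator_key, denominator_key, result_key
        self.n_sum = 0
        self.d_sum = 0

    def accumulate(self, observation):
        # Called every iteration with the reported observation dict.
        self.n_sum += observation[self.nkey]
        self.d_sum += observation[self.dkey]

    def emit(self, observation):
        # Called when the trigger fires: report the micro-averaged ratio
        # and reset the accumulators for the next interval.
        observation[self.rkey] = self.n_sum / self.d_sum
        self.n_sum = self.d_sum = 0

ext = MicroAverageSketch('main/correct', 'main/total', 'main/accuracy')
for correct, total in [(9, 10), (40, 100)]:
    ext.accumulate({'main/correct': correct, 'main/total': total})
obs = {}
ext.emit(obs)
print(obs['main/accuracy'])  # 49/110, about 0.445
```

Because the sums are kept across the whole interval, the result is the example-weighted micro-average rather than a mean of per-batch accuracies.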
Parameters
  • numerator_key (str) – Key string of the observation storing the numerator value.

  • denominator_key (str) – Key string of the observation storing the denominator value.

  • result_key (str) – Key string of the observation in which to store the result.

  • trigger – Trigger that decides when to calculate the average. This is distinct from the trigger of this extension itself. If it is a tuple in the form <int>, 'epoch' or <int>, 'iteration', it is passed to IntervalTrigger.

Methods

__call__(trainer)[source]

Invokes the extension.

Implementations should override this operator. This method is called at each iteration that the corresponding trigger accepts.

Parameters

trainer (Trainer) – Trainer object that calls this operator.

finalize()[source]

Finalizes the extension.

This method is called at the end of the training loop.

initialize(trainer)[source]

Initializes the trainer state.

This method is called before entering the training loop. An extension that modifies the state of Trainer can override this method to initialize it.

When the trainer has been restored from a snapshot, this method has to recover an appropriate part of the state of the trainer.

For example, the ExponentialShift extension changes the optimizer’s hyperparameter at each invocation. Note that the hyperparameter is not saved to the snapshot; it is the responsibility of the extension to recover it. The ExponentialShift extension recovers the hyperparameter in its initialize method if it has been loaded from a snapshot, or just sets the initial value otherwise.
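The recovery pattern described above can be sketched in plain Python (a schematic illustration, not Chainer's actual ExponentialShift code or serializer protocol): the serialized invocation count lets initialize recompute the hyperparameter that was never stored in the snapshot.

```python
class ExponentialShiftSketch:
    """Hypothetical extension that multiplies a hyperparameter by
    `rate` at every invocation (illustrative only)."""

    def __init__(self, rate, init):
        self.rate = rate
        self.init = init
        self.t = 0  # number of invocations so far; this IS serialized

    def __call__(self, optimizer):
        self.t += 1
        optimizer['lr'] = self.init * self.rate ** self.t

    def initialize(self, optimizer):
        # 'lr' itself is not in the snapshot, so recompute it from
        # the serialized invocation count.
        optimizer['lr'] = self.init * self.rate ** self.t

    # Simplified stand-ins for Chainer's serializer protocol.
    def serialize(self, state):
        state['t'] = self.t

    def deserialize(self, state):
        self.t = state['t']

opt = {'lr': 1.0}
shift = ExponentialShiftSketch(rate=0.5, init=1.0)
shift(opt)
shift(opt)                # lr is now 1.0 * 0.5**2 = 0.25
state = {}
shift.serialize(state)    # only `t` goes into the "snapshot"

# After "loading the snapshot", a fresh extension restores lr in initialize().
restored = ExponentialShiftSketch(rate=0.5, init=1.0)
restored.deserialize(state)
restored_opt = {'lr': 1.0}
restored.initialize(restored_opt)
print(restored_opt['lr'])  # 0.25
```

A fresh instance without the snapshot would simply set the initial value, since `t` starts at zero.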

Parameters

trainer (Trainer) – Trainer object that runs the training loop.

on_error(trainer, exc, tb)[source]

Handles the error raised during training before finalization.

This method is called when an exception is thrown during the training loop, before finalize. An extension that needs error handling different from finalize can override this method.

Parameters
  • trainer (Trainer) – Trainer object that runs the training loop.

  • exc (Exception) – Arbitrary exception thrown during the update loop.

  • tb (traceback) – Traceback object of the exception.

serialize(serializer)[source]

Serializes the extension state.

It is called when a trainer that owns this extension is serialized. It serializes nothing by default.

Attributes

default_name

Default name of the extension.

It is the name of the class by default. Implementation can override this property, or provide a class attribute to hide it.

name = None
priority = 200
trigger = (1, 'iteration')