class chainer.training.extensions.Evaluator(iterator, target, converter=convert.concat_examples, device=None, eval_hook=None, eval_func=None, *, progress_bar=False)[source]

Trainer extension to evaluate models on a validation set.

This extension evaluates the current models with a given evaluation function. It creates a Reporter object to store the values observed in the evaluation function at each iteration. The reports for all iterations are aggregated into a DictSummary. The collected mean values are further reported to the reporter object of the trainer, where the name of each observation is prefixed by the evaluator name. See Reporter for details on the naming rules of the reports.

Evaluator has a customizable structure similar to that of StandardUpdater. The main differences are:

  • There are no optimizers in an evaluator. Instead, it holds links to evaluate.

  • An evaluation loop function is used instead of an update function.

  • The preparation routine, which is called before each evaluation, can be customized. It can be used, e.g., to initialize the state of stateful recurrent networks.

There are two ways to modify the evaluation behavior besides setting a custom evaluation function. One is to set a custom evaluation loop via the eval_func argument. The other is to inherit this class and override the evaluate() method. In the latter case, users have to create and handle a reporter object manually. Users also have to copy the iterators before using them, so that they can be reused at the next evaluation. In both cases, the functions are called in testing mode (i.e., chainer.config.train is set to False).

This extension is called at the end of each epoch by default.
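The aggregation and naming scheme described above can be sketched in plain Python (no chainer imports; the report names and values below are illustrative, not taken from a real run):

```python
# Illustrative sketch of how per-iteration observations are reduced to
# means and prefixed with the evaluator name. The values are made up.
per_iteration_reports = [
    {'main/loss': 0.5, 'main/accuracy': 0.8},
    {'main/loss': 0.3, 'main/accuracy': 0.9},
]

prefix = 'validation/'  # matches the extension's default_name, 'validation'
collected = {}
for report in per_iteration_reports:
    for name, value in report.items():
        collected.setdefault(prefix + name, []).append(value)

# DictSummary-style reduction: report the mean of each observation
means = {name: sum(vs) / len(vs) for name, vs in collected.items()}
# means['validation/main/loss'] is the mean of 0.5 and 0.3
```

These mean values, keyed by the prefixed names, are what the trainer's reporter finally receives.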

Parameters

  • iterator – Dataset iterator for the validation dataset. It can also be a dictionary of iterators. If this is just an iterator, the iterator is registered by the name 'main'.

  • target – Link object or a dictionary of links to evaluate. If this is just a link object, the link is registered by the name 'main'.

  • converter – Converter function to build input arrays. concat_examples() is used by default.

  • device – Device to which the validation data is sent. Negative value indicates the host memory (CPU).

  • eval_hook – Function to prepare for each evaluation process. It is called at the beginning of the evaluation. The evaluator extension object is passed at each call.

  • eval_func – Evaluation function called at each iteration. The target link to evaluate as a callable is used by default.

  • progress_bar – Boolean flag to show a progress bar during evaluation, similar to ProgressBar. (default: False)


The argument progress_bar is experimental. The interface can change in the future.
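The 'main' registration convention for iterator and target can be sketched in plain Python (normalize_to_dict is a hypothetical helper for illustration, not part of the chainer API):

```python
def normalize_to_dict(obj):
    # Hypothetical helper illustrating the registration rule described
    # above: a bare iterator or link is registered under the name
    # 'main', while a dictionary is used as-is.
    return dict(obj) if isinstance(obj, dict) else {'main': obj}

# A bare object is registered by the name 'main'...
assert normalize_to_dict(['batch0', 'batch1']) == {'main': ['batch0', 'batch1']}
# ...while a dictionary keeps its own names.
assert normalize_to_dict({'test': 'some_link'}) == {'test': 'some_link'}
```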

Attributes

  • converter – Converter function.

  • device – Device to which the validation data is sent.

  • eval_hook – Function to prepare for each evaluation process.

  • eval_func – Evaluation function called at each iteration.



__call__(trainer=None)[source]

Executes the evaluator extension.

Unlike usual extensions, this extension can be executed without passing a trainer object. This extension reports the performance on the validation dataset using the report() function. Thus, users can use this extension independently from any trainer by manually configuring a Reporter object.


trainer (Trainer) – Trainer object that invokes this extension. It can be omitted when this extension is called manually.


Returns

Result dictionary that contains the mean statistics of the values reported by the evaluation function.

Return type

dict


evaluate()[source]

Evaluates the model and returns a result dictionary.

This method runs the evaluation loop over the validation dataset. It accumulates the reported values into a DictSummary and returns a dictionary whose values are the means computed by the summary.

Note that this method assumes that the main iterator eventually raises StopIteration (or that code in the evaluation loop raises an exception); if this assumption does not hold, the method can get caught in an infinite loop.

Users can override this method to customize the evaluation routine.
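The override pattern can be sketched with a plain-Python stand-in (a real override would subclass chainer.training.extensions.Evaluator and report through a chainer Reporter; plain dicts keep this sketch self-contained and runnable):

```python
import copy

class MeanEvaluatorSketch:
    """Toy stand-in illustrating a custom evaluate() implementation.

    Not a real Evaluator subclass: reporting is simplified to plain
    dictionaries so the sketch has no chainer dependency.
    """

    def __init__(self, iterator, eval_func):
        self._iterator = iterator  # here: any copyable iterable of batches
        self.eval_func = eval_func

    def evaluate(self):
        # Copy the iterator before use, so the original can be reused
        # at the next evaluation, as the documentation above requires.
        it = copy.copy(self._iterator)
        sums, count = {}, 0
        for batch in it:  # relies on the iterable being finite
            observation = self.eval_func(batch)  # e.g. {'main/loss': ...}
            for name, value in observation.items():
                sums[name] = sums.get(name, 0.0) + value
            count += 1
        # DictSummary-style means of the accumulated observations
        return {name: total / count for name, total in sums.items()}
```

For example, `MeanEvaluatorSketch([[1, 2], [3, 5]], lambda b: {'main/loss': sum(b) / len(b)}).evaluate()` yields the mean per-batch loss under the key 'main/loss'.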


Note

This method encloses the eval_func calls in a function.no_backprop_mode() context, so calculations using FunctionNodes inside eval_func do not build computational graphs. This reduces memory consumption.


Returns

Result dictionary. This dictionary is further reported via report() without specifying any observer.

Return type

dict


finalize()[source]

Finalizes the evaluator object.

This method calls the finalize method of each iterator that this evaluator holds. It is called at the end of the training loop.


get_all_iterators()[source]

Returns a dictionary of all iterators.


get_all_targets()[source]

Returns a dictionary of all target links.


get_iterator(name)[source]

Returns the iterator of the given name.


get_target(name)[source]

Returns the target link of the given name.


initialize(trainer)[source]

Initializes the trainer state.

This method is called before entering the training loop. An extension that modifies the state of Trainer can override this method to initialize it.

When the trainer has been restored from a snapshot, this method has to recover an appropriate part of the state of the trainer.

For example, the ExponentialShift extension changes the optimizer's hyperparameter at each invocation. Note that the hyperparameter is not saved in the snapshot; it is the responsibility of the extension to recover it. The ExponentialShift extension recovers the hyperparameter in its initialize method if it has been loaded from a snapshot, or just sets the initial value otherwise.
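The recovery pattern described above can be sketched without chainer (ExponentialShiftSketch and the bare optimizer object are hypothetical stand-ins, not the real chainer classes; the attribute name 'lr' and the numbers are illustrative):

```python
import types

class ExponentialShiftSketch:
    """Hypothetical stand-in for the pattern above: the hyperparameter
    itself is not serialized, only the invocation counter, so
    initialize() must re-derive the value after a snapshot restore."""

    def __init__(self, attr, rate, init):
        self.attr, self.rate, self.init = attr, rate, init
        self.t = 0  # invocation counter; assumed to be serialized

    def initialize(self, optimizer):
        # Re-derive the hyperparameter from the (possibly restored)
        # counter: the initial value when t == 0, the shifted value otherwise.
        setattr(optimizer, self.attr, self.init * self.rate ** self.t)

    def __call__(self, optimizer):
        self.t += 1
        setattr(optimizer, self.attr, self.init * self.rate ** self.t)

optimizer = types.SimpleNamespace(lr=None)
ext = ExponentialShiftSketch('lr', 0.5, 0.1)
ext.initialize(optimizer)        # fresh start: lr gets the initial 0.1
ext(optimizer); ext(optimizer)   # two shifts: lr becomes 0.1 * 0.5 ** 2

# Simulate resuming from a snapshot: only t was saved, not lr.
restored = ExponentialShiftSketch('lr', 0.5, 0.1)
restored.t = 2                   # as if recovered by serialization
fresh_opt = types.SimpleNamespace(lr=None)
restored.initialize(fresh_opt)   # recovers lr = 0.1 * 0.5 ** 2
```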


trainer (Trainer) – Trainer object that runs the training loop.

on_error(trainer, exc, tb)[source]

Handles the error raised during training before finalization.

This method is called when an exception is raised during the training loop, before finalize. An extension that needs error handling different from finalize can override this method.

  • trainer (Trainer) – Trainer object that runs the training loop.

  • exc (Exception) – Arbitrary exception thrown during the update loop.

  • tb (traceback) – Traceback object of the exception.


serialize(serializer)[source]

Serializes the extension state.

It is called when a trainer that owns this extension is serialized. It serializes nothing by default.



default_name = 'validation'
name = None
priority = 300
trigger = (1, 'epoch')