chainer.training.updaters.MultiprocessParallelUpdater

class chainer.training.updaters.MultiprocessParallelUpdater(iterators, optimizer, converter=<function concat_examples>, devices=None)[source]

Implementation of a multiprocess parallel GPU Updater.

This is an implementation of Updater that uses multiple GPUs with multi-process data parallelism. It uses NVIDIA NCCL for communication between the GPUs.

It behaves similarly to StandardUpdater. The update routine is modified to support data-parallel computation on multiple GPUs in one machine. It is based on synchronous parallel SGD: it parallelizes the gradient computation over a mini-batch and updates the parameters only on the main device.

Note that this updater does not transfer the values collected by Reporter on the sub devices to the main device, so the reported values are visible only on the main device. A typical setup is sketched after the parameter list below.

Parameters:
  • iterators – List of dataset iterators for the training dataset. The number of iterators must equal the number of GPUs you use.
  • optimizer – Optimizer to update parameters. The model should be attached to the optimizer.
  • converter – Converter function to build input arrays. Each batch extracted by the iterators is split equally among the devices and then passed with the corresponding device option to this function. concat_examples() is used by default.
  • devices – Dictionary or list of devices to which the training data is sent. The master device will be the first one in the list or the value attached to the key 'main'.
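For example, the following minimal sketch trains on two GPUs. The MLP model, batch size, and epoch count are placeholders; split_dataset_n_random() gives each iterator a disjoint share of the training data, and device 0 (first in the devices list) acts as the main device.

import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training

class MLP(chainer.Chain):
    """Placeholder two-layer perceptron; any Link works here."""
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)
            self.l2 = L.Linear(None, 10)

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

train, _ = chainer.datasets.get_mnist()
model = L.Classifier(MLP())
optimizer = chainer.optimizers.MomentumSGD()
optimizer.setup(model)

# One independent iterator per GPU, over disjoint random splits of
# the training set.
train_iters = [
    chainer.iterators.SerialIterator(sub, 64)
    for sub in chainer.datasets.split_dataset_n_random(train, 2)
]

# Device 0 is the main device; parameters are updated there.
updater = training.updaters.MultiprocessParallelUpdater(
    train_iters, optimizer, devices=[0, 1])
trainer = training.Trainer(updater, (20, 'epoch'))
trainer.run()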

Methods

static available()[source]
connect_trainer(trainer)[source]

Connects the updater to the trainer that will call it.

The typical usage of this method is to register additional links to the reporter of the trainer. This method is called at the end of the initialization of Trainer. The default implementation does nothing.

Parameters: trainer (Trainer) – Trainer object to which the updater is registered.
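As a sketch, a hypothetical subclass might override this method to register an extra link so that its reported values appear in the training log. The subclass and its aux_model attribute are illustrative, not part of the API; Reporter.add_observer() is the standard registration call.

from chainer.training.updaters import MultiprocessParallelUpdater

class MyUpdater(MultiprocessParallelUpdater):
    def connect_trainer(self, trainer):
        super(MyUpdater, self).connect_trainer(trainer)
        # aux_model is a hypothetical extra link held by this updater;
        # registering it makes its reported values appear under 'aux'.
        trainer.reporter.add_observer('aux', self.aux_model)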
finalize()[source]
get_all_optimizers()[source]

Gets a dictionary of all optimizers for this updater.

Returns: Dictionary that maps names to optimizers.
Return type: dict
get_iterator(name)[source]

Gets the dataset iterator of the given name.

Parameters: name (str) – Name of the dataset iterator.
Returns: Corresponding dataset iterator.
Return type: Iterator
get_optimizer(name)[source]

Gets the optimizer of the given name.

Parameters: name (str) – Name of the optimizer.
Returns: Corresponding optimizer.
Return type: Optimizer
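Assuming the updater built in the example above, the single optimizer and the first iterator are registered under the name 'main', following StandardUpdater's convention:

opts = updater.get_all_optimizers()  # {'main': <MomentumSGD object>}
opt = updater.get_optimizer('main')  # the optimizer passed at construction
it = updater.get_iterator('main')    # the first training iterator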
serialize(serializer)[source]

Serializes the current state of the updater object.
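This method is usually invoked indirectly, e.g. by the snapshot extension or the serializer helpers; a minimal sketch (the file name is arbitrary):

from chainer import serializers

# Both calls go through updater.serialize() with the matching
# (de)serializer object.
serializers.save_npz('updater.npz', updater)
serializers.load_npz('updater.npz', updater)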

setup_workers()[source]
update()[source]

Updates the parameters of the target model.

This method implements an update formula for the training task, including data loading, forward/backward computations, and actual updates of parameters.

This method is called once at each iteration of the training loop.
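Trainer normally drives this method, but a manual loop is equivalent; a sketch, assuming the updater from the example above and an arbitrary stopping condition:

max_epoch = 20  # arbitrary stopping condition
while updater.epoch < max_epoch:
    updater.update()  # one synchronous data-parallel SGD step
updater.finalize()    # shut down the worker processes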

update_core()[source]

Attributes

epoch
epoch_detail
is_new_epoch
previous_epoch_detail
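
These follow the standard StandardUpdater semantics; for illustration (the comments describe typical values, assuming that convention):

updater.epoch                  # int: number of completed epochs
updater.epoch_detail           # float: fractional epoch progress, e.g. 2.5
updater.is_new_epoch           # bool: True right after an epoch boundary
updater.previous_epoch_detail  # float: epoch_detail at the previous update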