chainer.dataset.ConcatWithAsyncTransfer

class chainer.dataset.ConcatWithAsyncTransfer(stream=None, compute_stream=None)[source]

Interface to concatenate data and transfer them to the GPU asynchronously.

It enables the next batch of input data to be transferred to the GPU while the GPU is still running training kernels on the current batch.
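The overlap can be pictured with a CPU-only double-buffering sketch (hypothetical helper names; the real class overlaps host-to-GPU copies with kernels using CUDA streams, not threads):

```python
import threading
import queue

def prefetch_batches(load_batch, num_batches):
    """Double-buffering sketch: a background thread stages the next
    batch while the caller consumes the current one -- a stand-in for
    launching the next host-to-GPU copy on a side stream while the
    compute stream works on the current batch."""
    q = queue.Queue(maxsize=1)  # at most one batch "in flight"

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))   # analogous to the async transfer
        q.put(None)                # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is None:
            break
        yield batch                # analogous to computing on this batch

results = list(prefetch_batches(lambda i: [i, i + 1], 3))
```

This is only an analogy for the scheduling pattern; the actual implementation keeps the data path on the GPU driver's stream machinery.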

An instance of this class is mainly intended to be used as the converter function of an updater, as shown below.

from chainer.dataset import convert
...
updater = chainer.training.updaters.StandardUpdater(
               ...,
               converter=convert.ConcatWithAsyncTransfer(),
               ...)
Parameters
  • stream (cupy.cuda.Stream) – CUDA stream. If None, a stream is created automatically on the first call. The data transfer operation is launched asynchronously on this stream.

  • compute_stream (cupy.cuda.Stream) – CUDA stream used for compute kernels. If not None, CUDA events are used to avoid global synchronization and to overlap the execution of compute kernels and data transfers as much as possible. If None, global synchronization is used instead.
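The difference between event-based and global synchronization can be sketched with a CPU analogue, using threading.Event in place of a CUDA event (hypothetical names; real code would use cupy.cuda.Event and stream.wait_event):

```python
import threading

# CPU analogue of event-based synchronization: the transfer "stream"
# records an event when the copy finishes, and the compute "stream"
# waits on that single event rather than on a device-wide barrier
# (the global-synchronization fallback used when compute_stream is None).
transfer_done = threading.Event()
staging = {}

def transfer(batch):
    staging["batch"] = list(batch)  # stand-in for the async H2D copy
    transfer_done.set()             # like recording a CUDA event

def compute():
    transfer_done.wait()            # like compute_stream.wait_event(event)
    return sum(staging["batch"])

t = threading.Thread(target=transfer, args=([1, 2, 3],))
t.start()
result = compute()
t.join()
```

Waiting on one event lets unrelated work on other streams continue, which is why passing compute_stream allows more overlap than the global-synchronization fallback.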

Methods

__call__(batch, device=None, padding=None)[source]

Concatenate data and transfer them to the GPU asynchronously.

See also chainer.dataset.concat_examples().

Parameters
  • batch (list) – A list of examples.

  • device (int) – Device ID to which each array is sent.

  • padding – Scalar value for extra elements.

Returns

Array, a tuple of arrays, or a dictionary of arrays. The type depends on the type of each example in the batch.
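The return-type rule can be illustrated with a minimal pure-Python sketch (a hypothetical stand-in for chainer.dataset.concat_examples, with padding and device transfer omitted and plain lists standing in for arrays):

```python
def concat_sketch(batch):
    """Sketch of the return-type dispatch: tuple examples produce a
    tuple of batched columns, dict examples produce a dict of batched
    columns, and plain examples produce a single batched column."""
    first = batch[0]
    if isinstance(first, tuple):
        # one batched column per tuple position
        return tuple([ex[i] for ex in batch] for i in range(len(first)))
    if isinstance(first, dict):
        # one batched column per key
        return {key: [ex[key] for ex in batch] for key in first}
    return list(batch)

pairs = concat_sketch([(1, 2), (3, 4)])       # tuple examples
maps = concat_sketch([{"x": 1}, {"x": 2}])    # dict examples
```

So a dataset of (image, label) tuples yields a (batched_images, batched_labels) tuple, matching the behavior documented for chainer.dataset.concat_examples().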

__eq__(value, /)

Return self==value.

__ne__(value, /)

Return self!=value.

__lt__(value, /)

Return self<value.

__le__(value, /)

Return self<=value.

__gt__(value, /)

Return self>value.

__ge__(value, /)

Return self>=value.