MultiprocessIterator(dataset, batch_size, repeat=True, shuffle=True, n_processes=None, n_prefetch=1, shared_mem=None)
Dataset iterator that loads examples in parallel.
This is an implementation of Iterator that loads examples with worker processes. It uses the standard multiprocessing module to parallelize the loading. The dataset is sent to the worker processes in the standard way using pickle.
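Because the dataset reaches the workers via pickle, it must be picklable. A minimal sketch of that requirement, using an illustrative toy dataset (not part of the Chainer API):

```python
import pickle

# Any dataset handed to the iterator must survive a pickle round trip,
# since that is how it is sent to the worker processes.
dataset = [(i, i * i) for i in range(5)]  # illustrative toy dataset
restored = pickle.loads(pickle.dumps(dataset))
assert restored == dataset  # picklable, so workers can receive it
```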
Note that this iterator effectively prefetches the examples for the next batch asynchronously after the current batch is returned.
This iterator saves -1 instead of None in snapshots since some serializers do not support None.
Parameters:
- dataset (Dataset) – Dataset to iterate.
- batch_size (int) – Number of examples within each batch.
- repeat (bool) – If True, it infinitely loops over the dataset. Otherwise, it stops iteration at the end of the first epoch.
- shuffle (bool) – If True, the order of examples is shuffled at the beginning of each epoch. Otherwise, examples are extracted in the order of indexes.
- n_processes (int) – Number of worker processes. The number of CPUs is used by default.
- n_prefetch (int) – Number of prefetch batches.
- shared_mem (int) – The size of shared memory per data element. If None, the size is adjusted automatically.
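The batching semantics described by the batch_size, repeat, and shuffle parameters can be sketched serially in pure Python. This is a conceptual sketch only: the real MultiprocessIterator additionally loads examples in worker processes and prefetches the next batch, and `simple_iterator` is a hypothetical name, not part of the Chainer API.

```python
import random

def simple_iterator(dataset, batch_size, repeat=True, shuffle=True):
    # Serial sketch of the iteration semantics; parallel loading and
    # asynchronous prefetching are omitted.
    order = list(range(len(dataset)))
    while True:
        if shuffle:
            random.shuffle(order)  # new order at the start of each epoch
        for start in range(0, len(order), batch_size):
            yield [dataset[i] for i in order[start:start + batch_size]]
        if not repeat:
            return  # stop at the end of the first epoch when repeat=False

# With repeat=False and shuffle=False, batches come out in index order:
batches = list(simple_iterator([10, 11, 12, 13, 14], 2, repeat=False, shuffle=False))
# batches == [[10, 11], [12, 13], [14]]
```

Note that the sketch simply reshuffles at each epoch boundary; it does not reproduce every detail of the real iterator's behavior across epochs.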