Make a context manager which disables back-propagation.
In this context, Chainer does not build a computational graph, which has the benefit of reducing memory consumption. However, a Variable created in this context does not hold a reference to the FunctionNode that created it, so no gradients are accumulated by Variable.backward().
In the following example, y is created in this context, which means that calling y.backward() has no effect on the gradients of x.
>>> x = chainer.Variable(np.array([1,], np.float32))
>>> with chainer.no_backprop_mode():
...     y = x + 1
>>> y.backward()
>>> x.grad is None
True
See chainer.force_backprop_mode() for details on how to override this context.