chainer.no_backprop_mode

chainer.no_backprop_mode()

Make a context manager that disables back-propagation.

Within this context, Chainer does not build a computational graph, which reduces memory consumption. However, a Variable created in this context does not hold a reference to the FunctionNode that created it, so calling backward() accumulates no gradients.

In the following example, y is created inside this context, so calling backward() on y has no effect on the gradient of x.

>>> import numpy as np
>>> import chainer
>>> x = chainer.Variable(np.array([1,], np.float32))
>>> with chainer.no_backprop_mode():
...     y = x + 1
>>> y.backward()
>>> x.grad is None
True
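
Because graph construction was disabled, y also records no creator. As a minimal check, assuming the x and y from the example above, the creator attribute (which holds the FunctionNode that produced a variable, or None) can be inspected directly:

>>> y.creator is None  # no FunctionNode was recorded for y
True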

Note

chainer.no_backprop_mode() implicitly applies its ChainerX counterpart chainerx.no_backprop_mode(), but not vice versa. Also, setting the enable_backprop configuration does not affect ChainerX.
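
As a rough sketch of the last point, assuming a NumPy-backed Variable: setting enable_backprop via chainer.using_config() likewise suppresses graph construction on the Chainer side, but, unlike no_backprop_mode(), it would not affect ChainerX arrays.

>>> x = chainer.Variable(np.array([1.], np.float32))
>>> with chainer.using_config('enable_backprop', False):
...     y = x + 1
>>> y.creator is None  # graph construction was suppressed here too
True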

See also

See chainer.force_backprop_mode() for details on how to override this context.
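
For illustration, a minimal sketch of such an override: wrapping a computation in chainer.force_backprop_mode() inside a no-backprop context re-enables graph construction for that inner block, so backward() works as usual.

>>> x = chainer.Variable(np.array([1.], np.float32))
>>> with chainer.no_backprop_mode():
...     with chainer.force_backprop_mode():
...         y = x + 1
>>> y.backward()
>>> x.grad
array([1.], dtype=float32)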