Make a context manager which disables back-propagation.

In this context, Chainer does not build a computational graph, which reduces memory consumption. However, a Variable created in this context does not hold a reference to the FunctionNode that created it, so backward() accumulates no gradients into its inputs.

In the following example, y is created in this context, which means that calling backward() on y has no effect on the gradients of x.

>>> x = chainer.Variable(np.array([1,], 'f'))
>>> with chainer.no_backprop_mode():
...     y = x + 1
>>> y.backward()
>>> x.grad is None
True

See also

See force_backprop_mode() for details on how to override this context.