chainer.no_backprop_mode¶
- chainer.no_backprop_mode()[source]¶
Make a context manager which disables back-propagation.
In this context, Chainer does not build a computational graph, which has the benefit of reducing memory consumption. However, a Variable created in this context does not hold a reference to the FunctionNode that created it, so no gradients are accumulated by backward().

In the following example, y is created in this context, which means that calling backward() on y has no effect on the gradients of x.

>>> x = chainer.Variable(np.array([1,], np.float32))
>>> with chainer.no_backprop_mode():
...     y = x + 1
>>> y.backward()
>>> x.grad is None
True
Note

chainer.no_backprop_mode() implicitly applies ChainerX's counterpart chainerx.no_backprop_mode(), but not vice versa. Also, setting the enable_backprop configuration does not affect ChainerX.

See also
See chainer.force_backprop_mode() for details on how to override this context.