- chainerx.backward(outputs, *, enable_double_backprop=False)
During backpropagation (a.k.a. backprop), the computational graph is traversed backward starting from the output arrays, up until the root arrays on which ndarray.require_grad() has been called.
Backpropagation uses ndarray.grad held by the output arrays as the initial gradients. You can manually assign them before calling this function; otherwise, they are assumed to be 1.
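As a minimal sketch of the default case (the array values are illustrative assumptions, not part of this documentation), the initial gradient is simply left unset:

```python
import chainerx as chx

# Mark x as a root array so that backpropagation stops there and stores x.grad.
x = chx.array([1.0, 2.0, 3.0]).require_grad()
y = (x * x).sum()

# y.grad is not assigned here, so the initial gradient is assumed to be 1.
chx.backward(y)
print(x.grad)  # dy/dx = 2 * x -> [2. 4. 6.]
```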
To enable higher-order differentiation, pass enable_double_backprop=True so that you can further run backpropagation from the resulting gradient arrays. Note that enabling it increases memory consumption, because the gradients w.r.t. intermediate arrays needed for the second gradient computation must be stored.
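For example, a double-backprop sketch along these lines should yield the second derivative (the cubic function and values are assumed for illustration):

```python
import chainerx as chx

x = chx.array([3.0]).require_grad()
y = x * x * x  # y = x**3

# Record the backward computation itself so it can be differentiated again.
chx.backward(y, enable_double_backprop=True)

gx = x.grad      # dy/dx = 3 * x**2 -> [27.]
x.cleargrad()    # clear the first-order gradient before the second pass
chx.backward(gx)
print(x.grad)    # d2y/dx2 = 6 * x -> [18.]
```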
The whole process of backpropagation is executed in C++, except those operations whose backward computation falls back to the corresponding Python implementation. Currently this function does not release the GIL at all.
Parameters:
- outputs (ndarray or list of ndarrays) – Output arrays from which backpropagation starts.
- enable_double_backprop (bool) – If True, a computational trace of the whole backpropagation procedure is recorded to the computational graph so that one can further do backpropagation from the resulting gradients.
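Since outputs may also be a list, backpropagation can start from several arrays at once; the sketch below (again with assumed values) accumulates the gradients from both outputs into x.grad:

```python
import chainerx as chx

x = chx.array([1.0, 2.0]).require_grad()
y1 = (x * 2.0).sum()
y2 = (x * 3.0).sum()

# Both outputs use the default initial gradient of 1.
chx.backward([y1, y2])
print(x.grad)  # 2.0 + 3.0 accumulated -> [5. 5.]
```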