chainer.gradient_check.numerical_grad

chainer.gradient_check.numerical_grad(f, inputs, grad_outputs, eps=0.001, detect_nondifferentiable=False, diff_atol=0, diff_rtol=0.01, center_outputs=None)

Computes the numerical gradient by finite differences.

This function is used to implement gradient checks. For usage examples, see the unit tests of chainer.functions. By default, numerical_grad computes the gradient to the first order of eps.

Parameters

- f (callable) – Python function with no arguments that runs the forward computation and returns the result.
- inputs (tuple of arrays) – Tuple of arrays that should be treated as inputs. Each element is slightly perturbed to compute the numerical gradient by finite differences.
- grad_outputs (tuple of arrays or scalars) – Tuple of arrays or scalars that are treated as output gradients.
- eps (float) – Epsilon value of the finite differences.
- detect_nondifferentiable (bool) – False by default. If True, numerical_grad checks whether f is differentiable at inputs. This requires evaluating f at 5 points instead of 2. As a side effect, the accuracy of the numerical gradient is increased to the third order of eps. If f turns out to be non-differentiable at inputs, numerical_grad raises NondifferentiableError.
- diff_atol (float) – Absolute tolerance of the fitting error in non-differentiable point detection.
- diff_rtol (float) – Tolerance of the fitting error in non-differentiable point detection, relative to the output values of f.
- center_outputs (tuple of arrays or None) – Only used if detect_nondifferentiable is True. If specified, these arrays are used as the outputs of f at inputs; otherwise they are computed. This can be used to avoid recomputation if the outputs have already been calculated before calling numerical_grad.

Returns

Numerical gradient arrays corresponding to inputs.

Return type

tuple
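The default two-point scheme described above (perturb each input element by ±eps, evaluate f at both points, and weight the output differences by grad_outputs) can be sketched in plain NumPy. The function name numerical_grad_sketch is hypothetical; this is an illustration of the finite-difference idea following the documented interface, not Chainer's actual implementation.

```python
import numpy as np


def numerical_grad_sketch(f, inputs, grad_outputs, eps=1e-3):
    """Illustrative central-difference gradient, mimicking the documented
    interface: f is a zero-argument callable returning a tuple of output
    arrays, and it must read the arrays in `inputs` on every call so that
    in-place perturbations are visible to it.
    """
    grads = [np.zeros_like(x) for x in inputs]
    for x, gx in zip(inputs, grads):
        # ravel() returns a view for contiguous arrays, so writing to
        # flat_x perturbs x in place (sketch assumes contiguous inputs).
        flat_x = x.ravel()
        flat_g = gx.ravel()
        for i in range(flat_x.size):
            orig = flat_x[i]
            flat_x[i] = orig + eps
            ys_pos = f()            # forward pass at x + eps
            flat_x[i] = orig - eps
            ys_neg = f()            # forward pass at x - eps
            flat_x[i] = orig        # restore the input
            # Accumulate sum_j gy_j * (y_j(x+eps) - y_j(x-eps)) / (2*eps)
            for yp, yn, gy in zip(ys_pos, ys_neg, grad_outputs):
                flat_g[i] += np.sum(gy * (yp - yn)) / (2 * eps)
    return tuple(grads)


# Usage: gradient of y = x**2 with upstream gradient of ones; the
# analytic gradient is 2*x.
x = np.array([1.0, 2.0, 3.0])
gy = np.ones(3)
gx, = numerical_grad_sketch(lambda: (x ** 2,), (x,), (gy,))
```

Because f takes no arguments, the sketch (like numerical_grad itself) relies on mutating the input arrays in place between forward passes, which is why each perturbed element must be restored before moving on.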