- chainer.gradient_check.numerical_grad(f, inputs, grad_outputs, eps=0.001, detect_nondifferentiable=False, diff_atol=0, diff_rtol=0.01, center_outputs=None)
Computes the numerical gradient by finite differences.

This function is used to implement gradient check. For a usage example, see the unit tests of the chainer.gradient_check module.

numerical_grad computes the gradient to the first order of eps.
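The central-difference scheme underlying this computation can be sketched in plain NumPy. This is a hypothetical re-implementation for illustration only, not Chainer's actual code: the name numerical_grad_sketch is invented, and it assumes contiguous float arrays and a zero-argument f that closes over the inputs and returns a tuple of arrays.

```python
import numpy as np

def numerical_grad_sketch(f, inputs, grad_outputs, eps=1e-3):
    """Central-difference numerical gradient (illustrative sketch)."""
    grads = tuple(np.zeros_like(x) for x in inputs)
    for x, gx in zip(inputs, grads):
        flat_x = x.ravel()  # view: mutating it mutates x (assumes contiguity)
        flat_g = gx.ravel()
        for i in range(flat_x.size):
            orig = flat_x[i]
            flat_x[i] = orig + eps
            ys_pos = f()              # forward pass at x + eps * e_i
            flat_x[i] = orig - eps
            ys_neg = f()              # forward pass at x - eps * e_i
            flat_x[i] = orig          # restore the input
            # accumulate (f(x+eps) - f(x-eps)) / (2*eps), weighted
            # by the given output gradients
            flat_g[i] = sum(
                np.sum((yp - yn) * gy) / (2 * eps)
                for yp, yn, gy in zip(ys_pos, ys_neg, grad_outputs)
            )
    return grads
```

For example, for f(x) = x**2 with unit output gradients, the result approximates 2*x.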
- f (callable) – Python function with no arguments that runs the forward computation and returns the result.
- inputs (tuple of arrays) – Tuple of arrays that should be treated as inputs. Each element is slightly modified to realize the numerical gradient by finite differences.
- grad_outputs (tuple of arrays or scalars) – Tuple of arrays or scalars that are treated as output gradients.
- eps (float) – Epsilon value of finite differences.
- detect_nondifferentiable (bool) – False by default. If True, numerical_grad checks whether f is differentiable at inputs. It requires evaluation of f at 5 points instead of 2. As a side effect, the accuracy of the numerical gradient is increased to the third order of eps. If it turns out that f is non-differentiable at inputs, numerical_grad raises NondifferentiableError.
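Chainer's exact detection algorithm is not reproduced here, but the idea behind the 5-point evaluation can be sketched as follows: sample f at five symmetric points around the input and fit a low-degree polynomial; a differentiable function fits well, while a kink leaves a large residual that can be compared against absolute and relative thresholds (analogous in spirit to diff_atol and diff_rtol). The function name and the exact fitting procedure below are assumptions for illustration.

```python
import numpy as np

def looks_nondifferentiable(f, x0, eps=1e-3, atol=1e-8, rtol=1e-2):
    # Sample f at five symmetric points around x0.
    ts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    ys = np.array([f(x0 + t * eps) for t in ts])
    # Least-squares quadratic fit: a smooth f is locally well
    # approximated by a quadratic, so the residual stays tiny;
    # a kink (e.g. abs at 0) leaves a large residual.
    _, residuals, *_ = np.polyfit(ts, ys, 2, full=True)
    err = np.sqrt(residuals[0]) if residuals.size else 0.0
    # Absolute + relative thresholds, in the spirit of
    # diff_atol / diff_rtol.
    return err > atol + rtol * np.abs(ys).max()
```

With this sketch, abs is flagged as non-differentiable at 0, while a smooth quadratic at any point is not.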
- diff_atol (float) – Absolute tolerance of the fitting error of non-differentiable point detection.
- diff_rtol (float) – Tolerance of the fitting error of non-differentiable point detection relative to the output values of f.
- center_outputs (tuple of arrays or None) – Only used if detect_nondifferentiable is True. If specified, these arrays are used as the outputs of f at inputs; otherwise, they are calculated. It can be used to reduce the computation if these arrays are already calculated before calling numerical_grad.
- Returns
Numerical gradient arrays corresponding to inputs.
- Return type
tuple
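As a usage sketch of gradient checking in general (plain NumPy, not the Chainer API; sigmoid and the tolerance are illustrative choices), one compares an analytically derived gradient against a central-difference estimate:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Analytic derivative: s * (1 - s).
    s = sigmoid(x)
    return s * (1.0 - s)

def check_gradient(f, df, x, eps=1e-3, atol=1e-5):
    """Return True if the analytic gradient df matches a
    central-difference estimate of f's gradient at x."""
    numerical = (f(x + eps) - f(x - eps)) / (2.0 * eps)
    return np.allclose(numerical, df(x), atol=atol)
```

check_gradient(sigmoid, sigmoid_grad, np.linspace(-2, 2, 9)) passes, while a wrong analytic gradient fails the check.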