chainer.gradient_check.numerical_grad(f, inputs, grad_outputs, eps=0.001, detect_nondifferentiable=False, diff_atol=0, diff_rtol=0.01, center_outputs=None)

Computes the numerical gradient by finite differences.

This function is used to implement gradient checks. For usage examples, see the unit tests of chainer.functions.

By default, numerical_grad computes the gradient to the first order of eps.
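To illustrate the idea, here is a minimal pure-NumPy sketch of a central-difference gradient: each input element is perturbed by ±eps, and the output difference is weighted by grad_outputs. This is an illustration only, not Chainer's actual implementation, and the helper name numerical_grad_sketch is made up:

```python
import numpy as np

def numerical_grad_sketch(f, inputs, grad_outputs, eps=1e-3):
    # Hypothetical re-implementation for illustration only.
    grads = tuple(np.zeros_like(x) for x in inputs)
    for x, gx in zip(inputs, grads):
        flat_x = x.ravel()   # view into x: writes below perturb x in place
        flat_g = gx.ravel()
        for i in range(flat_x.size):
            orig = flat_x[i]
            flat_x[i] = orig + eps
            y_pos = f()                  # forward pass at x + eps (tuple of outputs)
            flat_x[i] = orig - eps
            y_neg = f()                  # forward pass at x - eps
            flat_x[i] = orig             # restore the original value
            # Weight the output differences by grad_outputs
            # (a finite-difference vector-Jacobian product).
            flat_g[i] = sum(
                ((yp - yn) * gy).sum()
                for yp, yn, gy in zip(y_pos, y_neg, grad_outputs)
            ) / (2 * eps)
    return grads

x = np.array([1.0, 2.0, 3.0])
gy = np.ones_like(x)
gx, = numerical_grad_sketch(lambda: (x * x,), (x,), (gy,))
# gx is close to 2 * x, the analytic gradient of x**2
```

Note that f takes no arguments: it must close over the input arrays, because the checker perturbs them in place between calls.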

Parameters
• f (callable) – Python function with no arguments that runs the forward computation and returns the result.

• inputs (tuple of arrays) – Tuple of arrays that should be treated as inputs. Each element is slightly perturbed to compute the numerical gradient by finite differences.

• grad_outputs (tuple of arrays or scalars) – Tuple of arrays or scalars that are treated as output gradients.

• eps (float) – Epsilon value for the finite differences.

• detect_nondifferentiable (bool) – False by default. If True, numerical_grad checks whether f is differentiable at inputs. This requires evaluating f at 5 points instead of 2. As a side effect, the accuracy of the numerical gradient is increased to the third order of eps. If f turns out to be non-differentiable at inputs, numerical_grad raises NondifferentiableError.

• diff_atol (float) – Absolute tolerance on the fitting error used in non-differentiable point detection.

• diff_rtol (float) – Tolerance on the fitting error used in non-differentiable point detection, relative to the output values of f.

• center_outputs (tuple of arrays or None) – Only used if detect_nondifferentiable is True. If specified, these arrays are used as the outputs of f at inputs; otherwise, they are computed by calling f. Supplying them avoids recomputation when the outputs are already available before calling numerical_grad.
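The role of the fitting-error tolerances can be pictured with a small sketch: sample f around a point, fit a low-order polynomial, and flag the point when the fitting error exceeds an absolute-plus-relative threshold. This is only an illustration of the idea behind diff_atol and diff_rtol, not Chainer's actual detection algorithm; looks_nondifferentiable is a hypothetical helper:

```python
import numpy as np

def looks_nondifferentiable(f, x, eps=1e-3, atol=1e-6, rtol=1e-2):
    # Hypothetical detector (illustration only): sample f at five points
    # around x and fit a quadratic in the offset. A smooth function is
    # approximated well locally; a large residual suggests a kink at x.
    offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) * eps
    ys = np.array([f(x + o) for o in offsets])
    _, residuals, *_ = np.polyfit(offsets, ys, 2, full=True)
    err = np.sqrt(residuals[0]) if len(residuals) else 0.0
    # Compare the fitting error against absolute and relative tolerances,
    # analogous in spirit to diff_atol and diff_rtol.
    return err > atol + rtol * np.abs(ys).max()

print(looks_nondifferentiable(abs, 0.0))              # kink at 0
print(looks_nondifferentiable(lambda v: v * v, 0.0))  # smooth everywhere
```

With both tolerances at zero, any nonzero fitting error (including plain floating-point noise) would trip the check, which is why a relative tolerance such as the default diff_rtol=0.01 is useful in practice.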

Returns

Numerical gradient arrays corresponding to inputs.

Return type

tuple