# chainer.Function

class chainer.Function[source]

Old-style interface of a differentiable function.

This class provides an interface to implement an old-style differentiable function (i.e., a function whose application is recorded to the computational graph). A subclass of Function that implements forward() and backward() can be used to run the forward computation and automatically derive the backpropagation procedure.

There is another way to implement such a function: subclassing FunctionNode. There are two main differences between them.

1. Differentiable backprop is available with FunctionNode but not with Function, because backward() of the latter operates directly on arrays instead of Variable objects, so it cannot record the history of the computation.

2. The information passed to backward() differs. A FunctionNode is told which inputs it has to compute gradients w.r.t., so it can omit unnecessary computations, while a Function always has to compute gradients w.r.t. all input nodes. FunctionNode also receives the current gradient values of the input nodes, so the accumulation work can be merged with the gradient computation when an efficient kernel is available.

This class uses FunctionAdapter to convert the interface to that of FunctionNode and adds the FunctionNode object to the computational graph.

See FunctionNode for the details of building the computational graph in Chainer.

Methods

__call__(*inputs)[source]

Applies forward propagation with chaining backward references.

This method creates a new FunctionAdapter object and runs the forward propagation using it.

See FunctionNode for the detailed behavior of building the computational graph.

Parameters

inputs – Tuple of input Variable or N-dimensional array objects. If an input is an N-dimensional array, it is automatically wrapped with Variable.

Returns

One Variable object or a tuple of multiple Variable objects.

add_hook(hook, name=None)[source]

Registers a function hook.

See FunctionNode.add_hook() for the detail.

Parameters
• hook (FunctionHook) – Function hook to be registered.

• name (str) – Name of the function hook. name must be unique among function hooks registered to the function. If None, default name of the function hook is used.

backward(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays.

It delegates the procedure to backward_cpu() or backward_gpu() by default. Which one it selects is determined by the type of the input arrays and the output gradient arrays. Implementations of Function must override either the CPU/GPU methods or this method if the function is intended to be backpropagated through.

Parameters
• inputs – Tuple of input arrays.

• grad_outputs – Tuple of output gradient arrays.

Returns

Tuple of input gradient arrays. Some or all of them can be None, if the function is not differentiable on inputs.

Return type

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

backward_cpu(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays on CPU.

Parameters
• inputs – Tuple of input numpy.ndarray object(s).

• grad_outputs – Tuple of output gradient numpy.ndarray object(s).

Returns

Tuple of input gradient numpy.ndarray object(s). Some or all of them can be None, if the function is not differentiable on corresponding inputs.

Return type

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

backward_gpu(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays on GPU.

Parameters
• inputs – Tuple of input cupy.ndarray object(s).

• grad_outputs – Tuple of output gradient cupy.ndarray object(s).

Returns

Tuple of input gradient cupy.ndarray object(s). Some or all of them can be None, if the function is not differentiable on corresponding inputs.

Return type

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

check_type_forward(in_types)[source]

Checks types of input data before forward propagation.

This method is called before forward() is executed. You need to validate the types of the input data in this method using the type checking utilities.

Parameters

in_types (TypeInfoTuple) – The type information of input data for forward().

delete_hook(name)[source]

Unregisters the specified function hook.

Parameters

name (str) – the name of the function hook to be unregistered.

forward(inputs)[source]

Applies forward propagation to input arrays.

It delegates the procedure to forward_cpu() or forward_gpu() by default. Which one it selects is determined by the type of the input arrays. Implementations of Function must override either the CPU/GPU methods or this method.

Parameters

inputs – Tuple of input array(s).

Returns

Tuple of output array(s).

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

forward_cpu(inputs)[source]

Applies forward propagation to input arrays on CPU.

Parameters

inputs – Tuple of numpy.ndarray object(s).

Returns

Tuple of numpy.ndarray object(s).

Return type

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

forward_gpu(inputs)[source]

Applies forward propagation to input arrays on GPU.

Parameters

inputs – Tuple of cupy.ndarray object(s).

Returns

Tuple of cupy.ndarray object(s).

Return type

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

retain_inputs(indexes)[source]

Lets specified input variable nodes keep data arrays.

By calling this method from forward(), the function can specify which inputs are required for backprop.

If this method is not called, the function keeps all input arrays. If you want to release all input arrays, call this method by passing an empty sequence. Note that this behavior is different from that of FunctionNode.retain_inputs().

Note that this method must not be called from outside of forward().

Parameters

indexes (iterable of int) – Indexes of input variables that the function will require for backprop.

retain_outputs(indexes, retain_after_backward=False)[source]

Lets specified output variable nodes keep data arrays.

By calling this method from forward(), the function can specify which outputs are required for backprop. If this method is not called, no output variable is marked to keep its data array at the point of returning from __call__(). The retained arrays are stored to output_data.

Note

It is STRONGLY RECOMMENDED that you use this method if the function requires some or all output arrays in backprop. The function could instead keep direct references to the output arrays, but doing so may degrade the performance of later function applications to the output variables.

Note that this method must not be called from outside of forward().

Parameters
• indexes (iterable of int) – Indexes of output variables that the function will require for backprop.

• retain_after_backward (bool) – This option has no effect. It is left only for the backward compatibility.

unchain()[source]

Purges in/out nodes and this function itself from the graph.

See FunctionNode.unchain() for the detail.

__eq__()

Return self==value.

__ne__()

Return self!=value.

__lt__()

Return self<value.

__le__()

Return self<=value.

__gt__()

Return self>value.

__ge__()

Return self>=value.

Attributes

inputs

The input nodes of the function.

label

Short text that represents the function.

The default implementation returns its type name. Each function should override it to give more information.

local_function_hooks

Ordered Dictionary of registered function hooks.

See FunctionNode.local_function_hooks for the detail.

node

The FunctionAdapter object that wraps this Function.

If the Function does not have a node object, this property automatically creates a new one.

output_data

A tuple of the retained output arrays.

It has the same length as the outputs. Elements that are not retained are set to None.

outputs

Weak references to the output nodes of the function.

rank

The topological ordinal of the corresponding function node.

stack