chainer.Function

class chainer.Function[source]

Function on variables with backpropagation ability.

All function implementations defined in chainer.functions inherit this class.

The main feature of this class is keeping track of function applications as a backward graph. When a function is applied to Variable objects, its forward() method is called on data fields of input variables, and at the same time it chains references from output variable nodes to the function and from the function to its input nodes.

Note

As of v2.0, the input/output variables and their corresponding variable nodes in the graph are distinguished. Function acts like a function on Variable objects that returns Variable objects as outputs, but these objects do not appear directly in the graph. Instead, their corresponding VariableNode objects are inserted into the graph.

Note

As of v1.5, a function instance cannot be used more than once in any computational graph. In order to reuse a function object multiple times, use copy.copy() before the function applications to make a copy of the instance (see the end of the example below).

This restriction also means that we cannot make a stateful function anymore. For example, it is now not allowed to let a function hold parameters. Define a function as a pure (stateless) procedure, and use Link to combine it with parameter variables.

Example

Let x be an instance of Variable and f be an instance of Function that takes only one argument. Then the following code

>>> import numpy, chainer, chainer.functions as F
>>> x = chainer.Variable(numpy.zeros(10, dtype=numpy.float32))
>>> f = F.Identity()
>>> y = f(x)

computes a new variable y and creates backward references. The backward references are set up as shown in the following diagram:

x.node <--- f <--- y.node

If an application of another function g occurs as

>>> g = F.Identity()
>>> z = g(x)

then the graph grows with a branch:

         |--- f <--- y.node
x.node <-+
         |--- g <--- z.node

Note that the branching is correctly managed on backward computation, i.e., the gradients from f and g are accumulated into the gradient of x.
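
To apply the same function object twice, the note above says to copy it before the applications. Continuing this session, a minimal sketch (the names h, u and v are introduced here only for illustration):

>>> import copy
>>> h = F.Identity()
>>> h_copy = copy.copy(h)    # make the copy before any application of h
>>> u = h(x)
>>> v = h_copy(x)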

Every function implementation should provide forward_cpu(), forward_gpu(), backward_cpu() and backward_gpu(). Alternatively, one can provide forward() and backward() instead of the separate device-specific methods. Backward methods have default implementations that just return None, which indicates that the function is non-differentiable.
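
As a rough illustration of these methods, the following sketch defines a CPU-only element-wise squaring function. The class name Square and its gradient formula are not part of Chainer; they are made up for this example.

import numpy
from chainer import Function

class Square(Function):
    """Hypothetical element-wise squaring function: y = x ** 2."""

    def forward_cpu(self, inputs):
        x, = inputs
        return numpy.square(x),            # the return value is a tuple

    def backward_cpu(self, inputs, grad_outputs):
        x, = inputs
        gy, = grad_outputs
        return 2 * x * gy,                 # dy/dx = 2x, times the upstream gradient

Since only the CPU methods are defined, this sketch handles numpy inputs only; a forward_gpu()/backward_gpu() pair, or a single array-module-agnostic forward()/backward() pair, would be needed to support cupy arrays.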

For functions that do not need a part of their inputs in backward computation, memory consumption can be reduced by releasing the input arrays right after forward propagation. This is done by calling retain_inputs() from inside forward() (including forward_cpu() and forward_gpu()). See the documentation of retain_inputs() for details.

For functions that need a part of their outputs in backward computation, it is strongly recommended to call retain_outputs() from inside forward() (including forward_cpu() and forward_gpu()). It marks the specified output variable nodes to retain their data. The retained data can be accessed through the output_data attribute.

Variables:
  • inputs – A tuple or list of input variables.
  • outputs – A tuple or list of output variables.
  • output_data – A tuple of retained output arrays. It has the same length as outputs. The data of variables that are not retained is set to None. See retain_outputs() for details.

Methods

__call__(*inputs)[source]

Applies forward propagation with chaining backward references.

The basic behavior is described in the documentation of the Function class.

Note

If the data attributes of the input variables reside on a GPU device, the appropriate device is selected before the forward() method is called, so in most cases implementers do not need to take care of device selection.

Parameters:inputs – Tuple of input Variable, numpy.ndarray or cupy.ndarray objects. If the input is a numpy.ndarray or a cupy.ndarray, it is automatically wrapped with Variable.
Returns:One Variable object or a tuple of multiple Variable objects.
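
For example, passing a raw numpy.ndarray is valid; it is wrapped into a Variable automatically. A sketch using the hypothetical Square function from above, repeated here so the snippet is self-contained:

import numpy
import chainer
from chainer import Function

class Square(Function):
    # Hypothetical squaring function, as sketched earlier.
    def forward(self, inputs):
        x, = inputs
        return x * x,

f = Square()
y = f(numpy.ones(3, dtype=numpy.float32))   # the raw array is wrapped into a Variable
assert isinstance(y, chainer.Variable)
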
add_hook(hook, name=None)[source]

Registers the function hook.

Parameters:
  • hook (FunctionHook) – Function hook to be registered.
  • name (str) – Name of the function hook. name must be unique among function hooks registered to the function. If None, the default name of the function hook is used.
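
A sketch of registering and unregistering a hook, assuming the built-in TimerHook from chainer.function_hooks and the hypothetical Square function from the earlier sketch:

import numpy
from chainer import Function, function_hooks

class Square(Function):
    # Hypothetical squaring function, as sketched earlier.
    def forward(self, inputs):
        x, = inputs
        return x * x,

f = Square()
f.add_hook(function_hooks.TimerHook(), name='timer')   # register under the name 'timer'
y = f(numpy.ones(3, dtype=numpy.float32))               # the hook observes this application
f.delete_hook('timer')                                   # unregister by the same name
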
backward(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays.

It delegates the procedure to backward_cpu() or backward_gpu() by default. Which one is selected is determined by the types of the input arrays and the output gradient arrays. Implementations of Function must override either the CPU/GPU methods or this method if the function is intended to support backprop.

Parameters:
  • inputs – Tuple of input arrays.
  • grad_outputs – Tuple of output gradient arrays.
Returns:

Tuple of input gradient arrays. Some or all of them can be None, if the function is not differentiable on inputs.

Return type:

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

backward_cpu(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays on CPU.

Parameters:
  • inputs – Tuple of input numpy.ndarray object(s).
  • grad_outputs – Tuple of output gradient numpy.ndarray object(s).
Returns:

Tuple of input gradient numpy.ndarray object(s). Some or all of them can be None, if the function is not differentiable on corresponding inputs.

Return type:

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

backward_gpu(inputs, grad_outputs)[source]

Applies backprop to output gradient arrays on GPU.

Parameters:
  • inputs – Tuple of input cupy.ndarray object(s).
  • grad_outputs – Tuple of output gradient cupy.ndarray object(s).
Returns:

Tuple of input gradient cupy.ndarray object(s). Some or all of them can be None, if the function is not differentiable on corresponding inputs.

Return type:

tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

check_type_forward(in_types)[source]

Checks types of input data before forward propagation.

This method is called before forward(). You need to validate the types of the input data in this method using the type checking utilities.

Parameters:in_types (TypeInfoTuple) – The type information of input data for forward().
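
A sketch of such a check, using the chainer.utils.type_check utilities with the hypothetical Square function from the earlier sketch; the exact constraints are made up for illustration:

import numpy
from chainer import Function
from chainer.utils import type_check

class Square(Function):
    # Hypothetical squaring function with input validation.

    def check_type_forward(self, in_types):
        type_check.expect(in_types.size() == 1)            # exactly one input
        x_type = in_types[0]
        type_check.expect(x_type.dtype == numpy.float32)   # which must be float32

    def forward(self, inputs):
        x, = inputs
        return x * x,
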
delete_hook(name)[source]

Unregisters the function hook.

Parameters:name (str) – The name of the function hook to be unregistered.
forward(inputs)[source]

Applies forward propagation to input arrays.

It delegates the procedure to forward_cpu() or forward_gpu() by default. Which one is selected is determined by the types of the input arrays. Implementations of Function must override either the CPU/GPU methods or this method.

Parameters:inputs – Tuple of input array(s).
Returns:Tuple of output array(s).

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

forward_cpu(inputs)[source]

Applies forward propagation to input arrays on CPU.

Parameters:inputs – Tuple of numpy.ndarray object(s).
Returns:Tuple of numpy.ndarray object(s).
Return type:tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

forward_gpu(inputs)[source]

Applies forward propagation to input arrays on GPU.

Parameters:inputs – Tuple of cupy.ndarray object(s).
Returns:Tuple of cupy.ndarray object(s).
Return type:tuple

Warning

Implementations of Function must take care that the return value is a tuple even if the function returns only one array.

retain_inputs(indexes)[source]

Lets specified input variable nodes keep data arrays.

By calling this method from forward(), the function can specify which inputs are required for backprop.

If this method is not called, the function keeps all input arrays. If you want to release all input arrays, call this method by passing an empty sequence.

Note that this method must not be called from outside of the forward() method.

Parameters:indexes (iterable of int) – Indexes of the input variables that the function requires for backprop.
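
For instance, a hypothetical function whose gradient does not depend on its input can release all inputs. A minimal sketch (the class DoubleIt is not part of Chainer):

from chainer import Function

class DoubleIt(Function):
    """Hypothetical example: y = 2 * x, whose gradient needs no input."""

    def forward(self, inputs):
        self.retain_inputs(())      # release the input array after forward
        x, = inputs
        return 2 * x,

    def backward(self, inputs, grad_outputs):
        gy, = grad_outputs
        return 2 * gy,              # the input is never used here
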
retain_outputs(indexes, retain_after_backward=False)[source]

Lets specified output variable nodes keep data arrays.

By calling this method from forward(), the function can specify which outputs are required for backprop. If this method is not called, no output variables are marked to keep their data arrays at the point of returning from __call__(). The retained arrays are stored in output_data.

Note

It is STRONGLY RECOMMENDED to use this method if the function requires some or all output arrays in backprop. The function can also use output arrays simply by keeping direct references to them, but doing so might degrade the performance of later function applications to the output variables.

Note that this method must not be called from outside of the forward() method.

Parameters:
  • indexes (iterable of int) – Indexes of the output variables that the function requires for backprop.
  • retain_after_backward (bool) – If True, a reference to the outputs will remain after the backprop of the function is over. If False, the reference will be deleted.
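
A sketch of the intended usage, modeled on an exponential function whose backward pass reuses the output rather than the input. The class name MyExp is made up for this example (chainer.functions already provides its own exp):

import numpy
from chainer import Function

class MyExp(Function):
    """Hypothetical example: y = exp(x); backprop reuses y instead of x."""

    def forward_cpu(self, inputs):
        self.retain_inputs(())      # the input is not needed for backprop
        self.retain_outputs((0,))   # but the output is
        x, = inputs
        return numpy.exp(x),

    def backward_cpu(self, inputs, grad_outputs):
        y = self.output_data[0]     # the retained output array
        gy, = grad_outputs
        return y * gy,              # d exp(x) / dx = exp(x) = y
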
unchain()[source]

Purges in/out nodes and this function itself from the graph.

This method is called from Variable.unchain_backward() method.

Attributes

label

Short text that represents the function.

The default implementation returns its type name. Each function should override it to give more information.
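
A minimal override might look like this sketch, continuing the hypothetical Square example from above:

from chainer import Function

class Square(Function):
    # Hypothetical squaring function with a custom label.

    @property
    def label(self):
        return 'x ** 2'

    def forward(self, inputs):
        x, = inputs
        return x * x,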

local_function_hooks

Ordered Dictionary of registered function hooks.

Unlike chainer.thread_local.function_hooks, which registers its elements to all functions, function hooks in this property are specific to this function.

rank = 0
stack