chainer.FunctionAdapter

class chainer.FunctionAdapter(function)[source]

Adapter class to wrap Function with FunctionNode.

While FunctionNode provides the interface of new-style differentiable functions, the old-style Function can still be used for backward compatibility. This class adapts between the two interfaces; it adds the FunctionNode interface to any Function object by delegation.

Note

The ownership of FunctionAdapter and Function is a bit tricky. At initialization, the FunctionAdapter is owned by the Function object. Once the function is applied to variables, the ownership is reversed: the adapter becomes the owner of the Function object, and the Function object changes its reference to the adapter to a weak one.

Parameters:function (Function) – The function object to wrap.

New in version 3.0.0.
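
For illustration, here is a minimal sketch of the wrapping in practice. MulTwo is a made-up old-style Function; applying it to a variable wraps it in a FunctionAdapter behind the scenes, which is then reachable through the output's creator_node property.

    import numpy as np
    import chainer

    class MulTwo(chainer.Function):
        """A made-up old-style Function for illustration."""

        def forward(self, inputs):
            x, = inputs
            return x * 2,

        def backward(self, inputs, grad_outputs):
            gy, = grad_outputs
            return gy * 2,

    f = MulTwo()
    x = chainer.Variable(np.array([1.0, 2.0], dtype=np.float32))
    y = f(x)  # applying the Function wraps it in a FunctionAdapter

    # The node that created y is the adapter, not the Function itself.
    print(type(y.creator_node))          # chainer.function.FunctionAdapter
    print(y.creator_node.function is f)  # True; the adapter wraps the original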

Methods

__call__(*args, **kwargs)[source]
add_hook(hook, name=None)[source]

Registers a function hook.

Parameters:
  • hook (FunctionHook) – Function hook to be registered.
  • name (str) – Name of the function hook. The name must be unique among function hooks registered to this function. If None, the default name of the function hook is used.
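
As a hedged illustration, the following sketch registers Chainer's built-in PrintHook on the node that produced a variable and later removes it with delete_hook(); the name 'debug' is an arbitrary unique label chosen for this example.

    import numpy as np
    import chainer
    from chainer.function_hooks import PrintHook

    x = chainer.Variable(np.array([[1.0, -1.0]], dtype=np.float32))
    y = chainer.functions.relu(x)

    node = y.creator_node                     # the node that produced y
    node.add_hook(PrintHook(), name='debug')  # 'debug' must be unique here
    y.grad = np.ones_like(y.data)
    y.backward()                              # the hook fires during backward
    node.delete_hook('debug')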
apply(inputs)[source]

Computes output variables and grows the computational graph.

Basic behavior is expressed in the documentation of FunctionNode.

Note

If the data attributes of the input variables reside on a GPU device, that device is made current before calling forward(), so implementors do not need to take care of device selection in most cases.

Parameters:inputs – Tuple of input variables. Each element can be either Variable, numpy.ndarray, or cupy.ndarray. If the element is an ndarray, it is automatically wrapped with Variable.
Returns:A tuple of output Variable objects.
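
For example, the following sketch applies the built-in ReLU function node directly to a raw ndarray; apply() wraps the array in a Variable automatically and returns a tuple of output variables.

    import numpy as np
    from chainer.functions.activation.relu import ReLU

    x = np.array([1.0, -2.0, 3.0], dtype=np.float32)
    y, = ReLU().apply((x,))   # raw ndarray is wrapped into a Variable
    print(y.data)             # [1. 0. 3.]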
backward(target_input_indexes, grad_outputs)[source]
backward_accumulate(target_input_indexes, grad_outputs, grad_inputs)[source]

Computes gradients w.r.t. specified inputs and accumulates them.

This method provides a way to fuse the backward computation and the gradient accumulation when multiple functions are applied to the same variable.

Users have to override either this method or backward(). It is often simpler to implement backward(), which is recommended unless you need efficient gradient accumulation. A sketch of the fused pattern appears after the note below.

Parameters:
  • target_input_indexes (tuple of int) – Indices of the input variables w.r.t. which the gradients are required. It is guaranteed that this tuple contains at least one element.
  • grad_outputs (tuple of Variable) – Gradients w.r.t. the output variables. If the gradient w.r.t. an output variable is not given, the corresponding element is None.
  • grad_inputs (tuple of Variable) – Gradients w.r.t. the input variables specified by target_input_indexes. These values are computed by other computation paths. If no gradient value exists for a variable, the corresponding element is None. See also the note below.
Returns:

Tuple of variables that represent the gradients w.r.t. the specified input variables. Unlike backward(), the length of the tuple must be the same as that of target_input_indexes.

Note

When the same variable is passed to multiple input arguments of a function, only the first entry of grad_inputs corresponding to those arguments may contain the gradient variable of that input variable; the other entries are set to None. This is an implementation-detail convention that avoids the complication of correctly accumulating gradients in such a case. This behavior might be changed in a future version.
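
The following is a minimal sketch of the fused pattern, using a hypothetical DoubleIt node: the gradient contributed by this function is added to any gradient already accumulated along other paths, which arrives through grad_inputs and may be None.

    import chainer

    class DoubleIt(chainer.FunctionNode):
        """Hypothetical node: y = 2 * x, with fused gradient accumulation."""

        def forward(self, inputs):
            x, = inputs
            return x * 2,

        def backward_accumulate(self, target_input_indexes,
                                grad_outputs, grad_inputs):
            gy, = grad_outputs
            gx = gy * 2          # gradient contributed by this function
            g_prev, = grad_inputs
            if g_prev is None:   # no gradient from other paths yet
                return gx,
            return gx + g_prev,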

check_type_forward(in_types)[source]
delete_hook(name)[source]

Unregisters the function hook.

Parameters:name (str) – The name of the function hook to be unregistered.
forward(inputs)[source]
forward_cpu(inputs)[source]

Computes the output arrays from the input NumPy arrays.

Parameters:inputs – Tuple of input numpy.ndarray objects.
Returns:Tuple of output arrays. Each element can be a NumPy or CuPy array.

Warning

Implementations of FunctionNode must return a tuple even if the function produces only one array.

forward_gpu(inputs)[source]

Computes the output arrays from the input CuPy arrays.

Parameters:inputs – Tuple of input cupy.ndarray objects.
Returns:Tuple of output arrays. Each element can be a NumPy or CuPy array.

Warning

Implementations of FunctionNode must return a tuple even if the function produces only one array.
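
As a sketch, a hypothetical Square node could implement the two device-specific methods like this; note that both return a one-element tuple, as the warnings above require.

    import numpy
    import chainer

    class Square(chainer.FunctionNode):
        """Hypothetical node with device-specific forward implementations."""

        def forward_cpu(self, inputs):
            x, = inputs
            return numpy.square(x),   # a tuple, even for a single output

        def forward_gpu(self, inputs):
            import cupy               # only needed on the GPU path
            x, = inputs
            return cupy.square(x),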

get_retained_inputs()[source]

Returns a tuple of retained input variables.

This method is used to retrieve the input variables retained in forward().

Returns:A tuple of retained input variables.
get_retained_outputs()[source]

Returns a tuple of retained output variables.

This method is used to retrieve the output variables retained in forward().

Returns:A tuple of retained output variables.

Note

This method does a tricky thing to support the case where an output node is garbage-collected before this method is called; in that case, it creates a fresh variable node that acts as an output node of the function node.

retain_inputs(indexes)[source]

Lets specified input variable nodes keep data arrays.

By calling this method from forward(), the function node can specify which inputs are required for backprop. The input variables with retained arrays can then be obtained by calling get_retained_inputs() from inside backward().

Unlike Function, the function node DOES NOT keep input arrays by default. If you want to keep some or all input arrays, do not forget to call this method.

Note that this method must not be called from outside of forward().

Parameters:indexes (iterable of int) – Indexes of input variables that the function will require for backprop.
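
A minimal sketch with a hypothetical Cube node: retain_inputs() is called inside forward(), and the retained variable is recovered in backward() via get_retained_inputs().

    import numpy as np
    import chainer

    class Cube(chainer.FunctionNode):
        """Hypothetical node: y = x ** 3, retaining x for backprop."""

        def forward(self, inputs):
            x, = inputs
            self.retain_inputs((0,))  # inputs are NOT kept unless requested
            return x ** 3,

        def backward(self, target_input_indexes, grad_outputs):
            x, = self.get_retained_inputs()  # Variable wrapping the kept array
            gy, = grad_outputs
            return 3 * x * x * gy,

    x = chainer.Variable(np.array(2.0, dtype=np.float32))
    y, = Cube().apply((x,))
    y.backward()
    print(x.grad)  # 12.0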
retain_outputs(indexes)[source]

Lets specified output variable nodes keep data arrays.

By calling this method from forward(), the function node can specify which outputs are required for backprop. If this method is not called, no output variables will be marked to keep their data array at the point of returning from apply(). The output variables with retained arrays can then be obtained by calling get_retained_outputs() from inside backward().

Note

It is recommended to use this method if the function requires some or all output arrays in backprop. The function can also use output arrays just by keeping references to them directly, although it might affect the performance of later function applications on the output variables.

Note that this method must not be called from outside of forward().

Parameters:indexes (iterable of int) – Indexes of output variables that the function will require for backprop.
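
A CPU-only sketch with a hypothetical Exp node, where the output itself is the derivative and is therefore worth retaining instead of recomputing:

    import numpy as np
    import chainer

    class Exp(chainer.FunctionNode):
        """Hypothetical node: y = exp(x), reusing y in backward."""

        def forward(self, inputs):
            x, = inputs
            y = np.exp(x)
            self.retain_outputs((0,))  # d exp(x)/dx == exp(x), so keep y
            return y,

        def backward(self, target_input_indexes, grad_outputs):
            y, = self.get_retained_outputs()
            gy, = grad_outputs
            return y * gy,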
unchain()[source]

Purges in/out nodes and this function node itself from the graph.
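
For example, the following minimal sketch cuts a freshly built graph at the node that produced y; after unchain(), y no longer references its creator.

    import numpy as np
    import chainer
    import chainer.functions as F

    x = chainer.Variable(np.ones((2,), dtype=np.float32))
    y = F.relu(x)

    node = y.creator_node
    node.unchain()                  # purge in/out node references
    assert y.creator_node is None   # y is detached from the graph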

Attributes

function

The Function object that this adapter is wrapping.

inputs = None
label
local_function_hooks

Ordered dictionary of registered function hooks.

In contrast to chainer.thread_local.function_hooks, which holds hooks invoked for all functions, the function hooks in this property are specific to this function.

output_data

A tuple of the retained output arrays.

This property is mainly used by Function. Users basically do not have to use this property; use get_retained_outputs() instead.

outputs = None
rank = 0
stack = None