chainer.links.LSTM

class chainer.links.LSTM(in_size, out_size=None, lateral_init=None, upward_init=None, bias_init=None, forget_bias_init=None)

Fully-connected LSTM layer.

This is a fully-connected LSTM layer as a chain. Unlike the lstm() function, which is defined as a stateless activation function, this chain holds upward and lateral connections as child links.

It also maintains states, including the cell state and the output at the previous time step. Therefore, it can be used as a stateful LSTM.

This link supports variable-length inputs. The mini-batch size of the current input must be equal to or smaller than that of the previous one. The mini-batch size of c and h is determined by that of the first input x. When the mini-batch size of the i-th input is smaller than that of the previous input, this link updates only c[0:len(x)] and h[0:len(x)] and leaves the rest of c and h unchanged. Therefore, sort the input sequences in descending order of length before applying the function.

Parameters:
  • in_size (int) – Dimension of input vectors. If it is None or omitted, parameter initialization is deferred until the first forward pass, at which time the size is determined.
  • out_size (int) – Dimensionality of output vectors.
  • lateral_init – A callable that takes numpy.ndarray or cupy.ndarray and edits its value. It is used for initialization of the lateral connections. May be None to use default initialization.
  • upward_init – A callable that takes numpy.ndarray or cupy.ndarray and edits its value. It is used for initialization of the upward connections. May be None to use default initialization.
  • bias_init – A callable that takes numpy.ndarray or cupy.ndarray and edits its value. It is used for initialization of the biases of the cell input, input gate, and output gate of the upward connection. May be a scalar, in which case the bias is initialized to that value. If it is None, the bias is initialized to zero.
  • forget_bias_init – A callable that takes numpy.ndarray or cupy.ndarray and edits its value. It is used for initialization of the bias of the forget gate of the upward connection. May be a scalar, in which case the bias is initialized to that value. If it is None, the forget bias is initialized to one.
Variables:
  • upward (Linear) – Linear layer of upward connections.
  • lateral (Linear) – Linear layer of lateral connections.
  • c (Variable) – Cell states of LSTM units.
  • h (Variable) – Output at the previous time step.

Example

There are several ways to make an LSTM link.

Let a two-dimensional input array x be:

>>> x = np.zeros((1, 10), dtype=np.float32)
  1. Give both in_size and out_size arguments:

    >>> l = L.LSTM(10, 20)
    >>> h_new = l(x)
    >>> h_new.shape
    (1, 20)
    
  2. Omit the in_size argument or set it to None:

    The following two cases are equivalent.

    >>> l = L.LSTM(20)
    >>> h_new = l(x)
    >>> h_new.shape
    (1, 20)
    
    >>> l = L.LSTM(None, 20)
    >>> h_new = l(x)
    >>> h_new.shape
    (1, 20)
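
The link keeps c and h between calls, so a sequence can be fed one time step at a time, and the mini-batch may shrink from one call to the next as described above. A minimal sketch of this usage (reusing the np and L aliases from above; the printed shape follows from the documented behavior):

>>> l = L.LSTM(10, 20)
>>> x1 = np.zeros((4, 10), dtype=np.float32)  # first time step, mini-batch size 4
>>> x2 = np.zeros((3, 10), dtype=np.float32)  # next time step, smaller mini-batch
>>> h1 = l(x1)
>>> h2 = l(x2)       # only c[0:3] and h[0:3] are updated
>>> h2.shape
(3, 20)
>>> l.reset_state()  # clear c and h before starting a new sequence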
    

Methods

__call__(x)

Updates the internal state and returns the LSTM outputs.

Parameters: x (Variable) – A new batch from the input sequence.
Returns: Outputs of updated LSTM units.
Return type: Variable
__getitem__(name)

Equivalent to getattr.

add_link(name, link)

Registers a child link to this chain.

Deprecated since version v2.0.0: Assign the child link directly to an attribute within init_scope() instead. For example, the following code

chain.add_link('l1', L.Linear(3, 5))

can be replaced by the following line.

with chain.init_scope():
    chain.l1 = L.Linear(3, 5)

The latter is easier for IDEs to keep track of the attribute’s type.

Parameters:
  • name (str) – Name of the child link. This name is also used as the attribute name.
  • link (Link) – The link object to be registered.
add_param(name, shape=None, dtype=numpy.float32, initializer=None)

Registers a parameter to the link.

Deprecated since version v2.0.0: Assign a Parameter object directly to an attribute within init_scope() instead. For example, the following code

link.add_param('W', shape=(5, 3))

can be replaced by the following assignment.

with link.init_scope():
    link.W = chainer.Parameter(None, (5, 3))

The latter is easier for IDEs to keep track of the attribute’s type.

Parameters:
  • name (str) – Name of the parameter. This name is also used as the attribute name.
  • shape (int or tuple of ints) – Shape of the parameter array. If it is omitted, the parameter variable is left uninitialized.
  • dtype – Data type of the parameter array.
  • initializer – If it is not None, the data is initialized with the given initializer. If it is an array, the data is directly initialized by it. If it is callable, it is used as a weight initializer. Note that in these cases, the dtype argument is ignored.
add_persistent(name, value)

Registers a persistent value to the link.

The registered value is saved and loaded on serialization and deserialization. The value is set to an attribute of the link.

Parameters:
  • name (str) – Name of the persistent value. This name is also used for the attribute name.
  • value – Value to be registered.
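
As an illustration (the MovingAverage link below is a hypothetical example, not part of Chainer), a link might keep a running statistic as a persistent value so that it is saved and restored together with the parameters:

import numpy as np
import chainer

class MovingAverage(chainer.Link):  # hypothetical example link
    def __init__(self, size):
        super(MovingAverage, self).__init__()
        # 'avg' is saved and loaded on serialization, but it is not a parameter
        self.add_persistent('avg', np.zeros((size,), dtype=np.float32))
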
addgrads(link)

Accumulates gradient values from a given link.

This method adds each gradient array of the given link to the corresponding gradient array of this link. The accumulation works even across the host and different devices.

Parameters: link (Link) – Source link object.
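
As a rough sketch (not taken from the Chainer documentation), gradients computed on a deep copy of a link can be folded back into the original with addgrads():

import numpy as np
import chainer.functions as F
import chainer.links as L

model = L.Linear(3, 2)
worker = model.copy(mode='copy')   # deep copy with its own gradient arrays

x = np.random.rand(4, 3).astype(np.float32)

model.cleargrads()
F.sum(model(x)).backward()         # gradients for model

worker.cleargrads()
F.sum(worker(x)).backward()        # gradients for worker

model.addgrads(worker)             # model's gradients now hold the sum of both
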
children()

Returns a generator of all child links.

Returns: A generator object that generates all child links.
cleargrads()

Clears all gradient arrays.

This method should be called before the backward computation at every iteration of the optimization.
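
A minimal sketch of where cleargrads() fits into a manual training step (the model, data, and optimizer here are placeholders chosen for illustration):

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

model = L.Linear(3, 2)
optimizer = chainer.optimizers.SGD(lr=0.01)
optimizer.setup(model)

x = np.random.rand(4, 3).astype(np.float32)
t = np.random.randint(0, 2, size=4).astype(np.int32)

model.cleargrads()                           # clear stale gradients first
loss = F.softmax_cross_entropy(model(x), t)
loss.backward()                              # accumulate fresh gradients
optimizer.update()                           # apply the update rule
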

copy(mode='share')

Copies the link hierarchy to a new one.

The whole hierarchy rooted by this link is copied. There are three copy modes; see the description of the mode argument below.

The name of the link is reset on the copy, since the copied instance does not belong to the original parent chain (even if one exists).

Parameters: mode (str) – It should be either init, copy, or share. init means the parameter variables under the returned link object are re-initialized by calling their initialize() method, so that all the parameters may have different initial values from the original link. copy means that the link object is deeply copied, so that its parameters are not re-initialized but are deeply copied as well. Thus, all parameters have the same initial values but can be changed independently. share means that the link is shallowly copied, so that its parameters' arrays are shared with the original ones. Thus, their values change synchronously. The default mode is share.
Returns: Copied link object.
Return type: Link
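
A small sketch of the difference between the share and copy modes (L.Linear is used purely as an example link):

import chainer.links as L

base = L.Linear(3, 2)
shared = base.copy(mode='share')   # parameter arrays are shared with base
deep = base.copy(mode='copy')      # parameter arrays are deeply copied

base.W.data[...] = 1.0             # modify the original parameters in place
# shared.W.data now also contains 1.0 (it is the same underlying array),
# while deep.W.data still holds the original values.
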
copyparams(link)

Copies all parameters from a given link.

This method copies the data arrays of all parameters in the hierarchy. The copy works even across the host and different devices. Note that this method does not copy the gradient arrays.

Parameters: link (Link) – Source link object.
count_params()

Counts the total number of parameters.

This method counts the total number of scalar values included in all the Parameters held by this link and its descendants.

If the link contains uninitialized parameters, this method raises a warning.

Returns: The total size of the parameters (int).
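
For example (a doctest-style sketch using L.Linear, whose parameter count is easy to verify by hand):

>>> l = L.Linear(3, 2)   # W holds 2 * 3 = 6 values and b holds 2 values
>>> l.count_params()
8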
disable_update()

Disables update rules of all parameters under the link hierarchy.

This method sets the enabled flag of the update rule of each parameter variable to False.

enable_update()

Enables update rules of all parameters under the link hierarchy.

This method sets the enabled flag of the update rule of each parameter variable to True.
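
A sketch of a typical use of disable_update(): freezing one part of a chain so that only the remaining parameters are updated by the optimizer (the Model class below is illustrative, not part of Chainer):

import chainer
import chainer.links as L

class Model(chainer.Chain):  # hypothetical example chain
    def __init__(self):
        super(Model, self).__init__()
        with self.init_scope():
            self.encoder = L.Linear(100, 50)  # stands in for a pre-trained part
            self.head = L.Linear(50, 10)

model = Model()
model.encoder.disable_update()  # encoder parameters are skipped by the optimizer
# model.head keeps its update rules enabled (the default), so only the head trains
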

init_scope()

Creates an initialization scope.

This method returns a context manager object that enables registration of parameters (and links for Chain) by an assignment. A Parameter object can be automatically registered by assigning it to an attribute under this context manager.

Example

In most cases, the parameter registration is done in the initializer method. Using the init_scope method, we can simply assign a Parameter object to register it to the link.

class MyLink(chainer.Link):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.W = chainer.Parameter(0, (10, 5))
            self.b = chainer.Parameter(0, (5,))

links(skipself=False)

Returns a generator of all links under the hierarchy.

Parameters: skipself (bool) – If True, then the generator skips this link and starts with the first child link.
Returns: A generator object that generates all links.
namedlinks(skipself=False)

Returns a generator of all (path, link) pairs under the hierarchy.

Parameters: skipself (bool) – If True, then the generator skips this link and starts with the first child link.
Returns: A generator object that generates all (path, link) pairs.
namedparams(include_uninit=True)

Returns a generator of all (path, param) pairs under the hierarchy.

Parameters: include_uninit (bool) – If True, it also generates uninitialized parameters.
Returns: A generator object that generates all (path, parameter) pairs. The paths are relative from this link.
params(include_uninit=True)

Returns a generator of all parameters under the link hierarchy.

Parameters: include_uninit (bool) – If True, it also generates uninitialized parameters.
Returns: A generator object that generates all parameters.
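
For example, namedparams() can be used to inspect every parameter held by this link (a small sketch; the exact iteration order is not guaranteed):

import chainer.links as L

l = L.LSTM(10, 20)
for path, param in l.namedparams():
    print(path, param.shape)
# prints entries such as /upward/W (80, 10), /upward/b (80,), and /lateral/W (80, 20)
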
register_persistent(name)

Registers an attribute of a given name as a persistent value.

This is a convenient method to register an existing attribute as a persistent value. If name has already been registered as a parameter, this method removes it from the list of parameter names and re-registers it as a persistent value.

Parameters: name (str) – Name of the attribute to be registered.
repeat(n_repeat, mode='init')

Repeats this link multiple times to make a Sequential.

This method returns a Sequential object which contains the same Link repeated multiple times. The mode argument controls how this link is copied for each repetition.

Example

You can repeat the same link multiple times to create a longer Sequential block like this:

class ConvBNReLU(chainer.Chain):

    def __init__(self):
        super(ConvBNReLU, self).__init__()
        with self.init_scope():
            self.conv = L.Convolution2D(
                None, 64, 3, 1, 1, nobias=True)
            self.bn = L.BatchNormalization(64)

    def __call__(self, x):
        return F.relu(self.bn(self.conv(x)))

net = ConvBNReLU().repeat(16, mode='init')

The net object contains 16 blocks, each of which is a ConvBNReLU. Since mode is init, each block is re-initialized with different parameters. If you give copy to this argument, each block has the same initial parameter values but is a distinct object. If it is share, the blocks are the same in terms of not only parameter values but also object IDs because they are shallow-copied, so that when a parameter of one block is changed, the corresponding parameters of all the others change as well.

Parameters:
  • n_repeat (int) – Number of times to repeat.
  • mode (str) – It should be either init, copy, or share. init means the parameters of each repeated element in the returned Sequential will be re-initialized, so that all elements have different initial parameters. copy means that the parameters will not be re-initialized but the object itself will be deep-copied, so that all elements have the same initial parameters but can be changed independently. share means that all the elements which make up the resulting Sequential object are the same object because they are shallow-copied, so that all parameters of the elements are shared with each other.
reset_state()

Resets the internal state.

It sets the c and h attributes to None.

serialize(serializer)

Serializes the link object.

Parameters: serializer (AbstractSerializer) – Serializer object.
set_state(c, h)

Sets the internal state.

It sets the c and h attributes.

Parameters:
  • c (Variable) – New cell states of the LSTM units.
  • h (Variable) – New output from the previous time step.
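
A doctest-style sketch of managing the state explicitly (reusing the np and L aliases from the example above, with chainer itself also imported):

>>> l = L.LSTM(10, 20)
>>> c0 = chainer.Variable(np.zeros((1, 20), dtype=np.float32))
>>> h0 = chainer.Variable(np.zeros((1, 20), dtype=np.float32))
>>> l.set_state(c0, h0)                         # start from an explicit state
>>> y = l(np.zeros((1, 10), dtype=np.float32))  # continues from (c0, h0)
>>> l.reset_state()                             # c and h are set back to None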
to_cpu()

Copies parameter variables and persistent values to CPU.

This method does not handle non-registered attributes. If any such attributes must be copied to the CPU, the link implementation must override this method to do so.

Returns: self

to_gpu(device=None)

Copies parameter variables and persistent values to GPU.

This method does not handle non-registered attributes. If any such attributes must be copied to the GPU, the link implementation must override this method to do so.

Parameters: device – Target device specifier. If omitted, the current device is used.

Returns: self
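
A sketch of moving a link between devices (this assumes CUDA and CuPy are available; the device number 0 is arbitrary):

import chainer
import chainer.links as L

l = L.LSTM(10, 20)
if chainer.cuda.available:
    l.to_gpu(0)   # parameters and persistent values now live on GPU 0
    # ... run forward and backward computations on the GPU ...
    l.to_cpu()    # bring everything back to host memory, e.g. before saving
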

to_intel64()

Copies parameter variables and persistent values to CPU.

zerograds()

Initializes all gradient arrays by zero.

This method can be used for the same purpose as cleargrads(), but it is less efficient. It is left for backward compatibility.

Deprecated since version v1.15: Use cleargrads() instead.

Attributes

update_enabled

True if at least one parameter has an update rule enabled.

within_init_scope

True if the current code is inside of an initialization scope.

See init_scope() for the details of the initialization scope.

xp

Array module for this link.

Depending on whether this link is on the CPU or the GPU, this property returns numpy or cupy.
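
A sketch of how xp is typically used inside a link to stay device-agnostic (the ZeroState link below is hypothetical):

import chainer

class ZeroState(chainer.Link):  # hypothetical example link
    def __call__(self, batch_size, size):
        # self.xp is numpy while the link is on the CPU and cupy after to_gpu(),
        # so the allocated array always lives on the same device as the link.
        xp = self.xp
        return chainer.Variable(xp.zeros((batch_size, size), dtype=xp.float32))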
