chainer.functions.n_step_lstm

chainer.functions.n_step_lstm(n_layers, dropout_ratio, hx, cx, ws, bs, xs)[source]

Stacked Uni-directional Long Short-Term Memory function.

This function calculates stacked uni-directional LSTM with sequences. It takes an initial hidden state \(h_0\), an initial cell state \(c_0\), an input sequence \(x\), weight matrices \(W\), and bias vectors \(b\), and computes the hidden states \(h_t\) and cell states \(c_t\) for each time \(t\) from the input \(x_t\).

\[\begin{split}i_t &= \sigma(W_0 x_t + W_4 h_{t-1} + b_0 + b_4) \\ f_t &= \sigma(W_1 x_t + W_5 h_{t-1} + b_1 + b_5) \\ o_t &= \sigma(W_2 x_t + W_6 h_{t-1} + b_2 + b_6) \\ a_t &= \tanh(W_3 x_t + W_7 h_{t-1} + b_3 + b_7) \\ c_t &= f_t \cdot c_{t-1} + i_t \cdot a_t \\ h_t &= o_t \cdot \tanh(c_t)\end{split}\]
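For illustration, here is a minimal NumPy sketch of a single time step of one layer, following the equations above. The lstm_step helper and its list-based W/b layout are assumptions made for this sketch, not part of the Chainer API.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        # Illustrative helper, not a Chainer API. W holds eight matrices
        # indexed like W_j above: W[0..3] are (N, I), W[4..7] are (N, N).
        # b holds eight (N,)-shaped vectors indexed like b_j.
        i_t = sigmoid(W[0].dot(x_t) + W[4].dot(h_prev) + b[0] + b[4])  # input gate
        f_t = sigmoid(W[1].dot(x_t) + W[5].dot(h_prev) + b[1] + b[5])  # forget gate
        o_t = sigmoid(W[2].dot(x_t) + W[6].dot(h_prev) + b[2] + b[6])  # output gate
        a_t = np.tanh(W[3].dot(x_t) + W[7].dot(h_prev) + b[3] + b[7])  # cell input
        c_t = f_t * c_prev + i_t * a_t
        h_t = o_t * np.tanh(c_t)
        return h_t, c_t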

As the function accepts a sequence, it calculates \(h_t\) for all \(t\) with one call. Eight weight matrices and eight bias vectors are required for each layer, so when \(S\) layers exist, you need to prepare \(8S\) weight matrices and \(8S\) bias vectors.

If the number of layers n_layers is greater than \(1\), the input of the k-th layer is the hidden state h_t of the (k-1)-th layer. Note that the input variables of all layers except the first may have a different shape from those of the first layer, since their size is the hidden dimension N rather than the input dimension I.

Warning

The train and use_cudnn arguments are no longer supported since v2. Instead, use chainer.using_config('train', train) and chainer.using_config('use_cudnn', use_cudnn), respectively. See chainer.using_config().

Parameters:
  • n_layers (int) – Number of layers.
  • dropout_ratio (float) – Dropout ratio.
  • hx (chainer.Variable) – Variable holding stacked hidden states. Its shape is (S, B, N) where S is the number of layers and is equal to n_layers, B is the mini-batch size, and N is the dimension of the hidden units.
  • cx (chainer.Variable) – Variable holding stacked cell states. It has the same shape as hx.
  • ws (list of list of chainer.Variable) – Weight matrices. ws[i] represents the weights for the i-th layer. Each ws[i] is a list containing eight matrices. ws[i][j] corresponds to W_j in the equation. Only the matrices ws[0][j] where 0 <= j < 4 have shape (N, I), as they are multiplied with input variables; all other matrices have shape (N, N), where I is the size of the input units and N is the dimension of the hidden units.
  • bs (list of list of chainer.Variable) – Bias vectors. bs[i] represents the biases for the i-th layer. Each bs[i] is a list containing eight vectors. bs[i][j] corresponds to b_j in the equation. The shape of each vector is (N,), where N is the dimension of the hidden units.
  • xs (list of chainer.Variable) – A list of Variable holding input values. Each element xs[t] holds the input value for time t. Its shape is (B_t, I), where B_t is the mini-batch size for time t and I is the size of the input units. Note that this function supports variable-length sequences. When sequences have different lengths, sort them in descending order by length and transpose the sorted sequences; transpose_sequence() transposes a list of Variable holding sequences. As a result, xs must satisfy xs[t].shape[0] >= xs[t + 1].shape[0]. A sketch of this preparation follows this list.
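As a sketch of the variable-length preparation described for xs above (the sequence lengths and sizes here are illustrative):

    import numpy as np
    import chainer
    import chainer.functions as F

    # Three sequences of different lengths, each with input size I = 3,
    # already sorted in descending order by length.
    seqs = [np.random.rand(4, 3).astype(np.float32),  # length 4
            np.random.rand(2, 3).astype(np.float32),  # length 2
            np.random.rand(1, 3).astype(np.float32)]  # length 1

    # Transpose so that xs[t] gathers the t-th step of every sequence
    # that is still alive: xs[0].shape == (3, 3), xs[1].shape == (2, 3), ...
    xs = F.transpose_sequence([chainer.Variable(s) for s in seqs])
    assert all(xs[t].shape[0] >= xs[t + 1].shape[0]
               for t in range(len(xs) - 1))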
Returns:

This function returns a tuple containing three elements, hy, cy and ys.

  • hy is an updated hidden state whose shape is the same as hx.
  • cy is an updated cell state whose shape is the same as cx.
  • ys is a list of Variable. Each element ys[t] holds the hidden states of the last layer corresponding to the input xs[t]. Its shape is (B_t, N), where B_t is the mini-batch size for time t and N is the size of the hidden units. Note that B_t equals the mini-batch size of xs[t].

Return type:

tuple
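For reference, here is a minimal end-to-end sketch of calling this function on equal-length sequences; all sizes are illustrative and the array shapes follow the parameter descriptions above.

    import numpy as np
    import chainer
    import chainer.functions as F

    n_layers, batch, in_size, n_units = 2, 3, 4, 5
    dropout_ratio = 0.0

    # Initial hidden and cell states: (S, B, N).
    hx = chainer.Variable(
        np.zeros((n_layers, batch, n_units), dtype=np.float32))
    cx = chainer.Variable(
        np.zeros((n_layers, batch, n_units), dtype=np.float32))

    # Eight weight matrices and eight bias vectors per layer. The first
    # four matrices of layer 0 are (N, I)-shaped; all others are (N, N).
    ws, bs = [], []
    for i in range(n_layers):
        in_dim = in_size if i == 0 else n_units
        ws.append([chainer.Variable(np.random.rand(
            n_units, in_dim if j < 4 else n_units).astype(np.float32))
            for j in range(8)])
        bs.append([chainer.Variable(np.zeros(n_units, dtype=np.float32))
                   for _ in range(8)])

    # Six time steps of equal-length input: each xs[t] is (B, I).
    xs = [chainer.Variable(
        np.random.rand(batch, in_size).astype(np.float32))
        for _ in range(6)]

    hy, cy, ys = F.n_step_lstm(n_layers, dropout_ratio, hx, cx, ws, bs, xs)
    assert hy.shape == hx.shape and cy.shape == cx.shape
    assert ys[0].shape == (batch, n_units)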