chainer.functions.n_step_lstm¶

chainer.functions.n_step_lstm(n_layers, dropout_ratio, hx, cx, ws, bs, xs)[source]¶

Stacked Unidirectional Long Short-Term Memory function.
This function calculates a stacked unidirectional LSTM over input sequences. It takes an initial hidden state \(h_0\), an initial cell state \(c_0\), an input sequence \(x\), weight matrices \(W\), and bias vectors \(b\), and computes the hidden state \(h_t\) and cell state \(c_t\) for each time step \(t\) from the input \(x_t\):
\[\begin{split}i_t &= \sigma(W_0 x_t + W_4 h_{t-1} + b_0 + b_4) \\
f_t &= \sigma(W_1 x_t + W_5 h_{t-1} + b_1 + b_5) \\
o_t &= \sigma(W_2 x_t + W_6 h_{t-1} + b_2 + b_6) \\
a_t &= \tanh(W_3 x_t + W_7 h_{t-1} + b_3 + b_7) \\
c_t &= f_t \cdot c_{t-1} + i_t \cdot a_t \\
h_t &= o_t \cdot \tanh(c_t)\end{split}\]

As the function accepts a sequence, it calculates \(h_t\) for all \(t\) with one call. Eight weight matrices and eight bias vectors are required for each layer, so when \(S\) layers exist, you need to prepare \(8S\) weight matrices and \(8S\) bias vectors.
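To make the weight and bias indexing concrete, here is a minimal single-step NumPy sketch of these equations for one layer and one sample; lstm_step and sigmoid are illustrative helpers, not part of the Chainer API:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        # W holds the eight matrices W_0..W_7 and b the eight vectors
        # b_0..b_7 of one layer, indexed as in the equations above.
        i = sigmoid(W[0] @ x_t + W[4] @ h_prev + b[0] + b[4])  # input gate
        f = sigmoid(W[1] @ x_t + W[5] @ h_prev + b[1] + b[5])  # forget gate
        o = sigmoid(W[2] @ x_t + W[6] @ h_prev + b[2] + b[6])  # output gate
        a = np.tanh(W[3] @ x_t + W[7] @ h_prev + b[3] + b[7])  # candidate
        c_t = f * c_prev + i * a
        h_t = o * np.tanh(c_t)
        return h_t, c_t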
If the number of layers n_layers is greater than \(1\), the input of the \(k\)-th layer is the hidden state \(h_t\) of the \((k-1)\)-th layer. Note that the inputs to every layer except the first may have a different shape from the inputs to the first layer.

Warning

The train and use_cudnn arguments are not supported anymore since v2. Instead, use chainer.using_config('train', train) and chainer.using_config('use_cudnn', use_cudnn) respectively. See chainer.using_config().
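For example, since v2 dropout inside this function is controlled by the train configuration flag rather than an argument. To run in test mode with dropout disabled, wrap the call with chainer.using_config(); the inputs here are assumed to be prepared as in the Example at the end of this page:

    import chainer
    import chainer.functions as F

    # hx, cx, ws, bs, xs, n_layers and dropout_ratio are assumed to be
    # set up as in the Example below.
    with chainer.using_config('train', False):
        hy, cy, ys = F.n_step_lstm(
            n_layers, dropout_ratio, hx, cx, ws, bs, xs)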
Parameters:
- n_layers (int) – The number of layers.
- dropout_ratio (float) – Dropout ratio.
- hx (Variable) – Variable holding stacked hidden states. Its shape is (S, B, N) where S is the number of layers and is equal to n_layers, B is the mini-batch size, and N is the dimension of the hidden units.
- cx (Variable) – Variable holding stacked cell states. It has the same shape as hx.
- ws (list of list of Variable) – Weight matrices. ws[i] represents the weights of the i-th layer. Each ws[i] is a list containing eight matrices, where ws[i][j] corresponds to \(W_j\) in the equation. Only the matrices ws[0][j] with 0 <= j < 4 are (N, I)-shaped, as they are multiplied with the input variables, where I is the size of the input and N is the dimension of the hidden units; all other matrices are (N, N)-shaped.
- bs (list of list of Variable) – Bias vectors. bs[i] represents the biases of the i-th layer. Each bs[i] is a list containing eight vectors, where bs[i][j] corresponds to \(b_j\) in the equation. The shape of each vector is (N,) where N is the dimension of the hidden units.
- xs (list of Variable) – A list of Variable holding input values. Each element xs[t] holds the input values for time t and has shape (B_t, I), where B_t is the mini-batch size for time t. The sequences must be transposed: transpose_sequence() can be used to transpose a list of Variables, each representing one sequence (see the sketch after this list). When the sequences have different lengths, they must be sorted in descending order of length before transposing, so xs needs to satisfy xs[t].shape[0] >= xs[t + 1].shape[0].
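To illustrate the layout that xs requires, here is a short sketch of transposing variable-length sequences; the raw sequences seqs are made up for illustration:

    import numpy as np
    import chainer.functions as F

    # Three made-up sequences with lengths 3, 2 and 1, input size 3.
    seqs = [np.ones((length, 3), dtype=np.float32) for length in (3, 2, 1)]
    seqs.sort(key=len, reverse=True)  # sort by length, longest first

    # xs[t] stacks the t-th step of every sequence still alive at time t,
    # so the batch size shrinks as t grows.
    xs = F.transpose_sequence(seqs)
    print([x.shape for x in xs])  # [(3, 3), (2, 3), (1, 3)]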
Returns: This function returns a tuple containing three elements, hy, cy and ys. hy is an updated hidden state whose shape is the same as that of hx. cy is an updated cell state whose shape is the same as that of cx. ys is a list of Variable. Each element ys[t] holds the hidden states of the last layer corresponding to the input xs[t], and has shape (B_t, N), where B_t is the mini-batch size for time t and N is the dimension of the hidden units. Note that B_t is the same as the mini-batch size of xs[t].
Return type: tuple

Note

The dimension of the hidden units is limited to a single size N. If you want to use hidden units of varying dimensions, please use chainer.functions.lstm().

See also

chainer.functions.lstm()
Example
>>> import numpy as np
>>> import chainer.functions as F
>>> batchs = [3, 2, 1]  # support variable-length sequences
>>> in_size, out_size, n_layers = 3, 2, 2
>>> dropout_ratio = 0.0
>>> xs = [np.ones((b, in_size)).astype('f') for b in batchs]
>>> [x.shape for x in xs]
[(3, 3), (2, 3), (1, 3)]
>>> h_shape = (n_layers, batchs[0], out_size)
>>> hx = np.ones(h_shape).astype(np.float32)
>>> cx = np.ones(h_shape).astype(np.float32)
>>> w_in = lambda i, j: in_size if i == 0 and j < 4 else out_size
>>> ws = []
>>> bs = []
>>> for n in range(n_layers):
...     ws.append([np.ones((out_size, w_in(n, i))).astype('f') for i in range(8)])
...     bs.append([np.ones((out_size,)).astype('f') for _ in range(8)])
...
>>> ws[0][0].shape  # ws[0][j] for j < 4 are (out_size, in_size)-shaped
(2, 3)
>>> ws[1][0].shape  # all others are (out_size, out_size)-shaped
(2, 2)
>>> bs[0][0].shape
(2,)
>>> hy, cy, ys = F.n_step_lstm(
...     n_layers, dropout_ratio, hx, cx, ws, bs, xs)
>>> hy.shape
(2, 3, 2)
>>> cy.shape
(2, 3, 2)
>>> [y.shape for y in ys]
[(3, 2), (2, 2), (1, 2)]
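The returned ys is time-major like xs. Continuing the example, one more application of transpose_sequence() recovers one Variable per original sequence; this is a usage sketch rather than behavior documented above:

>>> ys_per_seq = F.transpose_sequence(ys)  # back to one entry per sequence
>>> [y.shape for y in ys_per_seq]
[(3, 2), (2, 2), (1, 2)]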