# chainer.functions.prelu¶

chainer.functions.prelu(x, W)

Parametric ReLU function.

It accepts two arguments: an input x and a weight array W, and computes the output as

$PReLU(x_i) = \begin{cases} x_i & (x_i > 0) \\ W_i x_i & (\text{otherwise}) \end{cases}$
Parameters

x – Input variable. Its first dimension is assumed to be the minibatch dimension.

W – Weight variable, giving the slopes for negative inputs.
Returns

Output variable

Return type

Variable

Example

>>> x = np.arange(-3, 3, dtype=np.float32).reshape((2, 3))
>>> x
array([[-3., -2., -1.],
       [ 0.,  1.,  2.]], dtype=float32)
>>> W = np.array([0.01, 0.1, 1], dtype=np.float32)
>>> W
array([0.01, 0.1 , 1.  ], dtype=float32)
>>> F.prelu(x, W)
variable([[-0.03, -0.2 , -1.  ],
          [ 0.  ,  1.  ,  2.  ]])


Note

When the PReLU function is combined with two-dimensional convolution, the elements of the parameter $W$ are typically shared across different pixels within the same filter. To support such usage, this function allows the shape of the parameter array to match the leading dimensions of the input array, excluding the batch dimension.

For example, if $W$ has the shape $(2, 3, 4)$, then $x$ must have the shape $(B, 2, 3, 4, S_1, ..., S_N)$, where $B$ is the batch size and the number of trailing dimensions $N$ is an arbitrary non-negative integer.
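This broadcasting rule can be sketched in plain NumPy; note that prelu_like below is a hypothetical helper written for illustration, not the Chainer implementation:

```python
import numpy as np

def prelu_like(x, W):
    # W's shape must match x.shape[1:1 + W.ndim]. Reshape W so it
    # broadcasts over the batch dimension and any trailing dimensions.
    W_b = W.reshape((1,) + W.shape + (1,) * (x.ndim - W.ndim - 1))
    # Positive inputs pass through; negative inputs are scaled by W.
    return np.where(x >= 0, x, W_b * x)

# W has shape (2,); x has shape (B, 2, S_1) with B = 1 and S_1 = 3,
# so each slope in W is shared across the trailing dimension.
W = np.array([0.1, 0.5], dtype=np.float32)
x = np.array([[[-1., 2., -3.],
               [-4., 5., -6.]]], dtype=np.float32)
y = prelu_like(x, W)
```

Each of the two channels gets its own slope, applied uniformly across the trailing positions, which mirrors how a per-filter slope is shared across pixels in a convolutional layer.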

Warning

$W$ is a trainable parameter in the original paper (https://arxiv.org/abs/1502.01852). To train $W$, use chainer.links.PReLU instead, which manages the model parameter W for you.