mygrad.nnet.activations.leaky_relu

mygrad.nnet.activations.leaky_relu(x: ArrayLike, slope: float, *, constant: Optional[bool] = None) -> Tensor

Returns the leaky rectified linear activation, applied elementwise to x.

The leaky ReLU is given by max(x, 0) + slope*min(x, 0).
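For reference, the same formula can be sketched directly with NumPy; this is only an illustrative equivalent on plain arrays, not MyGrad's implementation:

>>> import numpy as np
>>> def leaky_relu_np(x, slope):
...     # elementwise max(x, 0) + slope * min(x, 0)
...     return np.maximum(x, 0) + slope * np.minimum(x, 0)
...
>>> leaky_relu_np(np.array([-2.0, 0.0, 3.0]), slope=0.1)
array([-0.2,  0. ,  3. ])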

Parameters
x : ArrayLike

Input data.

slope : Union[Real, mygrad.Tensor]

The slope of the activation for negative values of x.

constant : Optional[bool]

If True, the returned tensor is a constant (it does not back-propagate a gradient).

Returns
mygrad.Tensor

The result of applying the "leaky relu" function elementwise to x.

Examples

>>> import mygrad as mg
>>> from mygrad.nnet.activations import leaky_relu
>>> x = mg.arange(-5, 6)
>>> x
Tensor([-5, -4, -3, -2, -1,  0,  1,  2,  3,  4,  5])
>>> y = leaky_relu(x, slope=0.1); y
Tensor([-0.5, -0.4, -0.3, -0.2, -0.1,  0. ,  1. ,  2. ,  3. ,  4. ,  5. ])
>>> y.backward()
>>> x.grad
array([0.1, 0.1, 0.1, 0.1, 0.1, 0. , 1. , 1. , 1. , 1. , 1. ])
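
The constant flag described above suppresses gradient tracking in the output; as a quick check (a small sketch assuming MyGrad's standard Tensor.constant attribute):

>>> leaky_relu(x, slope=0.1, constant=True).constant
True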

[Plot: the leaky ReLU activation function]