Neural network operations (mygrad.nnet)#

Layer operations#

batchnorm(x, *[, gamma, beta, constant])

Performs batch normalization on x.

conv_nd(x, filter_bank, *, stride[, ...])

Use filter_bank (w) to perform strided N-dimensional neural network-style convolutions (see Notes) over x.

max_pool(x, pool, stride, *[, constant])

Perform max-pooling over the last N dimensions of a data batch.
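
These layer operations compose like any other MyGrad operations, so gradients flow through them automatically. A minimal sketch of a 2D convolution followed by pooling; the batch size, channel counts, and random data are illustrative assumptions, not part of the API:

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.layers import conv_nd, max_pool
>>> x = mg.Tensor(np.random.rand(4, 1, 8, 8))    # a batch of 4 single-channel 8x8 inputs
>>> w = mg.Tensor(np.random.rand(2, 1, 3, 3))    # 2 filters, each spanning 1 channel and a 3x3 window
>>> conv_out = conv_nd(x, w, stride=1)           # shape-(4, 2, 6, 6)
>>> pooled = max_pool(conv_out, pool=(2, 2), stride=(2, 2))  # shape-(4, 2, 3, 3)
>>> pooled.sum().backward()                      # gradients propagate back to x and w
>>> x.grad.shape
(4, 1, 8, 8)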

gru(X, Uz, Wz, bz, Ur, Wr, br, Uh, Wh, bh[, ...])

Performs a forward pass of sequential data through a Gated Recurrent Unit layer, returning the hidden states ('hidden descriptors') produced by the layer's trainable parameters.
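
A sketch of calling gru under the shape conventions given in its full docstring: X is shape-(T, N, C) sequential data, each U is shape-(C, D), each W is shape-(D, D), and each b is shape-(D,). Treat these shapes as assumptions to verify against your installed version:

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.layers import gru
>>> T, N, C, D = 5, 3, 4, 6                      # sequence length, batch size, input dim, hidden dim
>>> X = mg.Tensor(np.random.rand(T, N, C))
>>> Uz, Ur, Uh = (mg.Tensor(np.random.randn(C, D) * 0.1) for _ in range(3))
>>> Wz, Wr, Wh = (mg.Tensor(np.random.randn(D, D) * 0.1) for _ in range(3))
>>> bz, br, bh = (mg.Tensor(np.zeros(D)) for _ in range(3))
>>> S = gru(X, Uz, Wz, bz, Ur, Wr, br, Uh, Wh, bh)  # hidden states
>>> S.shape                                         # initial state plus one state per time-step
(6, 3, 6)
>>> S[-1].sum().backward()                          # backpropagate from the final hidden state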

Losses#

focal_loss(class_probs, targets, *[, alpha, ...])

Return the per-datum focal loss.

margin_ranking_loss(x1, x2, y, margin, *[, ...])

Computes the mean margin ranking loss, mean(maximum(0, margin - y * (x1 - x2))).

multiclass_hinge(x, y_true[, hinge, constant])

Computes the average multiclass hinge loss.

negative_log_likelihood(x, y_true, *[, ...])

Returns the (weighted) negative log-likelihood loss between log-probabilities and y_true.

softmax_crossentropy(x, y_true, *[, constant])

Given the classification scores of C classes for N pieces of data, computes the mean softmax cross-entropy loss against the true labels.
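
A minimal sketch of the typical usage, with made-up scores and integer labels (the shapes are illustrative):

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.losses import softmax_crossentropy
>>> scores = mg.Tensor(np.random.randn(4, 3))    # N=4 data, C=3 classes
>>> labels = np.array([0, 2, 1, 1])              # true class index for each datum
>>> loss = softmax_crossentropy(scores, labels)  # scalar mean loss
>>> loss.backward()                              # populates scores.grad
>>> scores.grad.shape
(4, 3)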

softmax_focal_loss(scores, targets, *[, ...])

Applies the softmax normalization to the input scores before computing the per-datum focal loss.
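
As a consistency check, applying softmax yourself and calling focal_loss should agree numerically with softmax_focal_loss, assuming default values for the remaining parameters; a hedged sketch:

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.activations import softmax
>>> from mygrad.nnet.losses import focal_loss, softmax_focal_loss
>>> scores = mg.Tensor(np.random.randn(5, 3))    # N=5 data, C=3 classes
>>> targets = np.array([0, 1, 2, 1, 0])
>>> a = softmax_focal_loss(scores, targets)      # per-datum losses, shape-(5,)
>>> b = focal_loss(softmax(scores), targets)
>>> np.allclose(a.data, b.data)
True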

Activations#

elu(x, alpha, *[, constant])

Returns the exponential linear activation (ELU) elementwise along x.

glu(x[, axis, constant])

Returns the Gated Linear Unit A * σ(B), where A and B are split from x.

hard_tanh(x, *[, lower_bound, upper_bound, ...])

Returns the hard hyperbolic tangent function.

leaky_relu(x, slope, *[, constant])

Returns the leaky rectified linear activation elementwise along x.

logsoftmax(x[, axis, constant])

Applies the log-softmax activation function.

selu(x, *[, constant])

Returns the scaled exponential linear activation (SELU) elementwise along x.

sigmoid(x, *[, constant])

Applies the sigmoid activation function.

softmax(x[, axis, constant])

Applies the softmax activation function.

soft_sign(x, *[, constant])

Returns the soft sign function x / (1 + |x|).

relu(x, *[, constant])

Applies the rectified linear unit activation function.

tanh(x[, out, where, dtype, constant])

Hyperbolic tangent, element-wise.
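
The activations are ordinary elementwise MyGrad operations and can be mixed freely with the rest of the library; a short sketch with arbitrarily chosen values:

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.activations import relu, softmax
>>> x = mg.Tensor([[-2.0, 0.5, 3.0], [1.0, 1.0, 1.0]])
>>> np.allclose(softmax(x, axis=-1).data.sum(axis=-1), 1.0)  # each row sums to 1
True
>>> out = relu(x).sum()
>>> out.backward()
>>> x.grad                                       # 1 where x > 0, 0 elsewhere
array([[0., 1., 1.],
       [1., 1., 1.]])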

Initializers#

dirac(*shape[, dtype, constant])

Initialize a mygrad.Tensor according to the Dirac initialization procedure described by Zagoruyko and Komodakis.

glorot_normal(*shape[, gain, dtype, constant])

Initialize a mygrad.Tensor according to the normal initialization procedure described by Glorot and Bengio.

glorot_uniform(*shape[, gain, dtype, constant])

Initialize a mygrad.Tensor according to the uniform initialization procedure described by Glorot and Bengio.

he_normal(*shape[, gain, dtype, constant])

Initialize a mygrad.Tensor according to the normal initialization procedure described by He et al.

he_uniform(*shape[, gain, dtype, constant])

Initialize a mygrad.Tensor according to the uniform initialization procedure described by He et al.

normal(*shape[, mean, std, dtype, constant])

Initialize a mygrad.Tensor by drawing from a normal (Gaussian) distribution.

uniform(*shape[, lower_bound, upper_bound, ...])

Initialize a mygrad.Tensor by drawing from a uniform distribution.
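
Each initializer takes the desired tensor shape as positional arguments and returns a mygrad.Tensor; an illustrative sketch (the layer sizes are arbitrary):

>>> from mygrad.nnet.initializers import glorot_normal, he_uniform, uniform
>>> w1 = glorot_normal(64, 32)    # Glorot/Xavier-scaled normal draws
>>> w2 = he_uniform(64, 32)       # He-scaled uniform draws
>>> b = uniform(32, lower_bound=-0.1, upper_bound=0.1)
>>> w1.shape, w2.shape, b.shape
((64, 32), (64, 32), (32,))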

Sliding Window View Utility#

sliding_window_view(arr, window_shape, step)

Create a sliding window view over the trailing dimensions of an array.
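
A short sketch of the windowed view this produces for a 2D array; the resulting shape follows from the window and step arithmetic, and no data is copied:

>>> import numpy as np
>>> from mygrad import sliding_window_view
>>> x = np.arange(36).reshape(6, 6)
>>> windows = sliding_window_view(x, window_shape=(3, 3), step=(1, 1))
>>> windows.shape                 # a 4x4 grid of placements, each a 3x3 window
(4, 4, 3, 3)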