mygrad.operation_base.Operation

class mygrad.operation_base.Operation[source]

Base class for all tensor operations that support back-propagation of gradients.

Consider the Operation instance f. A forward pass through f is performed via f.__call__(...). Thus, given tensors a and b, a computational graph is defined via f.__call__(a, b) -> c, where f is recorded as the “creator” of tensor c:

(node: a) --+
            -> [operation: f(a, b)] --> (node: c)
(node: b) --+

Back-propagating through c will instruct f to back-propagate the gradient to its inputs, which are recorded as a and b. Each node then back-propagates to any Operation-instance that is recorded as its creator, and so on.
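For example, this is exactly how MyGrad's own multiplication op behaves through the public API (a small illustrative session, assuming MyGrad 2's mg.tensor entry point; the tensor values are arbitrary):

>>> import mygrad as mg
>>> a = mg.tensor(2.0)
>>> b = mg.tensor(3.0)
>>> c = a * b     # the multiplication op, f, is recorded as the creator of c
>>> c.backward()  # seeds dℒ/dc = 1 and back-propagates through f
>>> a.grad        # dℒ/da = dℒ/dc * dc/da = 1 * b
array(3.)
>>> b.grad        # dℒ/db = dℒ/dc * dc/db = 1 * a
array(2.)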

Methods

__call__(*input_vars, **kwargs)

Performs a forward pass, f, of this Operation.

backward(grad, **kwargs)

Back-propagates the gradient through all of the operation's inputs, which are stored in the tuple self.variables.

backward_var(grad, index, **kwargs)

Given grad = dℒ/df, computes ∂ℒ/∂x_{i}, where x_{i} is one of x1, ..., xn.

grad_post_process_fn(grad, var_shape)
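To make these methods concrete, here is a minimal sketch of a user-defined multiplication op. The Multiply class is illustrative, not one shipped by MyGrad; it assumes the convention, described above, that __call__ records its inputs on self.variables and computes the forward pass on the tensors' underlying numpy arrays:

>>> import numpy as np
>>> from mygrad.operation_base import Operation
>>> class Multiply(Operation):
...     """f(a, b) = a * b"""
...     def __call__(self, a, b):
...         # Recording the inputs on `self.variables` is what lets
...         # `backward` route gradients back to them.
...         self.variables = (a, b)
...         # The forward pass is computed on the underlying numpy data.
...         return a.data * b.data
...     def backward_var(self, grad, index, **kwargs):
...         # grad = dℒ/df; return dℒ/dx for the input at position `index`.
...         a, b = self.variables
...         if index == 0:    # dℒ/da = grad * b
...             return grad * b.data
...         elif index == 1:  # dℒ/db = grad * a
...             return grad * a.data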

__init__()[source]


Attributes

can_return_view

Indicates whether this operation may return a view of its input data rather than a copy.

variables

The tuple of input tensors recorded during the forward pass, to which gradients are back-propagated.
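Continuing the Multiply sketch from above, the backward_var contract can be exercised by hand, outside of any graph bookkeeping (real graph construction and gradient routing are handled internally by Tensor):

>>> from mygrad import Tensor
>>> x = Tensor([2.0, 3.0])
>>> y = Tensor([4.0, 5.0])
>>> f = Multiply()
>>> out = f(x, y)            # forward pass: f(x, y) = x * y
>>> out
array([ 8., 15.])
>>> grad = np.ones_like(out) # take ℒ = sum(f), so dℒ/df is all ones
>>> f.backward_var(grad, 0)  # dℒ/dx = grad * y
array([4., 5.])
>>> f.backward_var(grad, 1)  # dℒ/dy = grad * x
array([2., 3.])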