mygrad.operation_base.Operation.backward

Operation.backward(grad: ndarray, **kwargs)

Back-propagates the gradient through all of the operation’s inputs, which are stored in the tuple self.variables.

Constant tensors (i.e. those for which tensor.constant is True) are skipped by this process.
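
This method is invoked internally when Tensor.backward() is called on a downstream node of the computational graph. A short doctest-style example, using MyGrad's public tensor API (mygrad.tensor, Tensor.backward, Tensor.grad), illustrates both the gradient flow and the constant-skipping behavior:

    >>> import mygrad as mg
    >>> x = mg.tensor(2.0)                 # a variable: receives a gradient
    >>> c = mg.tensor(3.0, constant=True)  # a constant: skipped during backprop
    >>> y = c * x ** 2                     # builds a small computational graph
    >>> y.backward()  # invokes Operation.backward for each op in the graph
    >>> x.grad        # d(y)/dx = 6 * x
    array(12.)
    >>> c.grad is None  # no gradient is back-propagated to constants
    True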

Parameters
grad : numpy.ndarray

The back-propagated total derivative of the graph's terminal node with respect to the output of the present operation (f): d(out)/df
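
For a sense of where grad enters a concrete operation, the following is a minimal sketch of a user-defined operation. It assumes the backward_var(grad, index, **kwargs) hook that Operation subclasses override (where index identifies which entry of self.variables the returned gradient corresponds to) and the Tensor._op dispatcher for applying an operation to tensors; treat it as an illustration of the pattern rather than a definitive recipe:

    from mygrad import Tensor
    from mygrad.operation_base import Operation

    class Square(Operation):
        """f(a) = a ** 2"""

        def __call__(self, a):
            # Record the op's inputs; Operation.backward walks this
            # tuple and skips any entry whose constant flag is True.
            self.variables = (a,)
            return a.data ** 2

        def backward_var(self, grad, index, **kwargs):
            # grad is d(out)/df for this operation. Apply the chain
            # rule to produce d(out)/d(variables[index]); here index
            # is always 0, since the op has a single input.
            (a,) = self.variables
            return grad * 2 * a.data

    def square(x):
        # Dispatch through MyGrad's machinery so the result is a
        # Tensor wired into the computational graph.
        return Tensor._op(Square, x)

    t = Tensor(3.0)
    out = square(t)
    out.backward()   # Operation.backward receives grad = d(out)/d(out) = 1
    print(t.grad)    # 2 * t -> array(6.)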