MyGrad is a lightweight library that adds automatic differentiation to NumPy – its only dependency is NumPy!
>>> import mygrad as mg
>>> import numpy as np

>>> x = mg.tensor([1., 2., 3.])  # like numpy.array, but supports backprop!
>>> f = np.sum(x * x)  # tensors work with numpy functions!
>>> f.backward()  # triggers automatic differentiation
>>> x.grad  # stores [df/dx0, df/dx1, df/dx2]
array([2., 4., 6.])
MyGrad’s primary goal is to make automatic differentiation accessible and easy to use across the Python/NumPy ecosystem. As such, it strives to behave and feel exactly like NumPy so that users need not learn yet another array-based math library. Of the various modes and flavors of auto-diff, MyGrad supports backpropagation from a scalar quantity.
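To give a feel for this NumPy-flavored API, here is a minimal sketch using familiar NumPy attributes and methods, which MyGrad’s tensor mirrors; note that backpropagation begins from a scalar:

>>> x = mg.tensor([[1., 2.], [3., 4.]])
>>> x.shape, x.dtype  # familiar NumPy attributes
((2, 2), dtype('float64'))
>>> x.mean().backward()  # backprop starts from a scalar quantity
>>> x.grad  # d(mean)/dx is 1/4 for each element
array([[0.25, 0.25],
       [0.25, 0.25]])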
NumPy’s ufuncs are richly supported. We can even differentiate through an operation that occurs in-place on a tensor and applies a boolean mask to the results:
>>> x = mg.tensor([1., 2., 3.])
>>> y = mg.zeros_like(x)
>>> np.multiply(x, x, where=[True, False, True], out=y)
>>> y.backward()
>>> x.grad
array([2., 0., 6.])
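As another small sketch of this ufunc support (nothing beyond NumPy’s standard ufunc machinery is assumed here), calling a NumPy ufunc on a tensor dispatches to MyGrad’s differentiable implementation:

>>> x = mg.tensor([0., 1.])
>>> out = np.exp(x)  # the NumPy ufunc returns a differentiable Tensor
>>> out.sum().backward()
>>> x.grad  # d(exp(x))/dx = exp(x)
array([1.        , 2.71828183])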
NumPy’s view semantics are also mirrored with high fidelity: performing basic indexing and similar operations on tensors will produce a “view” of that tensor’s data, meaning that a tensor and its view share memory. This relationship will also manifest between the derivatives stored by a tensor and its views!
>>> x = mg.arange(9.).reshape(3, 3)
>>> diag_view = np.einsum("ii->i", x)  # returns a view of the diagonal elements of `x`
>>> x, diag_view
(Tensor([[0., 1., 2.],
         [3., 4., 5.],
         [6., 7., 8.]]),
 Tensor([0., 4., 8.]))

# views share memory
>>> np.shares_memory(x, diag_view)
True

# mutating a view affects its base (and all other views)
>>> diag_view *= -1  # mutates x in-place
>>> x
Tensor([[-0.,  1.,  2.],
        [ 3., -4.,  5.],
        [ 6.,  7., -8.]])

>>> (x ** 2).backward()
>>> x.grad, diag_view.grad
(array([[ -0.,   2.,   4.],
        [  6.,  -8.,  10.],
        [ 12.,  14., -16.]]),
 array([ -0.,  -8., -16.]))

# the gradients have the same view relationship!
>>> np.shares_memory(x.grad, diag_view.grad)
True
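This view relationship can also be inspected through the tensor’s base attribute, which mirrors NumPy’s ndarray.base; a minimal sketch, continuing from the session above:

>>> top_rows = x[:2]  # basic indexing produces a view of `x`
>>> top_rows.base is x
True
>>> x.base is None  # `x` owns its own memory
True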
Basic and advanced indexing are fully supported:
>>> x = mg.arange(9.).reshape(3, 3)  # start from a fresh `x`
>>> (x[x < 4] ** 2).backward()
>>> x.grad
array([[0., 2., 4.],
       [6., 0., 0.],
       [0., 0., 0.]])
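Advanced indexing with integer arrays works as well; as a small sketch (assuming only standard NumPy indexing semantics), note how the gradient accumulates over repeated indices:

>>> x = mg.tensor([1., 2., 3., 4.])
>>> (x[[0, 0, 2]] * 2).sum().backward()  # index 0 appears twice
>>> x.grad  # contributions to a repeated index are summed
array([4., 0., 2., 0.])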
NumPy arrays and other array-likes play nicely with MyGrad’s tensors. These behave like constants during automatic differentiation:
>>> x = mg.tensor([1., 2., 3.])
>>> constant = [-1., 0., 10]  # can be a numpy array, list, or any other array-like
>>> (x * constant).backward()  # all array-likes are treated as constants
>>> x.grad
array([-1.,  0., 10.])
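A tensor itself can also be excluded from differentiation; the sketch below uses MyGrad’s constant=True option to mg.tensor, which marks the tensor as a constant so that no gradient is accumulated for it:

>>> x = mg.tensor([1., 2.], constant=True)  # will not accumulate a gradient
>>> y = mg.tensor([3., 4.])
>>> (x * y).sum().backward()
>>> x.grad is None, y.grad
(True, array([1., 2.]))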
- Installing MyGrad
- Introducing MyGrad
- MyGrad’s Tensor
- Views and In-Place Operations
- Performance Tips
- Writing Your Own Operations
- Tensor creation routines
- Tensor manipulation routines
- Linear algebra
- Mathematical functions
- Indexing Routines
- Neural network operations
- Computational graph visualization