mygrad.std(x: ArrayLike, axis: Union[None, int, Tuple[int, ...]] = None, ddof: int = 0, keepdims: bool = False, *, constant: Optional[bool] = None) -> Tensor

Compute the standard deviation along the specified axis.

Returns the standard deviation of the array elements, a measure of the spread of a distribution. The standard deviation is computed for the flattened array by default; otherwise it is computed over the specified axis.


Parameters

x : ArrayLike

Array containing numbers whose standard deviation is desired.

axis : Optional[int, Tuple[int, ...]]

Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.

ddof : int, optional (default=0)

“Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of elements. By default ddof is zero.

keepdims : bool, optional (default=False)

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.


constant : Optional[bool]

If True, this tensor is treated as a constant, and thus does not facilitate back-propagation (i.e. constant.grad will always return None).

Defaults to False for float-type data. Defaults to True for integer-type data.

Integer-type tensors must be constant.

Returns

std : mygrad.Tensor

Notes
The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(abs(x - x.mean())**2)).

The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population, while ddof=0 provides a maximum-likelihood estimate of the variance for normally distributed variables. Note that the standard deviation reported here is the square root of the estimated variance, so even with ddof=1 it is not itself an unbiased estimate of the standard deviation.
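To make the ddof convention concrete, here is a small pure-Python sketch of the formula above (an illustration only, not mygrad's implementation):

```python
import math

def std(data, ddof=0):
    # mean computed as sum(x) / N
    n = len(data)
    mu = sum(data) / n
    # the divisor is N - ddof, per the "Delta Degrees of Freedom" convention
    return math.sqrt(sum((v - mu) ** 2 for v in data) / (n - ddof))

vals = [1.0, 2.0, 3.0, 4.0]
print(std(vals))          # population std: sqrt(1.25) ~ 1.1180
print(std(vals, ddof=1))  # sample std: sqrt(5/3) ~ 1.2910
```

With ddof=0 the divisor is N = 4, giving sqrt(1.25); with ddof=1 it is N - 1 = 3, giving sqrt(5/3).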


Examples
>>> import mygrad as mg
>>> import numpy as np
>>> a = mg.Tensor([[1, 2],
...                [3, 4]])
>>> mg.std(a)
Tensor(1.11803399)
>>> mg.std(a, axis=0)
Tensor([ 1.,  1.])
>>> mg.std(a, axis=1)
Tensor([ 0.5,  0.5])

In single precision, std() can be inaccurate:

>>> a = mg.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> mg.std(a)
Tensor(0.45000005)

Computing the standard deviation in float64 is more accurate; mygrad.std does not accept a dtype argument, so cast the tensor first:

>>> mg.std(a.astype(np.float64))
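Because mygrad.std supports back-propagation (when the result is not a constant), it is worth noting what gradient the operation produces: for ddof=0, d(std)/dx_i = (x_i - mean) / (N * std). The following pure-Python sketch (an illustration of the calculus, not mygrad's implementation) checks that analytic gradient against a central finite difference:

```python
import math

def std(data):
    # population standard deviation (ddof=0)
    n = len(data)
    mu = sum(data) / n
    return math.sqrt(sum((v - mu) ** 2 for v in data) / n)

def std_grad(data):
    # analytic gradient: d(std)/dx_i = (x_i - mean) / (N * std)
    n = len(data)
    mu = sum(data) / n
    s = std(data)
    return [(v - mu) / (n * s) for v in data]

x = [1.0, 2.0, 3.0, 4.0]

# verify each partial derivative with a central finite difference
eps = 1e-6
numeric = []
for i in range(len(x)):
    hi, lo = x[:], x[:]
    hi[i] += eps
    lo[i] -= eps
    numeric.append((std(hi) - std(lo)) / (2 * eps))

print(std_grad(x))  # analytic gradient
print(numeric)      # numerical gradient (agrees to ~1e-9)
```

Note that the gradient is antisymmetric about the mean: elements below the mean receive negative gradient, elements above it positive, and the entries sum to zero.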