mygrad.mean(x: ArrayLike, axis: Union[None, int, Tuple[int, ...]] = None, keepdims: bool = False, *, constant: Optional[bool] = None) -> Tensor

Mean of tensor elements over a given axis.

Parameters:

axis : Optional[int, Tuple[int, ...]]

Axis or axes along which the mean is computed. The default, axis=None, computes the mean of all of the elements of the input tensor. If axis is negative, it counts from the last axis to the first.

If axis is a tuple of ints, the mean is computed over all of the axes specified in the tuple, instead of over a single axis or over all axes.
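mygrad.mean follows NumPy's axis semantics, so the axis behavior described above can be sketched with numpy.mean (a hedged illustration; the array values here are arbitrary):

```python
import numpy as np

# A 3-d array: 2 "pages" of 3x4 values.
x = np.arange(24, dtype=float).reshape(2, 3, 4)

# axis=None averages every element, yielding a scalar.
total = np.mean(x)

# A tuple of ints reduces over those axes together:
# averaging over axes (0, 1) leaves only the last axis.
pair = np.mean(x, axis=(0, 1))    # shape (4,)

# This matches flattening axes 0 and 1 first, then averaging.
flat = np.mean(x.reshape(-1, 4), axis=0)
assert np.allclose(pair, flat)

# Negative axes count from the end: axis=-1 is the last axis.
assert np.allclose(np.mean(x, axis=-1), np.mean(x, axis=2))
```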

keepdims : bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input tensor.
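The broadcasting benefit of keepdims can be sketched with NumPy, whose keepdims semantics mygrad mirrors (a minimal illustration, not mygrad-specific):

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Without keepdims, the reduced axis disappears: shape (2,)
row_means = np.mean(a, axis=1)
assert row_means.shape == (2,)

# With keepdims=True, the reduced axis is kept with size one: shape (2, 1)
row_means_kd = np.mean(a, axis=1, keepdims=True)
assert row_means_kd.shape == (2, 1)

# The (2, 1) result broadcasts cleanly against the (2, 2) input,
# e.g. to center each row about its own mean.
centered = a - row_means_kd
assert np.allclose(centered.mean(axis=1), 0.0)
```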


constant : Optional[bool]

If True, this tensor is treated as a constant, and thus does not facilitate back-propagation (i.e. constant.grad will always return None).

Defaults to False for float-type data and to True for integer-type data.

Integer-type tensors must be constant.


Returns:

Tensor

The mean of the input's elements. The result has the same shape as the input, with the specified axis/axes removed. If the input is a 0-d tensor, or if axis is None, a 0-dim Tensor is returned.


Examples

>>> import mygrad as mg
>>> import numpy as np
>>> a = mg.Tensor([[1, 2],
...                [3, 4]])
>>> mg.mean(a)
Tensor(2.5)
>>> mg.mean(a, axis=0)
Tensor([ 2.,  3.])
>>> mg.mean(a, axis=1)
Tensor([ 1.5,  3.5])

In single precision, mean can be inaccurate:

>>> a = mg.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> mg.mean(a)

Computing the mean in float64 is more accurate:

>>> mg.mean(a, dtype=np.float64)