amin

paddle.amin(x: Tensor, axis: int | Sequence[int] | None = None, keepdim: bool = False, name: str | None = None, *, out: Tensor | None = None) -> Tensor

Computes the minimum of tensor elements over the given axis.

Note

The difference between min and amin is: If there are multiple minimum elements, amin evenly distributes gradient between these equal values, while min propagates gradient to all of them.
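
A condensed sketch of this difference on a 1-D tensor (the session output is inferred from the semantics above, assuming CPU execution as in the examples below; the Examples section works through larger cases):

>>> import paddle
>>> t = paddle.to_tensor([3.0, 1.0, 1.0], dtype='float64', stop_gradient=False)
>>> paddle.amin(t).backward()   # two tied minima: amin splits the gradient, 1/2 each
>>> t.grad
Tensor(shape=[3], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.        , 0.50000000, 0.50000000])
>>> t.clear_grad()
>>> paddle.min(t).backward()    # min propagates gradient 1 to every tied minimum
>>> t.grad
Tensor(shape=[3], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.        , 1.        , 1.        ])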

Parameters
  • x (Tensor) – A Tensor with data type float32, float64, int32 or int64, and no more than 4 dimensions.

  • axis (int|list|tuple|None, optional) – The axis along which the minimum is computed. If None, compute the minimum over all elements of x and return a Tensor with a single element; otherwise each entry must be in the range [-x.ndim, x.ndim). If axis[i] < 0, the axis to reduce is x.ndim + axis[i].

  • keepdim (bool, optional) – Whether to retain the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than x unless keepdim is True. Default is False.

  • out (Tensor|None, optional) – Output tensor. If provided in dynamic graph mode, the result is written to this tensor and also returned; the returned tensor and out share memory and autograd meta (see the last example below). Default: None.

  • name (str|None, optional) – Name for the operation (optional, default is None). For more information, please refer to api_guide_Name.

Returns

Tensor, the result of the minimum over the specified axis of the input tensor; its data type is the same as that of the input Tensor.

Examples

>>> import paddle
>>> # x is a Tensor with shape [2, 4] containing multiple minimum elements
>>> # the axis is an int

>>> x = paddle.to_tensor([[0.2, 0.1, 0.1, 0.1],
...                       [0.1, 0.1, 0.6, 0.7]],
...                       dtype='float64', stop_gradient=False)
>>> # There are 5 minimum elements:
>>> # 1) amin evenly distributes gradient between these equal values,
>>> #    thus the corresponding gradients are 1/5=0.2;
>>> # 2) while min propagates gradient to all of them,
>>> #    thus the corresponding gradients are 1.
>>> result1 = paddle.amin(x)
>>> result1.backward()
>>> result1
Tensor(shape=[], dtype=float64, place=Place(cpu), stop_gradient=False,
0.10000000)
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.20000000, 0.20000000, 0.20000000],
 [0.20000000, 0.20000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result1_min = paddle.min(x)
>>> result1_min.backward()
>>> result1_min
Tensor(shape=[], dtype=float64, place=Place(cpu), stop_gradient=False,
0.10000000)
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 1.        , 1.        , 1.        ],
 [1.        , 1.        , 0.        , 0.        ]])


>>> x.clear_grad()
>>> result2 = paddle.amin(x, axis=0)
>>> result2.backward()
>>> result2
Tensor(shape=[4], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.10000000, 0.10000000, 0.10000000, 0.10000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.50000000, 1.        , 1.        ],
 [1.        , 0.50000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result3 = paddle.amin(x, axis=-1)
>>> result3.backward()
>>> result3
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.10000000, 0.10000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.33333333, 0.33333333, 0.33333333],
 [0.50000000, 0.50000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result4 = paddle.amin(x, axis=1, keepdim=True)
>>> result4.backward()
>>> result4
Tensor(shape=[2, 1], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.10000000],
 [0.10000000]])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.33333333, 0.33333333, 0.33333333],
 [0.50000000, 0.50000000, 0.        , 0.        ]])

>>> # y is a Tensor with shape [2, 2, 2]
>>> # the axis is a list
>>> y = paddle.to_tensor([[[0.2, 0.1], [0.1, 0.1]],
...                       [[0.1, 0.1], [0.6, 0.7]]],
...                       dtype='float64', stop_gradient=False)
>>> result5 = paddle.amin(y, axis=[1, 2])
>>> result5.backward()
>>> result5
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.10000000, 0.10000000])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[0.        , 0.33333333],
  [0.33333333, 0.33333333]],
 [[0.50000000, 0.50000000],
  [0.        , 0.        ]]])

>>> y.clear_grad()
>>> result6 = paddle.amin(y, axis=[0, 1])
>>> result6.backward()
>>> result6
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.10000000, 0.10000000])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[0.        , 0.33333333],
  [0.50000000, 0.33333333]],
 [[0.50000000, 0.33333333],
  [0.        , 0.        ]]])
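
Finally, a minimal sketch of the keyword-only out argument, following the behavior described in the parameter list (dynamic graph mode; buf is a pre-allocated buffer introduced here for illustration, and the printed value is inferred from that behavior rather than taken from a recorded run):

>>> # out= writes the result into buf and also returns it;
>>> # the returned tensor and buf share memory.
>>> z = paddle.to_tensor([[0.2, 0.1], [0.6, 0.7]], dtype='float64')
>>> buf = paddle.empty([2], dtype='float64')
>>> returned = paddle.amin(z, axis=1, out=buf)
>>> bool(paddle.equal_all(returned, buf))
True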