group_norm

paddle.group_norm ( x: Tensor, num_groups: int, epsilon: float = 1e-05, weight: Tensor | None = None, bias: Tensor | None = None, data_format: DataLayout1D | DataLayout2D | DataLayout3D = 'NCHW', name: str | None = None ) → Tensor [source]

nn.GroupNorm is recommended. For more information, please refer to GroupNorm.

This function has two functionalities, depending on the parameters passed:

  1. group_norm(Tensor input, int num_groups, Tensor weight = None, Tensor bias = None, float eps = 1e-05):

    PyTorch-compatible group_norm.

  2. group_norm(Tensor x, int num_groups, float epsilon = 1e-05, Tensor weight = None, Tensor bias = None, DataLayout1D | DataLayout2D | DataLayout3D data_format = 'NCHW', str | None name = None):

    The original paddle.nn.functional.group_norm; see the following docs.

Parameters
  • x (Tensor) – Input Tensor with shape (batch, num_features, *). Alias: input.

  • num_groups (int) – The number of groups that divided from channels.

  • epsilon (float, optional) – The small value added to the variance to prevent division by zero. Default: 1e-05. alias: eps.

  • weight (Tensor, optional) – The weight Tensor of group_norm, with shape [num_channels]. Default: None.

  • bias (Tensor, optional) – The bias Tensor of group_norm, with shape [num_channels]. Default: None.

  • data_format (str, optional) – Specify the input data format. Support "NCL", "NCHW", "NCDHW", "NLC", "NHWC" or "NDHWC". Default: "NCHW".

  • name (str|None, optional) – Name for the GroupNorm, default is None. For more information, please refer to api_guide_Name.

Returns

Tensor, the output has the same shape as x.

Examples

>>> import paddle
>>> paddle.seed(100)
>>> x = paddle.arange(48, dtype="float32").reshape((2, 6, 2, 2))
>>> group_norm_out = paddle.nn.functional.group_norm(x, num_groups=6)

>>> print(group_norm_out)
Tensor(shape=[2, 6, 2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]]],
 [[[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]],
  [[-1.34163547, -0.44721183],
   [ 0.44721183,  1.34163547]]]])
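For intuition about what the function computes, here is a minimal NumPy sketch of group normalization (an illustrative reimplementation under the NCHW layout, not Paddle's actual kernel): channels are split into num_groups consecutive groups, each group is normalized by its own mean and biased variance, and an optional per-channel affine transform is applied.

```python
import numpy as np

def group_norm_ref(x, num_groups, epsilon=1e-5, weight=None, bias=None):
    """Reference group normalization for x with shape (N, C, *spatial), NCHW-style."""
    n, c = x.shape[0], x.shape[1]
    spatial = x.shape[2:]
    # Flatten each group of C // num_groups channels (plus spatial dims) together.
    g = x.reshape(n, num_groups, -1)
    mean = g.mean(axis=2, keepdims=True)
    var = g.var(axis=2, keepdims=True)          # biased variance, matching epsilon's role
    out = (g - mean) / np.sqrt(var + epsilon)   # normalize within each group
    out = out.reshape(n, c, *spatial)
    if weight is not None:                      # optional per-channel scale
        out = out * weight.reshape(1, c, *([1] * len(spatial)))
    if bias is not None:                        # optional per-channel shift
        out = out + bias.reshape(1, c, *([1] * len(spatial)))
    return out

x = np.arange(48, dtype="float32").reshape(2, 6, 2, 2)
y = group_norm_ref(x, num_groups=6)
```

With num_groups=6 and 6 channels, each group is a single channel holding four consecutive values, so every group normalizes to the same pattern [-1.3416..., -0.4472..., 0.4472..., 1.3416...], which agrees with the printed output above.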