QuantConfig

class paddle.quantization.QuantConfig(activation: QuanterFactory | None, weight: QuanterFactory | None)

Configure how to quantize a model or a part of a model. It maps each layer to an instance of SingleLayerConfig according to the settings, and provides several methods for setting quantization strategies.

Parameters
  • activation (QuanterFactory | None) – The global quantizer used to quantize the activations.

  • weight (QuanterFactory | None) – The global quantizer used to quantize the weights.

Examples

>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> print(q_config)
Global config:
activation: FakeQuanterWithAbsMaxObserver(name=None,moving_rate=0.9,bit_length=8,dtype=float32)
weight: FakeQuanterWithAbsMaxObserver(name=None,moving_rate=0.9,bit_length=8,dtype=float32)
add_layer_config(layer: Layer | list[Layer], activation: QuanterFactory | None = None, weight: QuanterFactory | None = None) → None

Set the quantization config by layer. It has the highest priority among all the setting methods.

Parameters
  • layer (Layer | list[Layer]) – One layer or a list of layers.

  • activation (QuanterFactory | None) – Quanter used for activations. Default is None.

  • weight (QuanterFactory | None) – Quanter used for weights. Default is None.

Examples

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)
>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_layer_config([model.fc], activation=quanter, weight=quanter)
>>> 
>>> print(q_config)
Global config:
None
Layer prefix config:
{'linear_0': <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680ee0>}
add_name_config(layer_name: str | list[str], activation: QuanterFactory | None = None, weight: QuanterFactory | None = None) → None

Set the quantization config by the full name of the layer. Its priority is lower than that of add_layer_config.

Parameters
  • layer_name (str | list[str]) – One or a list of layers' full names.

  • activation (QuanterFactory | None) – Quanter used for activations. Default is None.

  • weight (QuanterFactory | None) – Quanter used for weights. Default is None.

Examples

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)
>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_name_config([model.fc.full_name()], activation=quanter, weight=quanter)
>>> 
>>> print(q_config)
Global config:
None
Layer prefix config:
{'linear_0': <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680fd0>}
add_type_config(layer_type: type[Layer] | list[type[Layer]], activation: QuanterFactory | None = None, weight: QuanterFactory | None = None) → None

Set the quantization config by the type of layer. The layer_type should be a subclass of paddle.nn.Layer. Its priority is lower than that of add_layer_config and add_name_config.

Parameters
  • layer_type (type[Layer] | list[type[Layer]]) – One or a list of layer types. Each should be a subclass of paddle.nn.Layer. The Python built-in function type() can be used to get the type of a layer.

  • activation (QuanterFactory | None) – Quanter used for activations. Default is None.

  • weight (QuanterFactory | None) – Quanter used for weights. Default is None.

Examples

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)
>>> model = Model()
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_type_config([Linear], activation=quanter, weight=quanter)
>>> 
>>> print(q_config)
Global config:
None
Layer type config:
{<class 'paddle.nn.layer.common.Linear'>: <paddle.quantization.config.SingleLayerConfig object at 0x7fe41a680a60>}
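
The three setting methods above form a priority hierarchy: add_layer_config overrides add_name_config, which in turn overrides add_type_config, and all three override the global config passed to QuantConfig. Below is a minimal sketch combining a type-level and a layer-level rule (the moving_rate values are arbitrary, chosen only to tell the two quanters apart):

>>> import paddle
>>> from paddle.nn import Linear
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver

>>> class Model(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc = Linear(576, 120)
>>> model = Model()
>>> by_type = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> by_layer = FakeQuanterWithAbsMaxObserver(moving_rate=0.99)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> # Type-level rule: applies to every Linear layer in the model.
>>> q_config.add_type_config([Linear], activation=by_type, weight=by_type)
>>> # Layer-level rule: applies only to model.fc and takes precedence over the type-level rule.
>>> q_config.add_layer_config([model.fc], activation=by_layer, weight=by_layer)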
add_qat_layer_mapping(source: type[Layer], target: type[Layer]) → None

Add a rule for converting layers to simulated quantization layers before quantization-aware training. Layers of type source will be converted to layers of type target. Both source and target should be subclasses of paddle.nn.Layer. A default mapping is provided by the property default_qat_layer_mapping.

Parameters
  • source (type[Layer]) – The type of layers that will be converted.

  • target (type[Layer]) – The type of layers that will be converted to.

Examples

>>> import paddle
>>> from paddle.nn import Conv2D
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=None, weight=None)
>>> class CustomizedQuantedConv2D(paddle.nn.Layer):
...     def forward(self, x):
...         # add code for quantization simulation here
...         pass
>>> q_config.add_qat_layer_mapping(Conv2D, CustomizedQuantedConv2D)
add_customized_leaf(layer_type: type[Layer]) → None

Declare a customized layer as a leaf of the model for quantization. A leaf layer is quantized as a single layer; its sublayers will not be quantized separately.

Parameters
  • layer_type (type[Layer]) – The type of layer to be declared as a leaf.

Examples

>>> from paddle.nn import Sequential
>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_customized_leaf(Sequential)
property customized_leaves: list[type[Layer]]

Get all the customized leaves.
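
A minimal sketch of reading this property after registering a leaf type (assuming the returned list supports ordinary membership tests):

>>> from paddle.nn import Sequential
>>> from paddle.quantization import QuantConfig
>>> q_config = QuantConfig(activation=None, weight=None)
>>> q_config.add_customized_leaf(Sequential)
>>> # Sequential was registered above, so it appears among the customized leaves.
>>> Sequential in q_config.customized_leaves
True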

details() → str

Get the formatted details of the current config.
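
A minimal sketch, assuming details() returns the same formatted summary that print(q_config) displays:

>>> from paddle.quantization import QuantConfig
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> summary = q_config.details()  # formatted, human-readable description of the config
>>> isinstance(summary, str)
True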