memory_summary

paddle.device.cuda.memory_summary(device: _CudaPlaceLike | None = None) → None [source]

Print a detailed summary of the CUDA memory usage for the specified device, organized into three sections: Global Summary, Allocator Summary, and Distribution. The summary is written directly to the terminal; nothing is returned.

Parameters

device (paddle.CUDAPlace|int|str|None, optional) – The device, the id of the device or the string name of device like ‘gpu:x’. If device is None, the device is the current device. Default: None.
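To make the accepted forms of the device argument concrete, here is an illustrative sketch (not the Paddle implementation) of how an int id, a 'gpu:x' string, or None might be normalized to a plain device id; `current_device_id` is a hypothetical stand-in for querying the active device:

```python
def normalize_device(device, current_device_id=0):
    """Map int | 'gpu:x' | None to an integer device id.

    Illustrative only: Paddle's real resolution logic also accepts
    paddle.CUDAPlace objects, which this sketch omits.
    """
    if device is None:
        return current_device_id              # fall back to the current device
    if isinstance(device, int):
        return device                         # already a device id
    if isinstance(device, str) and device.startswith("gpu:"):
        return int(device.split(":", 1)[1])   # parse 'gpu:3' -> 3
    raise ValueError(f"unsupported device spec: {device!r}")
```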

The summary includes:

1. Global Summary: GPU utilization rates and physical memory information (similar to nvidia-smi).
2. Allocator Summary: Memory allocated by PaddlePaddle's allocator (Total, Used, Free), including a Weighted Fragmentation Rate.


3. Distribution: A wide pivot table showing the size distribution of allocated blocks (split by common sizes like 1M, 10M, … 3G).
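To illustrate what the Distribution section tabulates, here is a minimal sketch (not Paddle source) that buckets allocated block sizes into size classes like those named above; the exact bin edges are assumptions based on the sizes mentioned in the docs (1M, 10M, … 3G):

```python
MB = 1024 * 1024
GB = 1024 * MB

# Hypothetical bin upper bounds, smallest to largest (assumed, for illustration).
BINS = [(1 * MB, "<1M"), (10 * MB, "<10M"), (100 * MB, "<100M"),
        (1 * GB, "<1G"), (3 * GB, "<3G")]

def bucket(size_bytes: int) -> str:
    """Return the size-class label for one allocated block."""
    for upper, label in BINS:
        if size_bytes < upper:
            return label
    return ">=3G"

def distribution(block_sizes):
    """Count blocks per size class, like one row of the pivot table."""
    counts = {label: 0 for _, label in BINS}
    counts[">=3G"] = 0
    for size in block_sizes:
        counts[bucket(size)] += 1
    return counts
```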

Examples

>>> import paddle
>>> paddle.device.set_device('gpu')  # or '<custom_device>'
>>> paddle.device.cuda.memory_summary(0)