memory_summary
paddle.device.cuda.memory_summary(device: _CudaPlaceLike | None = None) → None [source]
Get a detailed summary of CUDA memory usage for the specified device. The summary is printed directly to the terminal in three distinct sections: Global Summary, Allocator Summary, and Distribution.
Parameters
device (paddle.CUDAPlace|int|str|None, optional) – The device, the id of the device, or the string name of the device, e.g. 'gpu:x'. If device is None, the current device is used. Default: None.
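The device parameter accepts several specifier forms that all resolve to a single GPU id. A minimal sketch of that normalization is shown below; parse_device_id is a hypothetical helper written for illustration, not a Paddle API, and the current-device fallback is simplified to a plain default argument.

```python
def parse_device_id(device, current_id=0):
    """Normalize a device specifier (None, int, or 'gpu:x') to an int id.

    Hypothetical helper for illustration only; not part of Paddle's API.
    """
    if device is None:
        return current_id          # None -> fall back to the current device
    if isinstance(device, int):
        return device              # already a device id
    if isinstance(device, str):
        prefix, _, idx = device.partition(':')
        if prefix == 'gpu' and idx.isdigit():
            return int(idx)        # 'gpu:3' -> 3
        raise ValueError(f"unsupported device string: {device!r}")
    raise TypeError(f"unsupported device type: {type(device).__name__}")

print(parse_device_id('gpu:1'))  # 1
```

In the real API, a paddle.CUDAPlace instance is also accepted; it is omitted here to keep the sketch self-contained.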
The summary includes:

1. Global Summary: GPU utilization rates and physical memory information (similar to nvidia-smi).
2. Allocator Summary: Memory allocated by PaddlePaddle's allocator (Total, Used, Free), including a Weighted Fragmentation Rate.
3. Distribution: A wide pivot table showing the size distribution of allocated blocks (split at common sizes such as 1M, 10M, … 3G).
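The Weighted Fragmentation Rate indicates how scattered the allocator's free memory is. Paddle's exact formula is not specified on this page; the sketch below uses an assumed concentration-style metric over free-block sizes purely to illustrate the idea that one large free block means low fragmentation while many small blocks mean high fragmentation.

```python
def weighted_fragmentation_rate(free_blocks):
    """Return an illustrative fragmentation rate in [0, 1].

    ASSUMPTION: this is 1 minus the sum of squared free-block shares,
    not necessarily the formula Paddle's allocator reports. A single
    free block scores 0; many equal small blocks score close to 1.
    """
    total_free = sum(free_blocks)
    if total_free == 0:
        return 0.0
    return 1.0 - sum((b / total_free) ** 2 for b in free_blocks)

print(weighted_fragmentation_rate([1024]))                # 0.0
print(weighted_fragmentation_rate([256, 256, 256, 256]))  # 0.75
```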
Examples
>>> import paddle
>>> paddle.device.set_device('gpu')  # or '<custom_device>'
>>> paddle.device.cuda.memory_summary(0)
