Code: https://github.com/XuehaiPan/nvitop
If you train models on NVIDIA GPUs, you probably run `nvidia-smi` in the terminal out of habit: the most direct goal is to see which cards are busy and which are idle, and then pick an idle card for training.
「Knowing which card is idle is just an ordinary need of an ordinary algorithm engineer」
A senior algorithm engineer, after all, wears many hats: developing AI algorithms, administering servers, mentoring newcomers, and preparing slide decks for management.
On the server-administration side: a newly purchased server needs an OS installed, plus the NVIDIA driver, CUDA, and cuDNN. You also have to keep an eye on how every card is doing, because it is not unheard of for a fresh hire to grab over 95% of the server's memory in one careless session and grind the whole machine to a halt.
nvitop is a comprehensive real-time monitor for NVIDIA GPUs. It brings GPU utilization, memory usage, per-card users, CPU utilization, process runtime, command lines, and more into a single view, color-coded for quick reading. Installation is also very simple. Strongly recommended: it makes server administration far more efficient!
Figure 1 below compares the interfaces of nvitop and nvidia-smi:
「NviTop」An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.
In short, nvitop is an interactive, real-time monitor for NVIDIA GPU performance, resources, and processes.
Compared with the nvidia-smi command, nvitop has across-the-board advantages for real-time GPU monitoring:

- Richer monitoring information than nvidia-smi, presented more intuitively
- Can run continuously as a resource monitor, rather than only printing a one-off result
- Responds to keyboard and mouse input in monitor mode, an edge over gpustat and py3nvml
- Works on both Linux and Windows
- Easy to integrate into other applications, beyond mere monitoring (compared with nvidia-htop and nvtop)

For the full API documentation, see: https://nvitop.readthedocs.io
Some screenshots of the nvitop command-line tool are shown below:
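As a sketch of that last integration point: nvitop's Python API exposes `Device.all()` together with per-device query methods such as `name()`, `gpu_utilization()`, and `memory_percent()` (all in the documented API). The `summarize_devices()` wrapper below is our own illustration, not part of nvitop:

```python
# Sketch: programmatic GPU queries via nvitop's Python API.
# Device.all(), name(), gpu_utilization(), memory_percent() are documented;
# the summarize_devices() wrapper itself is a hypothetical helper.
def summarize_devices():
    try:
        from nvitop import Device
        devices = Device.all()
    except Exception:  # nvitop not installed, or no NVIDIA driver available
        return []
    return [
        {
            'index': device.index,
            'name': device.name(),
            'gpu_utilization': device.gpu_utilization(),
            'memory_percent': device.memory_percent(),
        }
        for device in devices
    ]

for row in summarize_devices():
    print(row)
```

Because the query is plain Python, a snippet like this can feed a dashboard, a cron job, or a pre-flight check in a training script, which is exactly the kind of embedding nvidia-htop and nvtop do not offer.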
「Official installation guide」https://github.com/XuehaiPan/nvitop/blob/main/README.md
Since nvitop is written in pure Python, installing it with pip is recommended. The official README documents five installation methods:
```bash
# 1. Run directly with pipx (no permanent install)
pipx run nvitop

# 2. Install from PyPI
pip3 install --upgrade nvitop

# 3. Install from conda-forge
conda install -c conda-forge nvitop

# 4. Install the latest version from GitHub
pip3 install git+https://github.com/XuehaiPan/nvitop.git#egg=nvitop

# 5. Clone and install from source
git clone --depth=1 https://github.com/XuehaiPan/nvitop.git
cd nvitop
pip3 install .
```
「Note」If you hit a `nvitop: command not found` error after installation, check whether the directory holding Python console scripts (e.g., `${HOME}/.local/bin`) is on your `PATH` environment variable. Alternatively, run `python3 -m nvitop` directly.
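As a quick fix (assuming a pip `--user` install, where console scripts land in `${HOME}/.local/bin` by default), you can prepend that directory to `PATH` and verify:

```shell
# Prepend the user-level console-script directory to PATH
# (assumption: pip used the default --user install scheme).
export PATH="${HOME}/.local/bin:${PATH}"

# Verify the directory is now on PATH.
case ":${PATH}:" in
  *":${HOME}/.local/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing" ;;
esac
```

Add the `export` line to `~/.bashrc` (or your shell's profile) to make it persistent.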
「Usage examples」Common command-line options (from the official README):

```bash
# Monitor mode (when the display mode is omitted, `NVITOP_MONITOR_MODE` will be used)
$ nvitop  # or use `python3 -m nvitop`

# Automatically configure the display mode according to the terminal size
$ nvitop -m auto  # shortcut: `a` key

# Arbitrarily display as `full` mode
$ nvitop -m full  # shortcut: `f` key

# Arbitrarily display as `compact` mode
$ nvitop -m compact  # shortcut: `c` key

# Specify query devices (by integer indices)
$ nvitop -o 0 1  # only show <GPU 0> and <GPU 1>

# Only show devices in `CUDA_VISIBLE_DEVICES` (by integer indices or UUID strings)
$ nvitop -ov

# Only show GPU processes with the compute context (type: 'C' or 'C+G')
$ nvitop -c

# Use ASCII characters only
$ nvitop -U  # useful for terminals without Unicode support

# For light terminals
$ nvitop --light

# For spectrum-like bar charts (requires a terminal with 256-color support)
$ nvitop --colorful
```
nvitop also ships a Python API that can be embedded into training code. The example below, from the official README, uses `ResourceMetricCollector` to log GPU resource metrics to TensorBoard alongside the training metrics:

```python
import os

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter

from nvitop import CudaDevice, ResourceMetricCollector
from nvitop.callbacks.tensorboard import add_scalar_dict

# Build networks and prepare datasets
...

# Logger and status collector
writer = SummaryWriter()
collector = ResourceMetricCollector(devices=CudaDevice.all(),  # log all visible CUDA devices and use the CUDA ordinal
                                    root_pids={os.getpid()},   # only log the descendant processes of the current process
                                    interval=1.0)              # snapshot interval for background daemon thread

# Start training
global_step = 0
for epoch in range(num_epoch):
    with collector(tag='train'):
        for batch in train_dataset:
            with collector(tag='batch'):
                metrics = train(net, batch)
                global_step += 1
                add_scalar_dict(writer, 'train', metrics, global_step=global_step)
                add_scalar_dict(writer, 'resources',  # tag='resources/train/batch/...'
                                collector.collect(),
                                global_step=global_step)

        add_scalar_dict(writer, 'resources',  # tag='resources/train/...'
                        collector.collect(),
                        global_step=epoch)

    with collector(tag='validate'):
        metrics = validate(net, validation_dataset)
    add_scalar_dict(writer, 'validate', metrics, global_step=epoch)
    add_scalar_dict(writer, 'resources',  # tag='resources/validate/...'
                    collector.collect(),
                    global_step=epoch)
```
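The `tag='resources/train/batch/...'` comments above reflect how collected metric names are prefixed before being written as TensorBoard scalars. A minimal, hypothetical illustration of that tag-prefixing scheme (the helper name and the sample metric key below are ours, not nvitop's API):

```python
def prefix_metrics(parent_tag, metrics):
    """Prefix every metric name with a parent tag, mirroring how
    collected resource metrics end up under 'resources/<tag>/...'."""
    return {f'{parent_tag}/{name}': value for name, value in metrics.items()}

# e.g. a metric collected under the 'train/batch' tag (sample key is made up)
sample = {'train/batch/gpu:0/memory_percent': 41.5}
print(prefix_metrics('resources', sample))
# → {'resources/train/batch/gpu:0/memory_percent': 41.5}
```

Each scalar then lands in its own TensorBoard chart group, so resource curves can be lined up against loss curves on the same step axis.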
「Details」https://nvitop.readthedocs.io/
This post introduced a real-time performance monitor for NVIDIA GPUs. Compared with other monitoring tools (e.g., nvidia-smi, nvidia-htop, py3nvml, nvtop), it comes out ahead across the board. Highly recommended!