Summary
When I run a simple GPU-enabled program (e.g., a PyTorch or TensorFlow script doing a single matrix multiplication) in Sublime Text with the "build" command (Ctrl+B), the Python process's GPU memory is still allocated after the program has finished running.
I am using NVIDIA Titan Xp GPUs. The output I see looks like this:
nvidia-smi
Thu Feb  8 12:36:59 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 387.34                 Driver Version: 387.34                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:0B:00.0 Off |                  N/A |
| 31%   54C    P2    88W / 250W |   3723MiB / 12189MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN Xp            Off  | 00000000:41:00.0 Off |                  N/A |
| 39%   64C    P2    81W / 250W |    458MiB / 12189MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  TITAN Xp            Off  | 00000000:42:00.0  On |                  N/A |
| 23%   37C    P8    17W / 250W |    508MiB / 12186MiB |     11%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     30427      C   python                                       407MiB |
|    0     44580      C   /usr/bin/python3                             355MiB |
|    0     44618      C   /usr/bin/python3                             355MiB |
|    0     44643      C   /usr/bin/python3                             389MiB |
|    0     44668      C   /usr/bin/python3                             355MiB |
|    0     44691      C   /usr/bin/python3                             389MiB |
|    0     44718      C   /usr/bin/python3                             429MiB |
|    0    122351      C   /usr/NX/bin/nxnode.bin                       204MiB |
|    1     30427      C   python                                       383MiB |
|    2      1223      G   /usr/lib/xorg/Xorg                           214MiB |
|    2      1891      G   compiz                                       218MiB |
+-----------------------------------------------------------------------------+
…
Expected behavior
I would expect the GPU memory to be released when the program exits, as it is with any other TensorFlow or PyTorch program I run outside of Sublime's "build" command.
When I completely shut down Sublime, I get the behavior I would expect:
nvidia-smi
Thu Feb  8 12:43:31 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 387.34                 Driver Version: 387.34                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:0B:00.0 Off |                  N/A |
| 31%   53C    P2    88W / 250W |   1402MiB / 12189MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN Xp            Off  | 00000000:41:00.0 Off |                  N/A |
| 39%   64C    P2    80W / 250W |    410MiB / 12189MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  TITAN Xp            Off  | 00000000:42:00.0  On |                  N/A |
| 23%   36C    P8    17W / 250W |    460MiB / 12186MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     30427      C   python                                       407MiB |
|    0    122351      C   /usr/NX/bin/nxnode.bin                       204MiB |
|    1     30427      C   python                                       383MiB |
|    2      1223      G   /usr/lib/xorg/Xorg                           203MiB |
|    2      1891      G   compiz                                       228MiB |
+-----------------------------------------------------------------------------+
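For anyone triaging this, the leftover processes can be read off mechanically by diffing the PID columns of the two process tables. A minimal sketch; the helper name and the hard-coded PID lists are mine, transcribed from the two snapshots:

```python
def lingering_pids(before, after):
    """PIDs present in the first nvidia-smi process table but gone
    from the second, i.e. the ones released only when Sublime closed."""
    return sorted(set(before) - set(after))

# PID columns transcribed from the two snapshots above
before = [30427, 44580, 44618, 44643, 44668, 44691, 44718, 122351,
          30427, 1223, 1891]
after = [30427, 122351, 30427, 1223, 1891]

print(lingering_pids(before, after))
# -> [44580, 44618, 44643, 44668, 44691, 44718]
```

Every lingering PID here is one of the /usr/bin/python3 compute processes started by the build, which matches the summary.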
Actual behavior
As described in the Summary: the GPU memory allocated by the build remains held until Sublime Text is completely shut down.
Steps to reproduce
- Write a small PyTorch/TensorFlow program. Something as simple as:

  import torch

  # two random matrices allocated directly on the GPU
  x = torch.cuda.FloatTensor(1000, 50000).normal_()
  y = torch.cuda.FloatTensor(20, 50000).random_()
  print(x)
  z = torch.mm(x, y.t())  # (1000, 50000) x (50000, 20) -> (1000, 20)
  print(z)

- Check NVIDIA GPU memory usage from the command line:

  nvidia-smi

- Run the program with Sublime's build command (Ctrl+B) and see whether those memory allocations are still listed by nvidia-smi after the program finishes.
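The nvidia-smi check can also be scripted: the tool has a machine-readable query mode (`nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader`). Below is a hedged sketch that parses that output and keeps only python compute processes; the helper names are my own, and the live query of course requires a machine with the NVIDIA driver installed:

```python
import subprocess

def python_compute_procs(csv_text):
    """Parse CSV rows from
    `nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv,noheader`
    into (pid, name, used_memory) tuples, keeping only python processes."""
    procs = []
    for line in csv_text.strip().splitlines():
        pid, name, mem = [field.strip() for field in line.split(",")]
        if "python" in name:
            procs.append((int(pid), name, mem))
    return procs

def query_gpu():
    """Run the live query (needs nvidia-smi on PATH)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv,noheader"], text=True)
    return python_compute_procs(out)

# Offline example using two rows transcribed from the snapshot above:
sample = "44580, /usr/bin/python3, 355 MiB\n122351, /usr/NX/bin/nxnode.bin, 204 MiB"
print(python_compute_procs(sample))
# -> [(44580, '/usr/bin/python3', '355 MiB')]
```

Running the query before and after the build makes the lingering allocations easy to spot without reading the full table.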
Environment
- Operating system and version: Ubuntu 16.04
- Sublime Text: Build 3