====== GPU Resources ======
This is a collaborative resource, please improve it. Log in using your MCIN user name and ID and add your discoveries.

===== Items of Interest / for Discussion? =====

==== Resources ====

  * [ OpenACC - Tutorial - Steps to More Science ](https://)

"Here are three simple steps to start accelerating your code with GPUs. We will be using PGI OpenACC compiler for C, C++, FORTRAN, along with tools from the PGI Community Edition."

  * [ Performance Portability from GPUs to CPUs with OpenACC ](https://)

  * [ Data Center Management Tools ](http://)
    * The GPU Deployment Kit
    * Ganglia
    * Slurm
    * NVIDIA Docker
    * Others???
| + | ===== Preventing Job Clobbering ===== | ||
| + | |||
| + | There are currently 3 GPU's in ace-gpu-1. To select one of the three (0, 1, 2), set the CUDA_VISIBLE_DEVICES environment variable. This can be accomplished by adding the following line to your ~/ | ||
| + | |||
| + | < | ||
| + | export CUDA_VISIBLE_DEVICES=X | ||
| + | </ | ||
| + | |||
| + | This will only take effect when you log in, so log out and back in and try the following to ensure that it worked: | ||
| + | |||
| + | < | ||
| + | echo $CUDA_VISIBLE_DEVICES | ||
| + | </ | ||
| + | |||
| + | If it outputs the ID that you selected then you're ready to use the GPU. | ||
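
The same selection can also be made per job instead of globally; a minimal Python sketch (the device index and the TensorFlow import are illustrative assumptions, not site requirements):

<code>
import os

# Pick GPU 1 of the three (0, 1, 2) for this process only.
# This must be set before any CUDA-using library initializes the GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf  # TensorFlow now sees only the selected GPU
</code>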
| + | |||
| + | ==== Sharing a single GPU ==== | ||
| + | To configure TensorFlow to not pre-allocate all GPU memory you can use the following Python code: | ||
| + | |||
| + | < | ||
| + | # configures TensorFlow to not try to grab all the GPU memory | ||
| + | config = tf.ConfigProto(allow_soft_placement=True) | ||
| + | config.gpu_options.allow_growth = True | ||
| + | session = tf.Session(config=config) | ||
| + | K.set_session(session) | ||
| + | </ | ||
| + | |||
| + | This has been found to work only to a certain extent, and when there are several jobs that use a significant amount of the GPU resources, jobs can still be ruined even when using the above code | ||
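
If growth-on-demand alone is not enough, TensorFlow 1.x can also hard-cap the fraction of GPU memory a process may claim; a sketch along the same lines as above (the 0.3 fraction is an arbitrary example value, not a site policy):

<code>
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto(allow_soft_placement=True)
# Cap this process at roughly 30% of the card's memory (example value;
# tune it to how many jobs are expected to share the GPU)
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)
K.set_session(session)
</code>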

===== GPU Info =====
| + | |||
| + | For CPU and GPU usage: | ||
| + | |||
| + | < | ||
| + | glances | ||
| + | </ | ||
| + | |||
| + | Other info | ||
| < | < | ||
| Line 24: | Line 85: | ||
| </ | </ | ||
| + | Nvidia Visual Profiler (https:// | ||
| + | < | ||
| + | / | ||
| + | </ | ||
| + | |||
| + | |||
| + | ===== GPU Accounting ===== | ||
| + | |||
| + | SysAdmins: to enable Accounting mode | ||
| + | < | ||
| + | sudo nvidia-smi -i 0 -am ENABLED | ||
| + | </ | ||
| + | |||
| + | Users: to check if Accounting mode enabled or disabled | ||
| + | < | ||
| + | nvidia-smi -i 0 -q -d ACCOUNTING | ||
| + | </ | ||
| + | |||
| + | Output example: | ||
| + | |||
| + | < | ||
| + | ==============NVSMI LOG============== | ||
| + | |||
| + | Timestamp | ||
| + | Driver Version | ||
| + | |||
| + | Attached GPUs : 1 | ||
| + | GPU 0000: | ||
| + | Accounting Mode : Enabled | ||
| + | Accounting Mode Buffer Size : 1920 | ||
| + | Accounted Processes | ||
| + | Process ID : 15819 | ||
| + | GPU Utilization | ||
| + | Memory Utilization | ||
| + | Max memory usage : 187 MiB | ||
| + | Time : 3769 ms | ||
| + | Is Running | ||
| + | ... | ||
| + | </ | ||
| + | Users: to check GPU stats per process: | ||
| + | < | ||
| + | nvidia-smi -i 0 --query-accounted-apps=gpu_name, | ||
| + | </ | ||
| + | |||
| + | Output example: | ||
| + | |||
| + | < | ||
| + | gpu_name, pid, gpu_utilization [%], max_memory_usage [MiB], time [ms] | ||
| + | TITAN X (Pascal), 15819, 100 %, 187 MiB, 3769 ms | ||
| + | TITAN X (Pascal), 15633, 87 %, 8465 MiB, 200626 ms | ||
| + | TITAN X (Pascal), 15944, 0 %, 153 MiB, 382 ms | ||
| + | TITAN X (Pascal), 16000, 0 %, 155 MiB, 299 ms | ||
| + | TITAN X (Pascal), 15862, 80 %, 8465 MiB, 215039 ms | ||
| + | TITAN X (Pascal), 15842, 41 %, 425 MiB, 721223 ms | ||
| + | TITAN X (Pascal), 16294, 74 %, 8465 MiB, 231517 ms | ||
| + | TITAN X (Pascal), 16436, 70 %, 10425 MiB, 229470 ms | ||
| + | TITAN X (Pascal), 16118, 40 %, 155 MiB, 1310156 ms | ||
| + | TITAN X (Pascal), 16908, 72 %, 8465 MiB, 511122 ms | ||
| + | TITAN X (Pascal), 17102, 73 %, 8465 MiB, 833806 ms | ||
| + | TITAN X (Pascal), 17900, 0 %, 153 MiB, 358 ms | ||
| + | TITAN X (Pascal), 18018, 0 %, 153 MiB, 235 ms | ||
| + | TITAN X (Pascal), 17632, 75 %, 8465 MiB, 823193 ms | ||
| + | TITAN X (Pascal), 18376, 74 %, 8529 MiB, 827336 ms | ||
| + | TITAN X (Pascal), 18637, 74 %, 8465 MiB, 547161 ms | ||
| + | TITAN X (Pascal), 16377, 54 %, 153 MiB, 0 ms | ||
| + | TITAN X (Pascal), 18752, 55 %, 8465 MiB, 0 ms | ||
| + | </ | ||
| + | |||
| + | Users: Accounting help | ||
| + | < | ||
| + | nvidia-smi --help-query-accounted-apps | ||
| + | </ | ||
| + | |||
| + | ==== nvidia-smi flags used ==== | ||
| + | |||
| + | < | ||
| + | -i, | ||
| + | -am | ||
| + | -q, | ||
| + | -d, | ||
| + | UTILIZATION, | ||
| + | COMPUTE, PIDS, PERFORMANCE, | ||
| + | PAGE_RETIREMENT, | ||
| + | Flags can be combined with comma e.g. ECC,POWER. | ||
| + | Sampling data with max/min/avg is also returned | ||
| + | for POWER, UTILIZATION and CLOCK display types. | ||
| + | Doesn' | ||
| + | </ | ||
| + | |||
| + | * [[http:// | ||
| + | |||
| + | * [[http:// | ||

===== Deep Learning =====