| </ | </ | ||
This will only take effect when you log in, so log out and back in, then try the following to ensure that it worked:

<code>
echo $CUDA_VISIBLE_DEVICES
</code>

If it outputs the ID that you selected, then you're ready to use the GPU.
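You can run the same check from inside Python before importing any CUDA-aware framework (a minimal sketch using only the standard library):

<code python>
import os

# CUDA-aware frameworks read this variable at startup;
# if it is unset, all GPUs on the machine are visible to the process.
gpu_ids = os.environ.get("CUDA_VISIBLE_DEVICES")
print(gpu_ids)
</code>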
| + | |||
| + | ==== Sharing a single GPU ==== | ||
To configure TensorFlow to not pre-allocate all GPU memory you can use the following Python code:

<code python>
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grab GPU memory on demand instead of all at once
session = tf.Session(config=config)
</code>
This has been found to work only to a certain extent: when several jobs each use a significant amount of GPU memory, jobs can still be ruined even when using the above code.
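If on-demand allocation is not enough, one stricter measure (a sketch assuming the TensorFlow 1.x ConfigProto API; the 0.4 value is illustrative) is to cap the fraction of GPU memory each process may claim, so that jobs sharing the card cannot starve each other:

<code python>
import tensorflow as tf

config = tf.ConfigProto()
# Hard cap: this process may claim at most ~40% of the GPU's memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)
</code>

Pick a fraction small enough that all jobs expected to share the GPU fit in its memory at the same time.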
===== GPU Info =====