...
GPU computing and CUDA resources
As part of the summer 2011 research cluster upgrade, one compute node was provisioned with two Nvidia Tesla M2050 GPUs. There are now three Nvidia GPU types available:
GPU | Quantity | Compute nodes | Partition
---|---|---|---
K20 | 2 | alpha025, omega025 | gpu
M2070 | 2 | m4c29, m4c60 | m4
M2050 | 2 | m3n45, m3n46 | m3
GPU processing is an excellent means to achieve shorter run times for many algorithms. There are several approaches to using this resource. One is to program in Nvidia's CUDA language. Another is to use Matlab or other commercial applications that have GPU support, such as Mathematica, Maple, Abaqus, and Ansys. Note that Nvidia's CUDA language and applications such as Matlab require specific coding to use GPU resources.
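To give a sense of what that GPU-specific coding looks like, here is a minimal, hypothetical CUDA C sketch of a vector-add program (not one of the cluster's sample codes); it follows the usual allocate, copy, launch, copy-back pattern:

// Hypothetical example: add two vectors on the GPU.
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);   /* expect 3.0 */

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

Such a source file (assumed here to be named vector_add.cu) is compiled with the nvcc compiler described later on this page.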
Note: The available versions of CUDA and the SDK change over time. Check the current versions with the module command.
> module available
You'll find the CUDA toolkit in /opt/shared/cudatoolkit and the GPU computing SDK in /opt/shared/gpucomputingsdk. The SDK contains a number of CUDA C sample applications in /opt/shared/gpucomputingsdk/4.2.9/C; the compiled samples are in /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release.
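To list the precompiled samples:
> ls /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release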
How does one find GPU-specific information?
> srun -p gpu /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release/deviceQuery
Another option:
> srun -p gpu nvidia-smi -a
...
> cp /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release/simpleStreams .
> module load cuda
> srun -p gpu ./simpleStreams
To view a description of the CUDA sample codes from the command line:
> lynx file:///opt/shared/gpucomputingsdk/4.2.9/C/Samples.html
or
> firefox file:///opt/shared/gpucomputingsdk/4.0.17/C/Samples.html
CUDA versions are available only on the compute nodes, under the directory /usr/local/. This means you will need to compile on a compute node and not on the head node. Loading the version 6 cuda module will modify your shell environment to provide access:
> module load cuda/6.5.12
To obtain interactive bash shell access on a GPU node to compile a CUDA program:
> srun --pty --x11=first -p gpu bash
The CUDA compiler is nvcc; it and other tools can be found in:
/opt/shared/cudatoolkit/4.2.9/cuda/bin
The nvcc command-line help is obtained with:
> nvcc -h
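For example, a small CUDA program such as the vector-add sketch above could be compiled and run along these lines (the file and executable names are hypothetical):
> nvcc -o vector_add vector_add.cu
> srun -p gpu ./vector_add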
You can also view the local CUDA HTML and PDF documentation on the cluster, for example with a browser or evince. Where is it located?
/usr/local/cuda-6.5/doc
To view the PDF docs from the command line, choose one:
> evince /usr/local/cuda-6.5/doc/pdf/CUDA_C_Programming_Guide.pdf
> evince /usr/local/cuda-6.5/doc/pdf/CUDA_C_Best_Practices_Guide.pdf
Other PDF documents are in:
/opt/shared/cudatoolkit/4.2.9/cuda/doc/ and /opt/shared/gpucomputingsdk/4.2.9/C/doc/
How does one find GPU-specific and performance information?
> deviceQuery
or
> nvidia-smi
Where is the deviceQuery command?
> which deviceQuery
What GPU libraries are available on the cluster for linear algebra methods?
CUDA itself includes linear algebra support, and additional support can be found in the CULA routines. CULA addresses dense and sparse matrix methods, and access is via the module environment. To see what is currently available:
> module available
The CULA install directory is /opt/shared/cula/.
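For illustration only, here is a minimal, hypothetical sketch of dense linear algebra on the GPU using cuBLAS, which ships with the CUDA toolkit (CULA's interface differs; consult its documentation under /opt/shared/cula/ for the equivalent calls):

// Hypothetical example: single-precision matrix multiply C = A * B with cuBLAS.
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 512;                        /* square matrices for simplicity */
    size_t bytes = (size_t)n * n * sizeof(float);

    // Host matrices with simple values (column-major, as cuBLAS expects).
    float *h_A = (float *)malloc(bytes);
    float *h_B = (float *)malloc(bytes);
    float *h_C = (float *)malloc(bytes);
    for (int i = 0; i < n * n; i++) { h_A[i] = 1.0f; h_B[i] = 2.0f; }

    // Device matrices.
    float *d_A, *d_B, *d_C;
    cudaMalloc((void **)&d_A, bytes);
    cudaMalloc((void **)&d_B, bytes);
    cudaMalloc((void **)&d_C, bytes);
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

    // C = alpha * A * B + beta * C, computed on the GPU.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, d_A, n, d_B, n, &beta, d_C, n);

    cudaMemcpy(h_C, d_C, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", h_C[0]);            /* expect 2.0 * n */

    cublasDestroy(handle);
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    free(h_A); free(h_B); free(h_C);
    return 0;
}

A program like this would be compiled with something along the lines of (file names hypothetical):
> nvcc -o gemm_demo gemm_demo.cu -lcublas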
...
Look around the web, as there are many similar GPU computing resources.