...

  • access to MPI-based parallel programs
  • access to larger amounts of memory than 32-bit computers offer
  • access to the large public domain of scientific computing programs
  • access to multiple compilers
  • access to large amounts of storage
  • access to batch processing for running numerous independent serial jobs
  • access to 64-bit versions of programs you may already have on your 32-bit desktop

...

The following programs provide thread-based parallelism as well: Comsol and Matlab.
Note, by default Matlab sets the number of threads equal to the number of cores on a compute node; however, it is up to you to request the matching resources in slurm, as sketched below.
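For example, a minimal sbatch script that requests four cores and caps Matlab's threads to match (a sketch only: the matlab module name, the partition choice, and myscript.m are illustrative assumptions):

#!/bin/bash
#SBATCH --partition=batch       # illustrative partition choice
#SBATCH --cpus-per-task=4       # cores to back Matlab's threads
module load matlab
# Cap Matlab's thread count at the number of cores slurm granted
matlab -nodisplay -nosplash -r "maxNumCompThreads(str2double(getenv('SLURM_CPUS_PER_TASK'))); myscript; exit"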

When does 64-bit computing matter?

...

How do I run a compiled MPI-based program across two nodes using 8 cores?

> salloc -N2 -n8 -p mpi

> module load openmpi

> srun   ...srun options...   yourcode

 

This will run your executable on the mpi partition using OpenMPI.  See the slurm section of the wiki for further examples.
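The same two-node, 8-core run can also be expressed as a batch job. A minimal sketch (the script and executable names are illustrative):

#!/bin/bash
#SBATCH --partition=mpi     # parallel jobs run on the mpi partition
#SBATCH --nodes=2           # two nodes
#SBATCH --ntasks=8          # eight MPI ranks total
module load openmpi
srun ./yourcode

Submit it with sbatch, e.g. > sbatch myjob.sh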

...


Can parallel jobs be sent to any slurm partition?
No. Parallel jobs are supported only on the slurm mpi partition, which has a limit of 128 cores/cpus per job request.
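For example, a request at that limit would look like (a sketch):

> salloc -n128 -p mpi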

...

Try the broadcast example found in the PGI directory tree:
/opt/shared/pgi/linux86-64/7.2-3/EXAMPLES/MPI/mpihello/myname.c

As an example that requests 8 cores:

> module load pgi

> module load openmpi

> pgcc myname.c -o myname -Mmpi -I/opt/pgi/linux86-64/7.2-3/include/
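Assuming the 8-core allocation from the salloc example above is still active, the resulting binary can then be launched with:

> srun -n8 ./myname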

Note, there may be other versions of pgi and openmpi available via modules:

> module available


Where are the Portland executables?
When you load the module for Portland, all executables will be on your path.
> ls /opt/shared/pgi/linux86-64/7.2-3/bin

...

See the attached Intel white paper (pdf) for an introduction.

GPU computing and CUDA resources

...

There are three nVidia GPU types available: 

GPU      quantity   compute nodes        partition
K20      2          alpha025, omega025   gpu
M2070    2          m4c29, m4c60         m4
M2050    2          m3n45, m3n46         batch

 

GPU processing is an excellent means to achieve shorter run times for many algorithms. There are several approaches to using this resource. One is to program in nVidia's programming language, Cuda. Another approach is to use Matlab and other commercial applications that have GPU support, such as Mathematica, Maple, Abaqus and Ansys.  Note, nVidia's Cuda language and applications such as Matlab require specific coding to use gpu resources.

Note: the available versions of Cuda and the sdk will change over time. Check the current versions with the module command:
> module available

You'll find the CUDA toolkit in /opt/shared/cudatoolkit and the GPU computing SDK in /opt/shared/gpucomputingsdk. The SDK contains a number of CUDA sample C applications that can be found at /opt/shared/gpucomputingsdk/4.2.9/C. Compiled samples can be found in /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release.
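For example, to list the prebuilt sample binaries:

> ls /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release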

How does one find out gpu-specific info?
> srun  -p gpu  /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release/deviceQuery

Another option:

> srun -p gpu    nvidia-smi -a

...

> cp /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release/simpleStreams    .
> module load cuda
> srun  -p gpu   ./simpleStreams

To view a description of the sample cuda codes from the command line:
> lynx file:///opt/shared/gpucomputingsdk/4.2.9/C/Samples.html
or
> firefox file:///opt/shared/gpucomputingsdk/4.2.9/C/Samples.html

Cuda versions are only available on compute nodes, under the directory /usr/local/.  This means that you will need to compile on a compute node and not on the headnode.  Loading the cuda version 6 module will modify your shell environment, providing access.

> module load cuda/6.5.12

To see the details of what the module sets:

> module display cuda/6.5.12

To obtain bash shell access on a gpu node to compile a cuda program:

>  srun --pty --x11=first -p gpu bash
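Once on the node, a minimal compile might look like this (the source file name is illustrative):

> module load cuda/6.5.12
> nvcc my_kernel.cu -o my_kernel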

The name of the cuda compiler is nvcc, and it and other tools can be found in:
/opt/shared/cudatoolkit/4.2.9/cuda/bin
The nvcc command-line help is obtained with:
> nvcc -h
Also, you can view local cuda pdf docs on the cluster using evince.

Where is the html and pdf documentation located?

/usr/local/cuda-6.5/doc

To view pdf docs from the command line using an X server:
> evince /usr/local/cuda-6.5/doc/pdf/CUDA_C_Best_Practices_Guide.pdf
Other pdf documents are in:
/opt/shared/cudatoolkit/4.2.9/cuda/doc/ and /opt/shared/gpucomputingsdk/4.2.9/C/doc/

How does one find out gpu-specific and performance info?
> deviceQuery

or

> nvidia-smi

Where is the deviceQuery command, and how does one find out more device info?

> which deviceQuery

> srun --pty --x11=first -p gpu deviceQuery

> srun --pty --x11=first -p gpu nvidia-smi  -h


How does one make a gpu batch job?

Your compiled cuda program is submitted via a script that slurm's sbatch command can read.  Use a text editor to create a file called device_query.sh.  Here we run the deviceQuery command:

#!/bin/bash
#SBATCH --partition=gpu            # run on a node in the gpu partition
#SBATCH -c 2                       # request 2 cpu cores
#SBATCH --output=gpu.%N.%j.out     # stdout file (%N = node name, %j = job id)
#SBATCH --error=gpu.%N.%j.err      # stderr file
module load cuda/6.5.12
deviceQuery > my_device_results.out

To submit this file to the gpu partition,  run:

> sbatch device_query.sh
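You can then watch the job with a standard slurm command, and inspect my_device_results.out when it finishes:

> squeue -u $USER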


What gpu libraries are available on the cluster for linear algebra methods?
Cuda has support, and additional support can be found in the Cula routines.  Cula addresses dense and sparse matrix methods, and access is via the module environment. To see what is current:
> module available
The Cula install directory is /opt/shared/cula/.
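A minimal sketch of pulling Cula into your environment (the exact module name is an assumption; confirm it with module available):

> module load cula

The headers and libraries under /opt/shared/cula/ can then be added to your compiler's include and link paths.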

...

There are many Cuda programming resources on the web, and of course the nVidia Cuda website.

Stanford Seminars on High Performance Computing with CUDA
Stanford has posted videos from the Spring 2011 seminar series held at the Institute for Computational and Mathematical Engineering (ICME). The ICME is directed by Professor Margot Gerritsen.

  • Lecture 1: Intro to HPC with CUDA 1 (Cyril Zeller)
  • Lecture 2: Intro to HPC with CUDA 2 (Justin Luitjens)
  • Lecture 3: Optimizations 1 - Global Memory (Inderaj Bains)
  • Lecture 4: Optimizations 2 - Shared Memory (Steven Rennich)
  • Lecture 5: Finite Difference Stencils on Regular Grids (Paulius Micikevicius)


HPC & GPU Supercomputing Group of Boston
A group for the application of cutting-edge HPC & GPU supercomputing technology to cutting-edge business problems.Prof. Lorena Barba’s research group at Boston University
She is a computational scientist and fluid dynamicist with research interests including GPU computing.

Look around the web as there are many similar GPU resources.