...

  • access to MPI-based parallel programs
  • access to larger amounts of memory than 32-bit computers offer
  • access to the large public domain of scientific computing programs
  • access to multiple compilers
  • access to large amounts of storage
  • access to batch processing for running numerous independent serial jobs
  • access to 64-bit versions of programs you may already have on your 32-bit desktop

...

The following programs also provide thread-based parallelism: Comsol, Matlab.
Note: by default, Matlab sets its number of computational threads equal to the number of cores on a compute node. However, it is up to you to request a matching number of cores from Slurm.
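As a hedged sketch of keeping Matlab's thread count in step with the Slurm allocation (the module name and the script name `my_script.m` below are assumptions; check `module avail` for the actual Matlab module on this cluster):

```shell
#!/bin/bash
#SBATCH -c 4                        # request 4 cores for one task
#SBATCH --output=matlab.%N.%j.out   # stdout, named by node and job id

# Hypothetical module name; see `module avail` for the real one.
module load matlab

# Cap Matlab's computational threads at the Slurm allocation so the job
# does not oversubscribe the node. SLURM_CPUS_PER_TASK is set by Slurm
# to match the -c value above. my_script.m is a placeholder.
matlab -nodisplay -nosplash -r \
  "maxNumCompThreads(str2double(getenv('SLURM_CPUS_PER_TASK'))); run('my_script.m'); exit"
```

Without the `maxNumCompThreads` call, Matlab would assume it owns every core on the node, regardless of what Slurm actually granted.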

When does 64-bit computing matter?

...

See the attached Intel white paper (PDF) for an introduction.

GPU computing and CUDA resources

There are three NVIDIA GPU types available:

...

Where is the deviceQuery command, and how can you find out more device information?

> which deviceQuery

> srun --pty --x11=first -p gpu deviceQuery

> srun --pty --x11=first -p gpu nvidia-smi -h


How does one make a GPU batch job?

Your compiled CUDA program is submitted via a script that Slurm's sbatch command can read. Use a text editor to create a file called device_query.sh. Here we run the deviceQuery command:

#!/bin/bash
#SBATCH --partition=gpu          # run on the gpu partition
#SBATCH -c 2                     # request 2 cores
#SBATCH --output=gpu.%N.%j.out   # stdout, named by node (%N) and job id (%j)
#SBATCH --error=gpu.%N.%j.err    # stderr
module load cuda/6.5.12
deviceQuery > my_device_results.out

To submit this file to the gpu partition, run:

> sbatch device_query.sh
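Once submitted, you can check on the job with standard Slurm commands (a sketch; the file names come from the `--output`/`--error` patterns in the script above):

```shell
# Show your queued and running jobs
squeue -u $USER

# After the job finishes, inspect the results and the captured streams
cat my_device_results.out
cat gpu.*.err    # should be empty if the job ran cleanly
```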


What GPU libraries are available on the cluster for linear algebra methods?

CUDA itself ships with linear algebra support (e.g. cuBLAS), and additional support can be found in the CULA routines. CULA addresses dense and sparse matrix methods, and access is via the module environment. To see what is current:

> module avail

The CULA install directory is /opt/shared/cula/
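As a hedged sketch of compiling a program against CULA (the module name, source file name, and environment variable names below are assumptions; CULA's own documentation refers to `CULA_INC_PATH` and `CULA_LIB_PATH_64`, but verify what the module on this cluster actually exports):

```shell
# Load the CUDA and CULA modules (names are assumptions; see `module avail`)
module load cuda/6.5.12
module load cula

# Compile a CUDA source file (my_solver.cu is a placeholder) against the
# CULA dense LAPACK routines. Verify the variable and library names against
# the installed CULA release under /opt/shared/cula/.
nvcc my_solver.cu -o my_solver \
    -I"$CULA_INC_PATH" -L"$CULA_LIB_PATH_64" -lcula_lapack
```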

...