...
How do I run a compiled MPI-based program across two nodes using 8 cores?
> salloc -N2 -n8 -p mpi
> module load openmpi
> srun yourcode
This will submit your executable to the mpi slurm partition using openmpi, requesting 8 cpus across two nodes.
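If you prefer a batch submission over an interactive salloc session, a minimal job script along the following lines should work (the script and executable names, yourjob.sh and yourcode, are placeholders):
#!/bin/bash
#SBATCH -N 2            # two nodes
#SBATCH -n 8            # eight tasks (cores) in total
#SBATCH -p mpi          # mpi slurm partition
module load openmpi
srun ./yourcode
Submit it with:
> sbatch yourjob.sh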
Is there a slurm partition for testing parallel programs that require a short run time?
No. The mpi partition is the option to use.
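For a short test run you can cap the wall time you request yourself, for example (the 10-minute limit here is only an illustration):
> salloc -N2 -n8 -p mpi --time=00:10:00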
Can parallel jobs be sent to any slurm partition?
No. The slurm mpi partition is where most parallel jobs are supported. This partition has a limit of 128 cores/cpus per job request.
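To check the current limits and node list of the mpi partition yourself, query slurm directly:
> scontrol show partition mpi
> sinfo -p mpi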
What mpi software is available?
OpenMPI, Mvapich, and Mvapich2 are available on the cluster as loadable modules. OpenMPI is the default slurm-supported MPI; the other types require a slurm option to override the default. Once the corresponding module is loaded, your environment will provide access to the various MPI compilers.
> module load openmpi
For example, OpenMPI provides the following:
mpic++ mpicxx mpicc mpiCC mpif77 mpif90
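As a quick check that the wrappers work, a minimal MPI program (the file name hello_mpi.c is just an example) can be compiled with mpicc and launched with srun:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this task's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of tasks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
> mpicc hello_mpi.c -o hello_mpi
> srun -n8 -p mpi ./hello_mpi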
...
> module load pgi
> module load mvapich2
> pgcc myname.c -o myname -Mmpi -I/opt/pgi/linux86-64/7.2-3/include/
> bsub -I -q parallel_public -a mvapich2 -n 8 mpirun.lsf ./myname
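Under slurm, the same mvapich2 job would be launched with srun using the MPI override option; the exact value depends on how mvapich2 was built on the cluster (pmi2 below is an assumption):
> srun -n8 -p mpi --mpi=pmi2 ./myname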
Where are all the Portland executables and what are their names?
When you load the module for Portland, all executables will be on your path.
> ls /opt/pgi/linux86-64/7.2-3/bin
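A quick way to confirm the Portland compilers are on your path once the module is loaded:
> module load pgi
> which pgcc pgf90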
...
How does one find out GPU-specific info?
> bsub -Ip -q short_gpu /opt/shared/gpucomputingsdk/4.2.9/C/bin/linux/release/deviceQuery
Another option:
> bsub -q short_gpu -o gpu_info.txt nvidia-smi -a
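Under slurm the same query can be run interactively; the partition name below is a placeholder for whatever the cluster's gpu partition is actually called:
> srun -p gpu nvidia-smi -a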
...