...

How do I run a compiled MPI-based program across two nodes using 8 cores?

> salloc -N2 -n8 -p mpi

> module load openmpi

> srun yourcode

 

This will run your executable on the mpi Slurm partition using OpenMPI. See the Slurm section of the wiki for further examples.
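If you prefer batch submission over an interactive allocation, a minimal job script along the same lines might look like this (a sketch using the same two-node, 8-core request; the script name below is a placeholder):

#!/bin/bash
#SBATCH -N 2
#SBATCH -n 8
#SBATCH -p mpi
module load openmpi
srun yourcode

Save it as, for example, runmpi.sh and submit it with:

> sbatch runmpi.sh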

...

No.  The mpi partition is the only option.

Can parallel jobs be sent to any Slurm partition?
No. Parallel jobs are only supported on the Slurm mpi partition, which has a limit of 128 cores (CPUs) per job request.
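To check the current limits on the mpi partition yourself (assuming the standard Slurm client tools are available, as in the examples above):

> scontrol show partition mpi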

...

Try the broadcast example found in the PGI directory tree:
/opt/shared/pgi/linux86-64/7.2-3/EXAMPLES/MPI/mpihello/myname.c

As an example, to compile the code before requesting 8 cores as shown above:

> module load pgi

> module load openmpi

> pgcc myname.c -o myname -Mmpi -I/opt/pgi/linux86-64/7.2-3/include/
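Once the code compiles, the resulting binary can be launched the same way as the two-node, 8-core example above (a sketch that reuses those allocation commands):

> salloc -N2 -n8 -p mpi

> srun myname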

Note that other versions of PGI and OpenMPI may be available via modules:

> module avail


Where are the Portland executables?
When you load the module for Portland, all executables will be on your path.
> ls /opt/shared/pgi/linux86-64/7.2-3/bin
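To confirm this after loading the module, a quick check (pgcc is used here only as a representative executable):

> module load pgi

> which pgcc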

...