The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (60+ trillion floating-point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40 Gb low-latency Ethernet

For additional information, please contact Research Technology Services at tts-research@tufts.edu



Installing PETSc in your home directory

PETSc is a complex suite of libraries and programs that is best installed in your home directory. A home-directory install gives you access to all of the makefiles corresponding to the optional external packages (hypre, SuperLU, ML, and so on) that you might download and add. A consolidated example session is sketched after the table of steps below.

Steps | Command/Task | Comments
0 | /cluster/shared/dmarshal/petsc-3.0.0-p8.tar | Where to find the distribution tarball
1 | PETSC_DIR=/your/build/dir; export PETSC_DIR | Create the PETSC_DIR environment variable, pointing at the build directory of your choice; the package will be configured and installed there
1a | cd $PETSC_DIR; cp /cluster/shared/dmarshal/petsc-3.0.0-p8.tar .; tar -xvf petsc-3.0.0-p8.tar | Copy in and unpack the distribution
1b | module load openmpi | Gives access to OpenMPI
1c | ./config/configure.py --download-f-blas-lapack=1 --download-hypre=1 --with-batch --download-superlu=1 --download-ml=1 --with-mpi-dir=/usr/mpi/gcc/openmpi-1.2.6/ --with-mpi-shared=1 | The configure command
1d | bsub ./conftest | Submit the configure test to one cluster node
1e | ./reconfigure.py | Run the reconfigure step to set up MPI support
2 | PETSC_ARCH=linux-gnu-c-debug; export PETSC_ARCH | Create the PETSC_ARCH environment variable
3 | make all | Compile and install
4 | bsub -Ip -q paralleltest_public -a openmpi -n 2 mpirun.lsf ./your-executable | Test an MPI job submission on 2 processes after you have built an executable (see the example below)
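Putting steps 1 through 3 together, a typical interactive build session looks roughly like the following. This is a minimal sketch, not the only valid layout: the build directory ~/petsc-build is only an example, and it assumes that PETSC_DIR ends up pointing at the top of the unpacked petsc-3.0.0-p8 source tree (the directory containing config/configure.py and, after the conftest job, reconfigure.py).

mkdir -p ~/petsc-build && cd ~/petsc-build               # example build location in your home directory
cp /cluster/shared/dmarshal/petsc-3.0.0-p8.tar .
tar -xvf petsc-3.0.0-p8.tar                               # unpacks into petsc-3.0.0-p8/
PETSC_DIR=~/petsc-build/petsc-3.0.0-p8; export PETSC_DIR  # point PETSC_DIR at the unpacked source tree
cd $PETSC_DIR
module load openmpi                                       # access to OpenMPI
./config/configure.py --download-f-blas-lapack=1 --download-hypre=1 \
  --with-batch --download-superlu=1 --download-ml=1 \
  --with-mpi-dir=/usr/mpi/gcc/openmpi-1.2.6/ --with-mpi-shared=1
bsub ./conftest                                           # run the configure test on a compute node
# wait for the conftest job to finish before continuing
./reconfigure.py                                          # finishes the MPI-aware configuration
PETSC_ARCH=linux-gnu-c-debug; export PETSC_ARCH
make all                                                  # compile and install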
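For step 4 you need an MPI executable to submit. One quick way to get one is to build one of the tutorial examples that ship with PETSc. This is only a sketch: the tutorial directory and example name used here (src/ksp/ksp/examples/tutorials and ex2) are assumptions that may differ between PETSc releases, so check the source tree you unpacked.

cd $PETSC_DIR/src/ksp/ksp/examples/tutorials    # KSP tutorial examples (path assumed for petsc-3.0.0)
make ex2                                        # builds the ex2 linear-solver example using the PETSc makefiles
bsub -Ip -q paralleltest_public -a openmpi -n 2 mpirun.lsf ./ex2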

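Because PETSC_DIR and PETSC_ARCH are needed every time you build against PETSc, you may want to set them in your shell startup file rather than retyping them each session. A sketch assuming a bash login shell and the example build location used above:

# add to ~/.bashrc (paths are the example values used above; adjust to your own install)
export PETSC_DIR=$HOME/petsc-build/petsc-3.0.0-p8
export PETSC_ARCH=linux-gnu-c-debug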