The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.
- Teraflops: 60+ (60+ trillion floating point operations per second)
- CPU: 4,000 cores
- GPU: 6,784 cores
- Interconnect: 40 GB low-latency Ethernet
PETSc installation
Installing PETSc in your home directory
PETSc is a complex suite of programs that is best installed in your home directory. This gives you access to all of the makefiles that correspond to the different downloaded packages you might add. Because of the size of the installation, the default home directory quota is not large enough to hold it. Please email cluster-support@tufts.edu describing your PETSc needs, and an increase to your home directory quota will be requested.
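Before emailing, it can help to know how much space you are currently using in your home directory. A minimal check, assuming the standard Linux `du`, `quota`, and `df` utilities are available on the login node:

```bash
# Report the total size of your home directory (may take a while on large trees)
du -sh "$HOME"

# Show any filesystem quota applied to your account; fall back to df if the
# quota tool is not installed on this node
quota -s 2>/dev/null || df -h "$HOME"
```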
| Step | Command/Task | Comments |
|---|---|---|
| 0 | /cluster/shared/dmarshal/ISO/petsc-lite-3.2-p2.tar | Where to find the distribution, or get the latest from http://www.mcs.anl.gov/petsc/petsc-2/download/index.html |
| 1 | `PETSC_DIR=/your/build/dir ; export PETSC_DIR` | Set the PETSC_DIR environment variable so the package is built in the /your/build/dir of your choice |
| 1a | `cd $PETSC_DIR ; cp /cluster/shared/dmarshal/ISO/petsc-lite-3.2-p2.tar .` | Note the trailing `.` at the end of the cp command |
| 1b | `module load openmpi` | Provides access to OpenMPI |
| 1c | `./config/configure.py --download-f-blas-lapack=1 --download-hypre=1` | The configure command |
| 1d | `bsub ./conftest` | Submit the configuration test to one cluster node |
| 1e | `./reconfigure.py` | Run the reconfigure script to set up MPI support |
| 2 | `PETSC_ARCH=linux-gnu-c-debug; export PETSC_ARCH` | Set the PETSC_ARCH environment variable |
| 3 | `make all` | Compile and install |
| 4 | `bsub -Ip -q paralleltest_public -a openmpi -n 2 mpirun.lsf` | Test MPI job submission on 2 nodes after you have built something |
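For convenience, the table's commands can be strung together into a single shell session. The sketch below is not an official build script: it simply repeats the steps above in order, assumes the tarball path, the `openmpi` module, and the LSF setup shown in the table are still current, and adds one extraction step (marked as an assumption) that the table does not show. Replace `/your/build/dir` with your own existing directory.

```bash
# Consolidated PETSc build sketch; commands taken from the table above.

# Step 1: point PETSC_DIR at the build directory of your choice
PETSC_DIR=/your/build/dir ; export PETSC_DIR

# Step 1a: copy the distribution tarball into the build directory
cd $PETSC_DIR ; cp /cluster/shared/dmarshal/ISO/petsc-lite-3.2-p2.tar .

# Assumption: the tarball must be unpacked before configuring; --strip-components=1
# places the sources directly under $PETSC_DIR so that config/ is where the
# next step expects it (requires GNU tar)
tar xf petsc-lite-3.2-p2.tar --strip-components=1

# Step 1b: load the OpenMPI module
module load openmpi

# Step 1c: configure, downloading BLAS/LAPACK and hypre
./config/configure.py --download-f-blas-lapack=1 --download-hypre=1

# Step 1d: submit the configuration test to one cluster node (LSF);
# wait for this batch job to finish before continuing
bsub ./conftest

# Step 1e: rerun configuration to pick up MPI support
./reconfigure.py

# Step 2: set the build architecture
PETSC_ARCH=linux-gnu-c-debug; export PETSC_ARCH

# Step 3: compile and install
make all
```

Step 4's interactive `bsub -Ip -q paralleltest_public -a openmpi -n 2 mpirun.lsf` submission is left out of the session above because it only makes sense once you have built an MPI executable to run.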
For additional information, please contact Research Technology Services at tts-research@tufts.edu