These instructions are specifically for users of RPI’s AMOS System (the IBM Blue Gene/Q at the CCI). They could be followed for any system behind a firewall, where direct access to GitHub is not allowed.
Get Source Code
You can’t clone directly from GitHub to the landing pad, so we’ll have to get the ROSS code another way:
- First we’ll create a directory and init a git repo on the CCI file system. Login to the landing pad.
```
$ cd barn
$ mkdir ROSS
$ cd ROSS
$ git init
$ git config receive.denyCurrentBranch ignore
```
- Now on your local machine, go to your ROSS directory. We need to create a remote branch that we can push to.
```
$ git remote add amos USERNAME@lp01.ccni.rpi.edu:/gpfs/u/home/PROJECT/USERNAME/barn/ROSS
$ git push amos master
```
I called the remote amos, but you can call it whatever you want. Also make sure to replace USERNAME and PROJECT with your own username and project name.
- Back on the landing pad, reset the working tree (the push updates the repository’s history, but not the checked-out files):

```
$ git reset --hard HEAD
```
WARNING: If you make changes to the code on the BG/Q, they will be overwritten if you push from your local machine to your CCI remote.
Build ROSS
- Assuming you’re still logged into the landing pad, we need to log in to the front end node of the BG/Q:
```
$ ssh q
$ cd barn
```
- Create a new ross-build directory and change to it:
```
$ mkdir ross-build
$ cd ross-build
```
- We need to load the xl module and set the appropriate variables:
```
$ module load xl
$ export ARCH=bgq
$ export CC=mpixlc
```
- We use CMake to build ROSS.
```
$ ccmake ../ROSS
```
You’ll want to change CMAKE_INSTALL_PREFIX to the directory where you want the ROSS installation files to go. If you’re using one of the ROSS models, you’ll want to set ROSS_BUILD_MODELS to ON.
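If you’d rather skip the interactive ccmake screen, the same options can be passed to plain cmake on the command line. This is just a sketch; the install path below is an example, so substitute your own directory:

```shell
# Non-interactive configure (run from the ross-build directory).
# The install prefix here is an example path; change it to suit.
cmake ../ROSS \
    -DCMAKE_INSTALL_PREFIX=$HOME/barn/ross-install \
    -DROSS_BUILD_MODELS=ON
```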
- Finally, we can build:
```
$ make
$ make install
```
Run Jobs
You can create a bash script with the run(s) you want to do. Here’s an example:
```
#!/bin/bash
#SBATCH --job-name=name-your-job
#SBATCH -D /gpfs/u/home/PROJ/USERNAME/scratch/phold-results
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your-email-address

srun -o my-run.log -N 128 --ntasks-per-node=64 --overcommit $HOME/barn/ross-build/models/phold/phold --synch=3 &

# wait for all backgrounded srun commands to finish,
# otherwise the batch script (and the job) exits immediately
wait
```
This runs PHOLD with 8192 PEs (128 nodes * 64 tasks per node). The lines beginning with #SBATCH set the job name, working directory, and email notifications. You’ll be notified by email when your job starts running and when it stops (whether it ends successfully or fails).
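The PE count comes straight from the two srun parameters; as a quick sanity check:

```shell
# Total PEs = nodes * tasks per node
NODES=128
TASKS_PER_NODE=64
echo $((NODES * TASKS_PER_NODE))   # prints 8192
```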
To submit this example job, do the following:
```
$ sbatch -p medium -N 128 -t 720 ./my-run-script.sh
```
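Once the job is submitted, the usual Slurm commands can be used to keep an eye on it (these are generic Slurm tools, not specific to ROSS or the CCI; the job ID below is a placeholder):

```shell
squeue -u $USER   # show your queued and running jobs
scancel 12345     # cancel a job by its job ID
```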
For more details about using the CCI BG/Q, you can refer to the Wiki.