====== Channelflow on trillian ======

Some extra instructions for installing and using channelflow on [[http://trillian-use.sr.unh.edu/index.php/Main_Page|trillian]], UNH's CRAY XE6m-200 supercomputer.

===== Installation =====

Log on to ''trillian.sr.unh.edu'' using ''ssh'', then run these commands.

1. Configure the Cray computing environment.
<code>
module unload cray-hdf5-parallel
module unload cce
module unload xt-asyncpe
module load gcc
module load fftw
module load cmake
</code>

2. Check out the channelflow source code into a ''~/svnrepos/channelflow'' directory.
<code>
mkdir ~/svnrepos
cd ~/svnrepos
svn co http://svn.channelflow.org/channelflow
</code>

3. Configure, compile, and install channelflow in a separate build directory.
<code>
mkdir ~/channelflow
mkdir ~/channelflow/build
cd ~/channelflow/build
cmake -DCMAKE_C_COMPILER=/opt/gcc/4.7.2/bin/gcc -DCMAKE_CXX_COMPILER=/opt/gcc/4.7.2/bin/g++ -DWITH_FFTW=/opt/fftw/3.3.0.2/x86_64/lib -DWITH_HDF5=~gibson/packages/hdf5-1.8.15/lib -DWITH_EIGEN3=~gibson/packages/eigen-3.2.4 -DCMAKE_INSTALL_PREFIX=~/channelflow ~/svnrepos/channelflow/trunk/
make
make test
make install
</code>

''make test'' should report "100% tests passed". If everything went well, you should now have the following directory structure for channelflow:

<code>
~/svnrepos/channelflow   # pristine channelflow source code from subversion
~/channelflow/build      # build directory, where channelflow was configured and compiled
~/channelflow/include    # include directory, with all the channelflow header files (e.g. flowfield.h)
~/channelflow/lib        # library directory, with the static and shared libraries libchflow.a and libchflow.so
</code>

===== Submitting jobs =====

If you start a computation by just typing a command at the command prompt, it will execute on trillian's login node. That's a no-no (except for software builds and quick tests)! Instead, you should run any long computation by submitting it to the [[https://hpcc.usc.edu/support/documentation/running-a-job-on-the-hpcc-cluster-using-pbs/PBS|PBS]] (Portable Batch System) job control system. PBS will then farm the job out to one of the compute nodes.

There are three main PBS commands:

  * **qsub** submits a job to the PBS queue
  * **qstat** lists the jobs in the queue
  * **qdel** deletes a job from the queue

''qsub'' can be a bit tedious and complicated to use, so I wrote some bash code as a simplified interface. Place the following in your ''~/.bashrc'' file (and then run ''source ~/.bashrc'' to make your current shell process load this code).

<code>
# make the channelflow programs findable
PATH=$PATH:~/channelflow/bin

# configure the Cray programming environment
module unload cray-hdf5-parallel
module unload cce
module unload xt-asyncpe
module load gcc
module load fftw
module load cmake
module load pbs

# qsubmit jobname command [args...] : write a small PBS script and submit it
function qsubmit() {
    tag=$1
    shift
    echo "#PBS -N $tag"                              > tmp.pbs
    echo "#PBS -l nodes=1:ppn=1,walltime=48:00:00"  >> tmp.pbs
    echo "#PBS -j oe"                               >> tmp.pbs
    echo "#PBS -m ae"                               >> tmp.pbs
    echo "cd $(pwd)"                                >> tmp.pbs
    echo aprun $*                                   >> tmp.pbs
    qsub tmp.pbs
}
</code>

This bit of code tells bash where your channelflow programs are, configures the Cray programming environment correctly, and then defines a ''qsubmit'' function, which can be used as follows:

<code>
cd ~/simulations/test-qsubmit
qsubmit qtest couette -R 500 u0.h5
</code>

This submits the command ''couette -R 500 u0.h5'' to the PBS queue with job name "qtest", in working directory ''~/simulations/test-qsubmit''.
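For reference, this is roughly what the generated ''tmp.pbs'' script looks like for the example above (the home-directory path shown is illustrative; ''$(pwd)'' expands to whatever directory you submit from):

<code>
#PBS -N qtest
#PBS -l nodes=1:ppn=1,walltime=48:00:00
#PBS -j oe
#PBS -m ae
cd /home/username/simulations/test-qsubmit
aprun couette -R 500 u0.h5
</code>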
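Once a job is submitted, you can keep an eye on it with the standard PBS commands listed above. A minimal sketch (the job ID ''12345'' is illustrative; with ''#PBS -j oe'' the merged output typically lands in a file named after the job, e.g. ''qtest.o12345'', in the submission directory):

<code>
qstat -u $USER     # list your jobs and their status
qdel 12345         # remove a job from the queue if something went wrong
cat qtest.o12345   # inspect the merged stdout/stderr once the job finishes
</code>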