1. I installed the software, how do I test if it is correctly installed?

Because different users have different settings and requirements for their clusters or workstations, we provide a general job handling script for you to customize to your needs.

To assist with job submission script customization, example input files are available under the $SILCSBIODIR/examples folder. For SILCS, use the following commands to make sure the software is correctly installed and the job handling script is working. If you are only interested in SSFEP simulations, you may skip to the SSFEP section below.

mkdir -p test/silcs
cd test/silcs
cp $SILCSBIODIR/examples/silcs/p38a.pdb .
$SILCSBIODIR/silcs/1_setup_silcs_boxes prot=p38a.pdb
$SILCSBIODIR/silcs/2a_run_gcmd prot=p38a.pdb nproc=1

If the script fails at the 1_setup_silcs_boxes step, the software is not correctly installed, whereas if it fails at the 2a_run_gcmd step, the job handling script needs to be edited. The job handling scripts for SILCS are:

• templates/silcs/job_mc_md.tmpl
• templates/silcs/job_gen_maps.tmpl
• templates/silcs/pymol_fragmap.tmpl
• templates/silcs/vmd_fragmap.tmpl
• templates/silcs/job_cleanup.tmpl
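
In most cases only the scheduler header at the top of these templates needs to be adapted to your cluster. As a purely illustrative sketch, assuming a SLURM-based queue, the edited header might contain lines such as the following (the partition name, task count, and GPU request are assumptions; match them to your site):

#SBATCH --partition=gpu      # name of the queue/partition on your cluster (assumed)
#SBATCH --ntasks=8           # match the nproc value passed to the SILCS scripts
#SBATCH --gres=gpu:1         # request a GPU, if your nodes provide them

On PBS or SGE clusters the corresponding #PBS or #$ directives would be used instead.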

For SSFEP, use the following commands to make sure the software is correctly installed and the job handling script is working.

mkdir -p test/ssfep
cd test/ssfep
cp $SILCSBIODIR/examples/ssfep/* .
$SILCSBIODIR/ssfep/1_setup_ssfep prot=4ykr.pdb lig=lig.mol2
$SILCSBIODIR/ssfep/2_run_md_ssfep prot=4ykr.pdb lig=lig.mol2 nproc=1

If the script fails at the 1_setup_ssfep step, the software is not correctly installed, whereas if it fails at the 2_run_md_ssfep step, the job handling script needs to be edited. The job handling scripts for SSFEP are:

• templates/ssfep/job_lig_md.tmpl
• templates/ssfep/job_prot_lig_md.tmpl
• templates/ssfep/job_dG.tmpl

Typically the header portion of the job submission script requires editing. Please contact support@silcsbio.com if you need assistance.

2. I don’t have a cluster but I have a GPU workstation. What can I do?

We recommend the use of compute clusters over workstations because of the computational resources required to perform the simulations (especially in the case of SILCS). The typical timing information given in the introduction section is based on using 10 compute nodes at the same time. However, as workstations with powerful GPUs become more common, questions about whether a single GPU workstation can be used as a compute node are increasing.

For users with a GPU workstation, we offer a simple queue system. This queue system is intended only for use on a single compute node and is designed to mimic a typical queue system, but it lacks more sophisticated features such as resource control and multiple-compute-node support. The queue system can be found on the download page. Download the files and place them in a folder that is in your PATH. If you have administrative privileges on the workstation, place the files in a widely accessible directory. Once the files are in place, start the scheduler with the following command:

scheduler --workers=<number of workers>

Set the number of workers to the number of parallel jobs that can run at the same time. For example, if the workstation has 2 GPUs, set the number of workers to 2. You will also need to adjust the nproc variable appropriately when using the SILCS or SSFEP job handling scripts. Typically, nproc = (total number of CPUs) / (number of workers). For example, if the workstation has 16 CPUs and 2 GPUs, you may want to run two jobs at the same time, each using 8 CPUs and one GPU. In this case, set the number of workers to 2 and the nproc variable to 8.

3. I compiled my GROMACS with MPI and my job is not running.

Please contact us so we can repackage the files with the appropriate command using mpirun. Alternatively, you may edit the job handling script to change the GROMACS command. For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt $nproc"

You may edit this to

mdrun="mpirun -np $nproc ${GMXDIR}/gmx mdrun"

4. GROMACS on the head node does not run because the head node and compute node have different operating systems and GROMACS was compiled on the compute node.

In this case, we recommend compiling GROMACS on the head node and compiling mdrun only on the compute node. Building only mdrun can be done by supplying the -DGMX_BUILD_MDRUN_ONLY=on keyword to the cmake command in the build process (see the sketch below). Once the mdrun program is built, place it in the same $GMXDIR folder. The template files then need to be edited to use this mdrun command on the compute node.
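
A minimal sketch of such an mdrun-only build on the compute node, assuming the GROMACS sources are unpacked in /path/to/gromacs-source (adjust the cmake options, e.g. -DGMX_GPU, to match your original GROMACS build):

cd /path/to/gromacs-source          # assumed location of the GROMACS source tree
mkdir build-mdrun-only && cd build-mdrun-only
cmake .. -DGMX_BUILD_MDRUN_ONLY=on -DGMX_GPU=on
make -j 4
cp bin/mdrun $GMXDIR/               # place mdrun next to the existing gmx binary

With mdrun in $GMXDIR, the template edit described next points the job scripts at this binary.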

For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt$nproc"

You may edit this to

mdrun="${GMXDIR}/mdrun -nt$nrpoc"

5. I got “error while loading shared libraries: libcudart.so.8.0: cannot open shared object file: No such file or directory” message during my setup.

If you encounter this error, the most likely culprit is that GROMACS was compiled on a machine with a GPU, whereas the machine where the command is being executed does not have one.

It is possible that the necessary library is already available on the machine even though it does not have a GPU, so check whether the libcudart.so file exists on the current machine. The most likely place is /usr/local/cuda/lib64. If the file exists in that location, add the folder to LD_LIBRARY_PATH.
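
For example, assuming the library is found in /usr/local/cuda/lib64, you can prepend that directory before rerunning the setup command:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH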

If the library is not available on the current machine, we recommend following FAQ #4 to compile GROMACS and mdrun separately.