Frequently Asked Questions
1. I installed the software. How do I test if it is correctly installed?
Because every user has different settings and requirements for their cluster or workstation, we are only able to provide a generic job handling script. Therefore, this job handling script must be edited and tested once the software is installed.
To assist with job submission script customization, example input files are available under the $SILCSBIODIR/examples folder.

For SILCS, use the following commands to make sure the software is correctly installed and the job handling script is working. If you are only interested in SSFEP simulations, you may skip to the SSFEP section below.
mkdir -p test/silcs
cd test/silcs
cp $SILCSBIODIR/examples/silcs/p38a.pdb .
$SILCSBIODIR/silcs/1_setup_silcs_boxes prot=p38a.pdb
$SILCSBIODIR/silcs/2a_run_gcmd prot=p38a.pdb nproc=1

If the script failed to run at the 1_setup_silcs_boxes step, the software is not correctly installed, whereas if the script failed to run at the 2a_run_gcmd step, the job handling script needs to be edited. The job handling scripts for SILCS are:
templates/silcs/job_mc_md.tmpl
templates/silcs/job_gen_maps.tmpl
templates/silcs/job_regen_maps.tmpl
templates/silcs/job_cleanup.tmpl
Typically the header portion of the job submission script needs to be edited. Please contact support@silcsbio.com if you need assistance.
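What the header looks like depends entirely on your queueing system, so the following is only an illustrative sketch. It assumes a SLURM cluster and uses made-up job, partition, and time values; the commands in the remainder of each template should be left unchanged.

#!/bin/bash
#SBATCH --job-name=silcs_mc_md   # hypothetical job name
#SBATCH --partition=standard     # replace with a partition available on your cluster
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8      # match the nproc value you pass to the SILCS scripts
#SBATCH --time=24:00:00          # wall-clock limit allowed by your cluster
# ... rest of the template (the commands it runs) stays as shipped ...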
For SSFEP, use the following commands to make sure the software is correctly installed and the job handling script is working.
mkdir -p test/ssfep
cd test/ssfep
cp $SILCSBIODIR/examples/ssfep/* .
$SILCSBIODIR/ssfep/1_setup_ssfep prot=4ykr.pdb lig=lig.mol2
$SILCSBIODIR/ssfep/2_run_md_ssfep prot=4ykr.pdb lig=lig.mol2 nproc=1

If the script failed to run at the 1_setup_ssfep step, the software is not correctly installed, whereas if the script failed to run at the 2_run_md_ssfep step, the job handling script needs to be edited. The job handling scripts for SSFEP are:
templates/ssfep/job_lig_md.tmpl
templates/ssfep/job_prot_lig_md.tmpl
templates/ssfep/job_dG.tmpl
Typically the header portion of the job submission script needs to be edited. Please contact support@silcsbio.com if you need assistance.
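As a quick sanity check before editing, assuming the templates ship under $SILCSBIODIR (the relative paths above suggest this, but your install layout may differ), you can confirm the environment variable is set and the template files are present:

echo "$SILCSBIODIR"
ls "$SILCSBIODIR"/templates/ssfep/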
2. I don’t have a cluster but I have a GPU workstation. What can I do?
We recommend the use of compute clusters over workstations, due to the computational resources required to perform simulations (especially in the case of SILCS). Typical timing information given in the introduction section is based on using 10 compute nodes all at the same time. However, as workstations with powerful GPUs are becoming more common, questions about whether a single GPU workstation can be used as a compute node are increasing.
For those users with only a GPU workstation, we offer a simple queue system. It is intended for use on a single compute node and is designed to mimic a typical queue system, but it lacks more sophisticated features such as resource control and multi-node support.
This queue system can be found on the download page. Download the files and place them in a folder that is in your PATH. If you have administrative privileges on the workstation, either place the files in a widely accessible location, e.g., /usr/bin, or place a soft link there to the install location. Once the files are in place, start up the scheduler with the following command:

scheduler --workers=<number of workers>

Set the number of workers to the number of parallel jobs that can run at the same time. For example, if the workstation has 2 GPUs, set the number of workers to 2.
This will require the user to adjust the nproc variable appropriately when using the SILCS or SSFEP job handling scripts. Typically,

nproc = (total number of CPUs) / (number of workers)

For example, if the workstation has 16 CPUs and 2 GPUs, you may want to run two jobs at the same time, each using 8 CPUs and one GPU. In this case, set the number of workers to 2 and the nproc variable to 8.
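Putting the 16-CPU, 2-GPU example together, a minimal sketch is shown below. It reuses the SILCS test job from FAQ #1; the same nproc setting applies to the SSFEP scripts.

# Start the simple queue system with one worker per GPU.
# (Run this in a separate terminal, or background it, if it stays in the foreground.)
scheduler --workers=2

# Submit work through the usual job handling scripts, giving each job
# its share of the CPUs: 16 CPUs / 2 workers = 8.
$SILCSBIODIR/silcs/2a_run_gcmd prot=p38a.pdb nproc=8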
3. I compiled my GROMACS with MPI and my job is not running.
Please contact us so we can repackage the files with the appropriate command using mpirun instead. Alternatively, you may edit the job handling scripts to change the GROMACS command yourself.
For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt $nproc"

You may edit this to:

mdrun="mpirun -np $nproc ${GMXDIR}/gmx mdrun"
4. GROMACS does not run on the head node because the head node and compute node have different operating systems and GROMACS was compiled on the compute node.
In this case, we recommend compiling GROMACS on the head node and compiling mdrun only on the compute node. Building only mdrun can be done by supplying the -DGMX_BUILD_MDRUN_ONLY=ON option to the cmake command during the build process. Once the mdrun program is built, place it in the same $GMXDIR folder.
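A minimal build sketch for the mdrun-only binary on the compute node is shown below. It assumes a GROMACS version that still supports the -DGMX_BUILD_MDRUN_ONLY=ON option (it was removed in recent GROMACS releases) and uses placeholder paths; adjust them to your setup.

# Configure an mdrun-only build in a separate build directory.
cd /path/to/gromacs-source            # hypothetical source location
mkdir build_mdrun && cd build_mdrun
cmake .. -DGMX_BUILD_MDRUN_ONLY=ON    # add e.g. -DGMX_GPU=ON if the compute node has GPUs (pre-2021 syntax)
make -j 8
# Copy or link the resulting mdrun binary into the same $GMXDIR folder.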
The template files then need to be edited to use the mdrun command properly on the compute node. For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt $nproc"

You may edit this to:

mdrun="${GMXDIR}/mdrun -nt $nproc"
5. I got an “error while loading shared libraries: libcudart.so.8.0: cannot open shared object file: No such file or directory” message during my setup.
If you encounter this error, the most likely culprit is that GROMACS was compiled on a machine that has a GPU, whereas the machine on which the command was executed does not have one.
However, it is possible the necessary library is already available on the machine even though it does not have a GPU. So, check whether the libcudart.so file exists on the current machine. The most likely place is /usr/local/cuda/lib64. If the file exists in that location, add the folder to LD_LIBRARY_PATH.

If the library is not available on the current machine, we recommend following FAQ #4 to compile GROMACS and mdrun separately.
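A short sketch of the check described above (bash syntax; the library location is the common default given above and may differ on your system):

# Look for the CUDA runtime library on this machine.
ls /usr/local/cuda/lib64/libcudart.so*

# If it is present, make the folder visible to GROMACS.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH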