Frequently Asked Questions

1. I installed the software. How do I test whether it is correctly installed?

Because different users have different settings and requirements for their clusters or workstations, we provide a general job handling script for you to customize to your needs.

To assist with job submission script customization, example input files are available under the $SILCSBIODIR/examples folder.

For SILCS, use the following commands to make sure the software is correctly installed and the job handling script is working. If you are only interested in SSFEP simulations, you may skip to the SSFEP section below.

mkdir -p test/silcs
cd test/silcs
cp $SILCSBIODIR/examples/silcs/p38a.pdb .
$SILCSBIODIR/silcs/1_setup_silcs_boxes prot=p38a.pdb
$SILCSBIODIR/silcs/2a_run_gcmd prot=p38a.pdb nproc=1

If the script fails at the 1_setup_silcs_boxes step, the software is not correctly installed; if it fails at the 2a_run_gcmd step, the job handling script needs to be edited. The job handling scripts for SILCS are:

  • templates/silcs/job_mc_md.tmpl

  • templates/silcs/job_gen_maps.tmpl

  • templates/silcs/pymol_fragmap.tmpl

  • templates/silcs/vmd_fragmap.tmpl

  • templates/silcs/job_cleanup.tmpl

Typically the header portion of the job submission script requires editing. Please contact support@silcsbio.com if you need assistance.
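For reference, on a cluster running the Slurm Workload Manager, the header portion of such a template might look like the following. The job name, partition, and resource requests are placeholders rather than values shipped with the software; substitute the settings appropriate for your site:

#!/bin/bash
#SBATCH --job-name=silcs_mc_md
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00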

For SSFEP, use the following commands to make sure the software is correctly installed and the job handling script is working.

mkdir -p test/ssfep
cd test/ssfep
cp $SILCSBIODIR/examples/ssfep/* .
$SILCSBIODIR/ssfep/1_setup_ssfep prot=4ykr.pdb lig=lig.mol2
$SILCSBIODIR/ssfep/2_run_md_ssfep prot=4ykr.pdb lig=lig.mol2 nproc=1

If the script fails at the 1_setup_ssfep step, the software is not correctly installed; if it fails at the 2_run_md_ssfep step, the job handling script needs to be edited. The job handling scripts for SSFEP are:

  • templates/ssfep/job_lig_md.tmpl

  • templates/ssfep/job_prot_lig_md.tmpl

  • templates/ssfep/job_dG.tmpl

Typically the header portion of the job submission script requires editing. Please contact support@silcsbio.com if you need assistance.

2. I don’t have a cluster but I have a GPU workstation. What can I do?

You may be able to run the SilcsBio software effectively if your GPU workstation has sufficient resources. An appropriate workstation should have at least 24 CPU cores, 4 GPUs, 64 GB of RAM, and 10 TB of disk space. Installing a job queueing system, such as the Slurm Workload Manager, will allow the SilcsBio server software to run on the workstation.
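Once the queueing system is installed, a quick sanity check (assuming Slurm) is to confirm that the workstation accepts and runs jobs:

sinfo
srun -n 1 hostname

sinfo should list the workstation's partition as available, and srun should print the machine's hostname.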

The SilcsBio Workstation is a turn-key GPU workstation hardware+software solution developed by SilcsBio that comes with all necessary software pre-installed. The SilcsBio Workstation has a quiet, sleek form factor for use in an office setting and comes ready to plug into a standard wall electrical socket. Please contact info@silcsbio.com for details.

3. I compiled my GROMACS with MPI and my job is not running

Please contact us so that we can repackage the files with the appropriate mpirun-based command.

Alternatively, you may edit the job handling script yourself to change the GROMACS command.

For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt $nproc"

You may edit this to:

mdrun="mpirun -np $nproc ${GMXDIR}/gmx mdrun"

4. GROMACS on the head node does not run because the head node and compute node have different operating systems.

In this case, we recommend compiling the full GROMACS package on the head node and compiling only mdrun on the compute node.

Building only mdrun is done by supplying the -DGMX_BUILD_MDRUN_ONLY=on option to the cmake command in the build process. Once the mdrun program is built, place it in the same $GMXDIR folder as the gmx binary.
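As a sketch, an mdrun-only build on the compute node might look like the following. The GROMACS version and build options are examples only; match them to your installation and add any GPU or FFT options you normally use:

cd gromacs-2018.8
mkdir build-mdrun
cd build-mdrun
cmake .. -DGMX_BUILD_MDRUN_ONLY=on -DGMX_GPU=on
make -j 4
cp bin/mdrun $GMXDIR/

The template files then need to be edited so that the mdrun command runs properly on the compute node.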

For example, the mdrun command is specified at the top of the templates/ssfep/job_lig_md.tmpl file:

mdrun="${GMXDIR}/gmx mdrun -nt $nproc"

You may edit this to:

mdrun="${GMXDIR}/mdrun -nt $nrpoc"

5. I get an “error while loading shared libraries: libcudart.so.8.0: cannot open shared object file: No such file or directory” message during my setup.

If you encounter this error, the most likely culprit is that GROMACS was compiled on a machine with a GPU, but the machine where the command is being executed does not have one.

The necessary library may still be available on the machine even though it has no GPU. Check whether the libcudart.so file exists on the current machine; the most likely place is /usr/local/cuda/lib64. If the file exists in that location, add the folder to LD_LIBRARY_PATH.
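For example, assuming the default CUDA installation location:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH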

If the library is not available on the current machine, we recommend following FAQ #4 to compile GROMACS and mdrun separately.

6. I want to modify the force field and topology files for a SILCS simulation

If you need to add extra bonds that are not present in the standard force field definitions, the following procedure makes the necessary modifications. As a specific example, some proteins contain two adjacent metal ions between which it may be useful to place a bond: the protein 3bi0 has two adjacent Zn ions, and a bond may be added to restrain the distance between them to that in the crystal structure. See the GROMACS documentation for details on modifying the .top and ffbonded.itp files.

First, run the following command. This will copy the base force field files and generate an initial topology file in the 1_setup folder so that you can edit them.

$SILCSBIODIR/silcs/1_setup_silcs_boxes prot=<prot PDB>

Then edit the force field parameter file 1_setup/charmm36.ff/ffbonded.itp.
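For instance, a bond type for a Zn-Zn bond could be added under the existing [ bondtypes ] directive. The bond length and force constant below are hypothetical placeholders, and the ZN atom type name should be checked against the force field files for your system:

[ bondtypes ]
; i     j   func   b0 (nm)   kb (kJ/mol/nm^2)
  ZN    ZN     1   0.3000    50000.0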

If you want to modify the topology file (e.g., to add an explicit bond between the ions), first copy 1_setup/<prot>_gmx.top.1.bak to 1_setup/<prot>_gmx.top. Then edit <prot>_gmx.top and add the required bond between the two ions under the [ bonds ] directive.
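A corresponding entry under [ bonds ] might look like the following, where the two numbers are the atom indices of the Zn ions in the topology's [ atoms ] listing; the indices shown here are hypothetical:

[ bonds ]
;   ai    aj  func
  4821  4822     1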

Once the files are edited, re-run the 1_setup command with the skip_pdb2gmx=true keyword. This will preserve your edits and create the necessary files to run the SILCS simulations.

$SILCSBIODIR/silcs/1_setup_silcs_boxes prot=<prot PDB> skip_pdb2gmx=true

Once the second setup run completes, run the $SILCSBIODIR/silcs/2a_run_gcmd script with the appropriate variables to initiate the SILCS simulations.

7. I want to visualize FragMaps using MOE

FragMaps are generated as grid files in the MAP format, which is not supported by MOE. To display these grids in MOE, we provide a script that converts the MAP file format to the CNS file format, which MOE supports.

$SILCSBIODIR/utils/map_to_cns.sh prot=<protein PDB> mapsdir=<FragMap directory>

The above command will convert all .map files in the mapsdir to .cns format files.