In this Masterclass, we will discuss how to perform and analyze umbrella sampling simulations. We will learn how to introduce a bias potential with PLUMED, how to compute free energy landscapes, and how to reweight the resulting ensembles. We will also understand how to compute statistical errors on the computed quantities.
Once you have completed this Masterclass you will be able to:
If you have not yet set up PLUMED, you can find information about installing it in the section Setting up the software of PLUMED Masterclass 21.1: PLUMED syntax and analysis.
Once you have installed PLUMED, you will need to install GROMACS as well. In particular, you will need a special version of GROMACS that has been patched with PLUMED. You can obtain it using conda with the following command
conda install --strict-channel-priority -c plumed/label/masterclass -c conda-forge gromacs
The --strict-channel-priority flag might be necessary if your conda installation is configured to download packages from the bioconda channel. Indeed, bioconda contains a version of GROMACS that is not patched with PLUMED and would thus not work here.
On Linux, the command above should install the following packages:
gromacs    plumed/label/masterclass/linux-64::gromacs-2019.6-h3fd9d12_0
libclang   conda-forge/linux-64::libclang-11.0.1-default_ha53f305_1
libevent   conda-forge/linux-64::libevent-2.1.10-hcdb4288_3
libhwloc   conda-forge/linux-64::libhwloc-1.11.13-h3c4fd83_0
libllvm11  conda-forge/linux-64::libllvm11-11.0.1-hf817b99_0
libpq      conda-forge/linux-64::libpq-12.3-h255efa7_3
[ etc ... ]
The exact versions might be different. Notice however that GROMACS comes from the plumed/label/masterclass
channel, whereas the required libraries come from the conda-forge
channel. To be sure the installed GROMACS is patched with PLUMED, try the following shell command:
gmx mdrun -h 2> /dev/null | grep -q plumed && echo ok
It should print ok.
Please ensure that you have set up PLUMED and GROMACS on your machine before starting the exercises. Also notice that, in order to obtain good performance, it is better to compile GROMACS from source on the machine where you run your simulations. You can find in the PLUMED documentation how to patch GROMACS with PLUMED so as to be able to install it from source. For this tutorial, the conda precompiled binaries will be sufficient.
The data needed to execute the exercises of this Masterclass can be found on GitHub. You can clone this repository locally on your machine using the following command:
git clone https://github.com/plumed/masterclass-21-3.git
The data you need is in the folder called data. You will find the following files in that folder:
- topolA.tpr: an input file that can be used to run a GROMACS simulation of alanine dipeptide starting from one of the two main free-energy minima
- topolB.tpr: same as topolA.tpr, but starting from the other minimum
- wham.py: a python script that can be used to perform binless WHAM analysis

Notice that PLUMED input files have not been provided in the GitHub repository. You must prepare these input files yourself using the templates below.
We recommend that you run each exercise in a separate sub-directory inside the root directory masterclass-21-3.
Throughout this tutorial we will run simulations of alanine dipeptide in vacuum using GROMACS and PLUMED. While this system is too simple to be considered a proper benchmark for enhanced sampling methods, it is complex enough to be useful for learning them. Some of the commands below are specific to GROMACS, but all the PLUMED input files are compatible with other MD engines as well.
In PLUMED Masterclass 21.1: PLUMED syntax and analysis we learned how to analyze trajectories a posteriori. One of the nice features of PLUMED is that the very same analysis can be done on the fly. In other words, you can compute your collective variables while GROMACS is running instead of waiting for the simulation to end. This can be convenient if you want to run a large number of simulations, you already know what to compute, and you do not want to use too much disk space.
To run a simulation with GROMACS you have to type this command in the shell:
gmx mdrun -plumed plumed.dat -s topolA.tpr -nsteps 200000 -x traj_unbiased.xtc
Notice that the file topolA.tpr contains all the relevant information (simulation parameters, initial conditions, etc.). In this tutorial we will just need to play with the number of steps (200000 in the example above) and with the name of the trajectory file saved by GROMACS (traj_unbiased.xtc in the example above).
Also consider that GROMACS and PLUMED are not going to delete your files but will make backups when you try to overwrite them. GROMACS backup files start with the # prefix, whereas PLUMED backup files start with the bck. prefix. Both GROMACS and PLUMED will complain if you try to create too many backups of the same file. It is thus recommended to regularly clean your directory with a command such as rm -f \#* bck.*
The command above will only succeed if a file named plumed.dat exists in the current directory. We already know how to create such a file from PLUMED Masterclass 21.1: PLUMED syntax and analysis. In PLUMED Masterclass 21.2: Statistical errors in MD we also learned how to compute histograms. You should thus be able to complete the template below and put it in a file named plumed.dat.
# vim:ft=plumed
MOLINFO STRUCTURE=reference.pdb
phi: TORSION ATOMS=__FILL__ # use MOLINFO shortcuts to identify the phi angle of the second residue
psi: TORSION ATOMS=__FILL__ # use MOLINFO shortcuts to identify the psi angle of the second residue
# use the command below to compute the histogram of phi
# we use a smooth kernel to produce a nicer graph here
# notice that when asking for numbers PLUMED is happy to accept strings such as "pi" meaning 3.14...
# also arithmetic is allowed (e.g. 2*pi could be used if necessary)
hhphi: HISTOGRAM ARG=__FILL__ STRIDE=100 GRID_MIN=-pi GRID_MAX=__FILL__ GRID_BIN=600 BANDWIDTH=0.05
ffphi: CONVERT_TO_FES GRID=hhphi # no need to set TEMP here, PLUMED will obtain it from GROMACS
DUMPGRID GRID=__FILL__ FILE=fes_phi.dat STRIDE=200000 # stride is needed here since PLUMED does not know when the simulation is over
# now add three more lines to compute and dump the free energy as a function of psi on a file named fes_psi.dat
__FILL__
PRINT __FILL__ # use this command to write phi and psi on a file named colvar.dat, every 100 steps
You can then monitor what happened during the simulation using a short python script (a minimal sketch is given after the next paragraph).
In this manner you will be able to see (a) the time series of \( \phi \) and \( \psi \), (b) which portion of the \((\phi,\psi)\) space has been explored, (c) the free energy as a function of \( \phi \), and (d) the free energy as a function of \( \psi \). In particular, you should be able to identify two minima separated by a moderate barrier, and several transitions between these minima should be visible on the simulated time scale.
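For reference, here is a minimal plotting sketch, assuming numpy and matplotlib are available; the column layout (time, phi, psi in colvar.dat; grid point and free energy as the first two columns of fes_phi.dat and fes_psi.dat) reflects the PRINT and DUMPGRID commands in the template above.

import numpy as np
import matplotlib.pyplot as plt

# colvar.dat: header lines start with "#!"; columns are time, phi, psi (see the PRINT command above)
colvar = np.loadtxt("colvar.dat", comments="#")
# fes_phi.dat / fes_psi.dat: first column is the grid point, second the free energy
fes_phi = np.loadtxt("fes_phi.dat", comments="#")
fes_psi = np.loadtxt("fes_psi.dat", comments="#")

fig, ax = plt.subplots(2, 2, figsize=(10, 6))
ax[0, 0].plot(colvar[:, 0], colvar[:, 1], ".", label="phi")
ax[0, 0].plot(colvar[:, 0], colvar[:, 2], ".", label="psi")
ax[0, 0].set_xlabel("time")
ax[0, 0].legend()
ax[0, 1].plot(colvar[:, 1], colvar[:, 2], ".")
ax[0, 1].set_xlabel("phi")
ax[0, 1].set_ylabel("psi")
ax[1, 0].plot(fes_phi[:, 0], fes_phi[:, 1])
ax[1, 0].set_xlabel("phi")
ax[1, 0].set_ylabel("F(phi)")
ax[1, 1].plot(fes_psi[:, 0], fes_psi[:, 1])
ax[1, 1].set_xlabel("psi")
ax[1, 1].set_ylabel("F(psi)")
plt.tight_layout()
plt.show()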
We are now ready to use PLUMED to perform the task it was originally designed for: biasing a simulation on the fly. We will first try to add simple bias potentials that change the balance between the two minima we obtained in the previous exercise. This is not particularly useful here, since both minima were sampled anyway, but it is instructive because we will be able to test the same type of analysis we did in PLUMED Masterclass 21.2: Statistical errors in MD. The two minima observed in the previous exercise are located at \( \phi \) approximately equal to -2.5 and -1.5, separated by a barrier located at \( \phi \) approximately equal to -2. We can predict that adding a bias potential in the form -A*sin(x+2)
, with A positive and large enough, should favor the minimum at -1.5 at the expense of the other. We can try with A=10 using an input file like the following one:
# vim:ft=plumed
__FILL__ # compute phi and psi here, as in the previous input file
# fill in with the required function (i.e. -10*sin(phi+2)):
f: CUSTOM ARG=phi FUNC=__FILL__ PERIODIC=NO
# this command adds a bias potential equal to f
BIASVALUE ARG=f
# here you can paste the same HISTOGRAM/CONVERT_TO_FES/DUMPGRID commands that you used in
# the previous exercise. let's just write the results on different files
hhphi: __FILL__
ffphi: __FILL__
DUMPGRID FILE=fes_phi_biased1.dat __FILL__
hhpsi: __FILL__
ffpsi: __FILL__
DUMPGRID FILE=fes_psi_biased1.dat __FILL__
lw: REWEIGHT_BIAS
# and here we do the same again, but this time using LOGWEIGHTS.
# these free energies will be printed on files fes_phi_biased1r.dat and
# fes_psi_biased1r.dat and will be reweighted so as to be unbiased
hhphir: __FILL__ LOGWEIGHTS=lw
ffphir: __FILL__
DUMPGRID FILE=fes_phi_biased1r.dat __FILL__
hhpsir: __FILL__ LOGWEIGHTS=lw
ffpsir: __FILL__
DUMPGRID FILE=fes_psi_biased1r.dat __FILL__
PRINT __FILL__ # monitor what's happening, as before, writing on file plumed_colvar1.dat
Call this file plumed_biased1.dat
and run the simulation using this command:
gmx mdrun -plumed plumed_biased1.dat -s topolA.tpr -nsteps 200000 -x traj_comp_biased1.xtc
Notice that we are storing the trajectory on a separate file. We will need both the unbiased and the biased trajectory later.
You can then monitor what happened during the simulation with an analysis similar to the previous one (a plotting sketch is given after the next paragraph).
The free-energy plots here will include three lines: (a) the original one (as obtained from the previous exercise), (b) the one sampled in this second simulation, where the minimum at higher \( \phi \) appears more stable, and (c) the reweighted free energy from the second simulation, which should look closer to the original one. Notice that the relationship between plots (b) and (c) is straightforward when the analyzed variable is \( \phi \): the third line could have been obtained by simply subtracting the bias potential \( -10 \sin(x+2) \) from the second line. However, when analyzing \( \psi \) the effect is less obvious.
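A sketch of this comparison for \( \phi \) is given below, again assuming numpy and matplotlib; the last curve checks the relationship just discussed by adding the bias potential to the unbiased profile (profiles are compared up to an additive constant, since free energies are defined up to a shift).

import numpy as np
import matplotlib.pyplot as plt

fes = np.loadtxt("fes_phi.dat", comments="#")            # (a) unbiased run
fes_b = np.loadtxt("fes_phi_biased1.dat", comments="#")  # (b) biased run
fes_r = np.loadtxt("fes_phi_biased1r.dat", comments="#") # (c) biased run, reweighted

phi = fes[:, 0]
plt.plot(phi, fes[:, 1] - fes[:, 1].min(), label="(a) unbiased")
plt.plot(phi, fes_b[:, 1] - fes_b[:, 1].min(), label="(b) biased")
plt.plot(phi, fes_r[:, 1] - fes_r[:, 1].min(), label="(c) reweighted")
# analytic check: (b) should coincide with (a) plus the bias -10*sin(phi+2), up to a constant
check = fes[:, 1] - 10 * np.sin(phi + 2)
plt.plot(phi, check - check.min(), "--", label="(a) + bias")
plt.xlabel("phi")
plt.ylabel("free energy (kJ/mol)")
plt.legend()
plt.show()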
We will now run a new simulation with an even stronger bias, namely with prefactor A=20. This will favor even more the minimum at higher values of phi. Please prepare a plumed_biased2.dat input file that is identical to plumed_biased1.dat except for:
- the function used in the bias, which should now correspond to A=20 (i.e. -20*sin(phi+2))
- the names of the output files, which should carry a 2 suffix (e.g. fes_phi_biased2.dat)
Then run the new simulation with the following command:
gmx mdrun -plumed plumed_biased2.dat -s topolA.tpr -nsteps 200000 -x traj_comp_biased2.xtc
You can once more analyze this simulation as you did for the previous one. You should notice that the peak at higher \( \phi \) is even more favored now.
We will now use WHAM to combine the three simulations we have done so far, namely: (a) unbiased (traj_unbiased.xtc), (b) biased (traj_comp_biased1.xtc), and (c) more biased (traj_comp_biased2.xtc).
The first thing we have to do is to concatenate the three files:
gmx trjcat -cat -f traj_unbiased.xtc traj_comp_biased1.xtc traj_comp_biased2.xtc -o traj_comp_cat.xtc
Notice that this new trajectory contains all the simulated frames, and we are not explicitly tracking which of the three simulations each frame originated from. We then have to compute the bias that would have been felt, in each of the three runs above, by each of the frames in the concatenated trajectory. We could do this by reusing the three plumed input files we used above, but it is perhaps clearer to do it with a single input file in this case. You can use an input like this one:
# vim:ft=plumed
MOLINFO STRUCTURE=reference.pdb
phi: TORSION ATOMS=@phi-2
psi: TORSION ATOMS=@psi-2
# here we list the bias potentials that we used in the three simulations we want to combine
b0: CUSTOM ARG=phi FUNC=0.0 PERIODIC=NO
b1: CUSTOM ARG=phi FUNC=__FILL__ PERIODIC=NO # fill here with the bias you used in the first biased simulation
b2: CUSTOM ARG=phi FUNC=__FILL__ PERIODIC=NO # fill here with the bias you used in the second biased simulation
PRINT ARG=phi,psi,b0,b1,b2 FILE=biases.dat
If you call it plumed_wham.dat, you can then use the following command:
plumed driver --plumed plumed_wham.dat --ixtc traj_comp_cat.xtc
We are now ready to use binless WHAM to compute the weights associated with the concatenated trajectory. This can be done with a short python script based on the wham.py file provided in the data folder; a self-contained sketch of the underlying algorithm is given below.
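The repository provides the wham.py script for exactly this task; since its interface is not reproduced here, the following is a self-contained sketch of the binless WHAM self-consistent iteration instead. It assumes that biases.dat has the layout produced by the PRINT command above (header lines starting with #!, followed by columns time, phi, psi, b0, b1, b2) and that kT = 2.4943387854 kJ/mol, the same value passed to the driver below. If you save it as, say, wham_sketch.py (a name chosen here for convenience), the wham() function can be reused later.

import numpy as np

def wham(bias, kbt=2.4943387854, niter=1000, tol=1e-10):
    """Binless WHAM: bias is an (nframes, nbiases) array with the bias energy (kJ/mol)
    felt by every frame of the concatenated trajectory in each run; returns log-weights."""
    # shift each bias column for numerical stability, then take Boltzmann factors
    expv = np.exp(-(bias - bias.min(axis=0)) / kbt)
    z = np.ones(bias.shape[1])  # per-simulation normalization factors
    for _ in range(niter):
        # unbiased weight of each frame given the current normalizations
        w = 1.0 / np.dot(expv, 1.0 / z)
        w /= w.sum()
        # update the per-simulation normalizations from the current weights
        znew = np.dot(w, expv)
        znew /= znew.sum()
        eps = np.max(np.abs(np.log(znew / z)))
        z = znew
        if eps < tol:
            break
    w = 1.0 / np.dot(expv, 1.0 / z)
    return np.log(w / w.sum())

if __name__ == "__main__":
    data = np.loadtxt("biases.dat", comments="#")  # time, phi, psi, b0, b1, b2
    logw = wham(data[:, 3:6])
    # write a PLUMED-readable (COLVAR-style) file for the READ-based analysis below
    with open("bias_wham.dat", "w") as f:
        f.write("#! FIELDS time phi psi logweights\n")
        for t, phi, psi, lw in zip(data[:, 0], data[:, 1], data[:, 2], logw):
            f.write(f"{t} {phi} {psi} {lw}\n")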
With the procedure above we computed the weights that can be associated with the concatenated trajectory so as to recover an unbiased distribution. Notice that, although we did not explicitly enforce it, the weights only depend on the value of \( \phi \). This is because only \( \phi \) was biased here. Thus, if we plot the weights as a function of \( \phi \), all the points will collapse onto a single curve. This will not happen if we plot the weights as a function of \( \psi \):
Also notice the form of the weights as a function of \( \phi \). This graph tells us that points at larger \( \phi \) are weighted less. This makes sense, since in two simulations out of three we were biasing the ensemble so as to visit those points more frequently than in the Boltzmann ensemble. However, the curve is not exactly a sine function. This precise curve can only be obtained by solving the WHAM equations self-consistently.
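A quick way to visualize this, assuming bias_wham.dat was written with the layout used in the sketch above (time, phi, psi, logweights):

import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("bias_wham.dat", comments="#")  # time, phi, psi, logweights
phi, psi, logw = data[:, 1], data[:, 2], data[:, 3]

fig, ax = plt.subplots(1, 2, figsize=(10, 4))
ax[0].plot(phi, np.exp(logw), ".")
ax[0].set_xlabel("phi")
ax[0].set_ylabel("weight")
ax[1].plot(psi, np.exp(logw), ".")
ax[1].set_xlabel("psi")
ax[1].set_ylabel("weight")
plt.tight_layout()
plt.show()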
We can then read the obtained weights and use them in an analysis similar to the one we have done above
# vim:ft=plumed
phi: READ FILE=bias_wham.dat VALUES=__FILL__
psi: READ FILE=bias_wham.dat VALUES=__FILL__
lw: READ FILE=bias_wham.dat VALUES=__FILL__
hhphi: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05
ffphi: CONVERT_TO_FES GRID=hhphi
DUMPGRID GRID=ffphi FILE=fes_phi_cat.dat
hhpsi: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05
ffpsi: CONVERT_TO_FES GRID=hhpsi
DUMPGRID GRID=ffpsi FILE=fes_psi_cat.dat
# we use a smooth kernel to produce a nicer graph here
hhphir: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw
ffphir: CONVERT_TO_FES GRID=hhphir
DUMPGRID GRID=ffphir FILE=fes_phi_catr.dat
hhpsir: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw
ffpsir: CONVERT_TO_FES GRID=hhpsir
DUMPGRID GRID=ffpsir FILE=fes_psi_catr.dat
Use the template above to produce a file named plumed_wham.dat and run the following command:
plumed driver --noatoms --plumed plumed_wham.dat --kt 2.4943387854
You can now plot the results with a script analogous to the previous ones.
When you look at the first graph (i.e., the concatenated time series) you should notice that, as you proceed towards the end, the frames come from the most biased simulation and thus have systematically larger values of \( \phi \). In the end, the reweighted ensemble will be quite similar to the original one.
So far we have only considered the fast transition between the two minima at negative values of \( \phi \). However, alanine dipeptide has another metastable state at positive \( \phi \), which is separated from the minima we have seen by a large barrier. In principle, one could use the trick above to discount the effect of any arbitrary biasing potential. For instance, if one had an estimate of a relevant free-energy barrier, a bias potential approximately equal to the negative of the barrier would make the residual free energy essentially barrierless. However, this procedure requires knowing the free-energy profile in advance. As we will see in the next Masterclass, metadynamics provides a self-healing method that adjusts the bias until the dynamics becomes diffusive.
In the context of the static potentials that we are using here, this is a very difficult approach. In the tutorial Lugano tutorial: Using restraints you can find an attempt to derive a potential that cancels the large barrier and leads to transitions between the important metastable states. In this class we will take a more pragmatic approach and directly introduce the method that is commonly used in these situations, namely "multiple windows umbrella sampling".
To do so we will have to run a number of separate simulations, each with a bias potential designed to keep the \( \phi \) variable in a narrow range. If consecutive windows overlap sufficiently, and if the windows span the relevant region of the \( \phi \) variable, we will be able to reconstruct the full free-energy profile.
We will use 32 simulations, with RESTRAINT potentials centered at uniformly spaced values of \( \phi \) in the range between -pi and pi. In order to avoid repeating the first potential, recalling that \( \phi \) is periodic, we can generate the list of centers with a command like the one sketched below.
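For instance, using numpy (an assumption; any equivalent tool works), 32 equally spaced centers can be generated in the interval [-pi, pi), excluding +pi, which is equivalent to -pi:

import numpy as np
# 32 equally spaced centers in [-pi, pi), excluding the periodic endpoint +pi
at = np.linspace(-np.pi, np.pi, 32, endpoint=False)
print(" ".join(f"{x:.5f}" for x in at))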
Now you should write a python script that generates 32 plumed input files (you can call them plumed_0.dat, plumed_1.dat, etc.) that look like this one (a possible generation script is sketched after the template):
# vim:ft=plumed
MOLINFO STRUCTURE=reference.pdb
phi: TORSION ATOMS=__FILL__
psi: TORSION ATOMS=__FILL__
bb: RESTRAINT ARG=phi KAPPA=200.0 AT=__FILL__
PRINT ARG=phi,psi,bb.bias FILE=__FILL__ STRIDE=100
# make sure that each simulation writes on a different file
# e.g. colvar_multi_0.dat, colvar_multi_1.dat, etc.
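A possible generation script is sketched below. The @phi-2 and @psi-2 MOLINFO shortcuts are the same ones used in the bias-combination input above, and the colvar_multi_*.dat file names follow the convention suggested in the comments; adapt as needed.

import numpy as np

# restraint centers, as generated above
at = np.linspace(-np.pi, np.pi, 32, endpoint=False)

template = """# vim:ft=plumed
MOLINFO STRUCTURE=reference.pdb
phi: TORSION ATOMS=@phi-2
psi: TORSION ATOMS=@psi-2
bb: RESTRAINT ARG=phi KAPPA=200.0 AT={at}
PRINT ARG=phi,psi,bb.bias FILE=colvar_multi_{i}.dat STRIDE=100
"""

for i, a in enumerate(at):
    with open(f"plumed_{i}.dat", "w") as f:
        f.write(template.format(at=a, i=i))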
Now run the 32 simulations with commands like these:
gmx mdrun -plumed plumed_0.dat -s topolA.tpr -nsteps 200000 -x traj_comp_0.xtc
gmx mdrun -plumed plumed_1.dat -s topolA.tpr -nsteps 200000 -x traj_comp_1.xtc
# etc.
As we did before, you should now concatenate the resulting trajectories
gmx trjcat -cat -f traj_comp_[0-9].xtc traj_comp_[0-9][0-9].xtc -o traj_multi_cat.xtc
and analyze them with plumed driver:
plumed driver --plumed plumed_0.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100
plumed driver --plumed plumed_1.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100
# etc.
Notice the --trajectory-stride option: we are telling the driver that the trajectory was saved every 100 steps. In this manner, we are able to recycle the same plumed input files that we used while running the simulations. This approach is more robust than the one we used above, where a single plumed file was used to reproduce all the biases, and is thus recommended in production cases. Also notice that it is common practice to ignore the initial part of each simulation, since it typically contains a transient that would make the final result less robust. For simplicity, we are not doing that here. Finally, notice that in Exercise 3: Combining statistics from biased and unbiased simulations we wrote a new input file just for computing the three biases; here, instead, we recycle the input files used to run the 32 simulations.
We can now read the produced files and process them with our WHAM script (a sketch is given below).
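A possible sketch of this step is shown here, reusing the wham() function from the earlier sketch (assumed to be saved as wham_sketch.py, a hypothetical name) and assuming that each colvar_multi_*.dat file contains the columns time, phi, psi and bb.bias for every frame of the concatenated trajectory. The output file name bias_multi_wham.dat is only a suggestion.

import numpy as np
from wham_sketch import wham  # the binless WHAM sketch from Exercise 3 (hypothetical module name)

nwin = 32
# bias felt by every frame of the concatenated trajectory in each of the 32 windows
bias = np.stack(
    [np.loadtxt(f"colvar_multi_{i}.dat", comments="#")[:, 3] for i in range(nwin)],
    axis=-1,
)
ref = np.loadtxt("colvar_multi_0.dat", comments="#")
phi, psi = ref[:, 1], ref[:, 2]
logw = wham(bias)
# write the weights in a PLUMED-readable file for a READ-based analysis, as before
with open("bias_multi_wham.dat", "w") as f:
    f.write("#! FIELDS time phi psi logweights\n")
    for i in range(len(phi)):
        f.write(f"{ref[i, 0]} {phi[i]} {psi[i]} {logw[i]}\n")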
Now you can do something similar to what we did before, namely use the resulting weights to compute the reweighted free energy as a function of \( \phi \) and of \( \psi \).
How do the resulting free-energy profiles compare with the ones we obtained before?
Now that you have a simulation that sampled all the relevant metastable states, you can analyze it in many different manners. In particular, you can compute the average of any quantity just using the weights that we obtained. This can be done directly with PLUMED, as we have seen in PLUMED Masterclass 21.2: Statistical errors in MD, using the AVERAGE command. However, we will here use Python to compute averages, since this will make it easier to compute their statistical errors with bootstrap.
To complete this exercise you should compute the average value and the statistical error of the following quantities:
For each of these quantities, you should also compute the standard error. It is recommended here to compute the error using bootstrap. You can do so by adjusting a script along the lines of the sketch below.
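A minimal sketch follows, assuming the weights and collective variables are available in the file written above. The observable chosen here (the population of the positive-\( \phi \) basin) is only an illustration, and this simple frame bootstrap ignores time correlations as well as the fact that the WHAM weights themselves depend on the resampled data.

import numpy as np

data = np.loadtxt("bias_multi_wham.dat", comments="#")  # time, phi, psi, logweights
phi = data[:, 1]
w = np.exp(data[:, 3] - data[:, 3].max())
w /= w.sum()

# illustrative observable: population of the basin at positive phi
obs = (phi > 0).astype(float)
mean = np.sum(w * obs)

# bootstrap over frames (ignores correlations; see the caveat above)
rng = np.random.default_rng(1977)
nboot = 200
boot = np.empty(nboot)
for b in range(nboot):
    idx = rng.integers(0, len(obs), len(obs))
    wb = w[idx] / w[idx].sum()
    boot[b] = np.sum(wb * obs[idx])
error = boot.std()

print(f"population(phi>0) = {mean:.3f} +/- {error:.3f}")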
Repeat Exercise 4: Enhancing conformational transitions with multiple-windows umbrella sampling using topolB.tpr as the input file. This will make the simulations start from the other minimum. Plot the free energy as a function of \( \phi \) and as a function of \( \psi \). How does it differ from the one obtained when solving Exercise 4: Enhancing conformational transitions with multiple-windows umbrella sampling?