Your problem is to determine whether a collection of argon molecules placed in a box will form an ordered structure at low temperature.
Molecular dynamics (MD) is a powerful simulation technique for studying the physical and chemical properties of solids, liquids, amorphous materials, and biological molecules. Although we know that quantum mechanics is the proper theory for molecular interactions, MD uses Newton’s laws as the basis of the technique and focuses on bulk properties, which do not depend much on small-r behaviors. In 1985, Car and Parrinello showed how MD can be extended to include quantum mechanics by applying density functional theory to calculate the force (Car and Parrinello, 1985). This technique, known as quantum MD, is an active area of research but is beyond the realm of this chapter.1) For those with more interest in the subject, there are full texts (Allen and Tildesley, 1987; Rapaport, 1995; Hockney, 1988) on MD and good discussions (Gould et al., 2006; Thijssen, 1999; Fosdick et al., 1996), as well as primers (Ercolessi, 1997) and codes (Nelson et al., 1996; Refson, 2000; Anderson et al., 2008) available online.
MD’s solution of Newton’s laws is conceptually simple, yet when applied to a very large number of particles becomes the “high school physics problem from hell.” Some approximations must be made in order not to have to solve the 10²³–10²⁵ equations of motion describing a realistic sample, but instead to limit the problem to ~10⁶ particles for protein simulations, and ~10⁸ particles for materials simulations. If we have some success, then it is a good bet that the model will improve if we incorporate more particles or more quantum mechanics, something that becomes easier as computing power continues to increase.
In a number of ways, MD simulations are similar to the thermal Monte Carlo simulations we studied in Chapter 17. Both typically involve a large number N of interacting particles that start out in some set configuration and then equilibrate into some dynamic state on the computer. However, in MD we have what statistical mechanics calls a microcanonical ensemble in which the energy E and volume V of the N particles are fixed. We then use Newton’s laws to generate the dynamics of the system. In contrast, Monte Carlo simulations do not start with first principles, but instead, incorporate an element of chance and have the system in contact with a heat bath at a fixed temperature rather than keeping the energy E fixed. This is called a canonical ensemble.
Because a system of molecules in an MD simulation is dynamic, the velocities and positions of the molecules change continuously. After the simulation has run long enough to stabilize, we will compute time averages of the dynamic quantities in order to deduce the thermodynamic properties. We apply Newton’s laws with the assumption that the net force on each molecule is the sum of the two-body forces with all other (N − 1) molecules:

$$ m \frac{d^2 \mathbf{r}_i}{dt^2} = \mathbf{f}_i = \sum_{j \ne i}^{N} \mathbf{f}_{ij}, \qquad i = 1, \ldots, N. $$
In writing these equations, we have ignored the fact that the force between argon atoms really arises from the particle–particle interactions of the 18 electrons and the nucleus that constitute each atom (Figure 18.1). Although it may be possible to ignore this internal structure when deducing the long-range properties of inert elements, it matters for systems such as polyatomic molecules that display rotational, vibrational, and electronic degrees of freedom as the temperature is raised.2)
The force on molecule i derives from the sum of molecule–molecule potentials:

$$ U = \sum_{i<j} u(r_{ij}), \qquad \mathbf{f}_i = -\nabla_{\mathbf{r}_i} U. $$
Figure 18.1 The molecule–molecule effective interaction arises from the many-body interaction of the electrons and nucleus in one molecule (circle) with the electrons and nucleus in another molecule (other circle). Note, the size of the nucleus at the center of each molecule is highly exaggerated, and real electrons have no size.
Here r_ij = |r_i − r_j| is the distance between the centers of molecules i and j, and the limits on the sums are such that no interaction is counted twice. Because we have assumed a conservative potential, the total energy of the system, that is, the potential plus kinetic energies summed over all particles, should be conserved over time. Nonetheless, in a practical computation we “cut the potential off” (assume u(rij) = 0) when the molecules are far apart. Because the derivative of the potential produces an infinite force at this cutoff point, energy will no longer be precisely conserved. Yet because the cutoff radius is large, the cutoff occurs only when the forces are minuscule, and so the violation of energy conservation should be small relative to approximation and round-off errors.
In a first-principles calculation, the potential between any two argon atoms arises from the sum over approximately 1000 electron–electron and electron–nucleus Coulomb interactions. A more practical calculation would derive an effective potential based on a form of many-body theory, such as Hartree–Fock or density functional theory. Our approach is simpler yet. We use the Lennard–Jones potential,

$$ u(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right]. \tag{18.6} $$
Here the parameter ε governs the strength of the interaction, and σ determines the length scale. Both are deduced by fits to data, which is why this potential is called a “phenomenological” potential.
Some typical values for the parameters and scales for the variables are given in Table 18.1. In order to make the program simpler and to avoid under- and overflows, it is helpful to measure all variables in the natural units formed by these constants. The interparticle potential and force then take the forms

$$ u(r) = 4\left(\frac{1}{r^{12}} - \frac{1}{r^{6}}\right), \qquad f(r) = -\frac{du}{dr} = \frac{48}{r}\left(\frac{1}{r^{12}} - \frac{1}{2r^{6}}\right). $$
Table 18.1 Parameters and scales for the Lennard-Jones potential.
Figure 18.2 The Lennard–Jones effective potential used in many MD simulations. Note the sign change at r = 1 and the minimum at r ≈ 1.1225 (natural units). Also note that because the r-axis does not extend to r = 0, the infinitely high central repulsion is not shown.
The Lennard–Jones potential is seen in Figure 18.2 to be the sum of a long-range attractive interaction ∝ 1/r⁶ and a short-range repulsive one ∝ 1/r¹². The change from repulsion to attraction occurs at r = σ. The minimum of the potential occurs at r = 2^{1/6}σ ≈ 1.1225σ, which would be the atom–atom spacing in a solid bound by this potential. The repulsive 1/r¹² term in the Lennard–Jones potential (18.6) arises when the electron clouds from two atoms overlap, in which case the Coulomb interaction and the Pauli exclusion principle force the electrons apart. The 1/r¹² term dominates at short distances and makes atoms behave like hard spheres. The precise value of the exponent 12 is not of theoretical significance (although its being large is) and was probably chosen because it is 2 × 6.
The 1/r⁶ term that dominates at large distances models the weak van der Waals induced dipole–dipole attraction between two molecules. The attraction arises from fluctuations in which at some instant in time a molecule on the right tends to be more positive on the left side, like a dipole ⇐. This in turn attracts the negative charge in a molecule on its left, thereby inducing a dipole ⇐ in it. As long as the molecules stay close to each other, the polarities continue to fluctuate in synchronization ⇐⇐ so that the attraction is maintained. The resultant dipole–dipole attraction behaves like 1/r⁶, and although much weaker than a Coulomb force, it is responsible for the binding of neutral, inert elements, such as argon for which the Coulomb force vanishes.
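In natural units, the potential and force are only a few lines of code. The following sketch is our own illustration (the names u_lj and f_lj are not from the chapter’s program); it confirms the sign change at r = 1 and the vanishing force at the minimum r = 2^{1/6}:

```python
def u_lj(r):
    """Lennard-Jones potential in natural units: u(r) = 4(1/r^12 - 1/r^6)."""
    return 4.0 * (r**-12 - r**-6)

def f_lj(r):
    """Magnitude of the LJ force, f = -du/dr = (48/r)(1/r^12 - 1/(2 r^6))."""
    return (48.0 / r) * (r**-12 - 0.5 * r**-6)

r_min = 2.0 ** (1.0 / 6.0)   # location of the potential minimum, ~1.1225
print(u_lj(1.0))             # 0.0: the sign change at r = sigma = 1
print(u_lj(r_min))           # ~ -1: depth of the well is epsilon = 1
print(f_lj(r_min))           # ~ 0: the force vanishes at the minimum
```

Evaluating f_lj at r slightly below r_min gives a positive (repulsive) value and slightly above a negative (attractive) one, matching the hard-core-plus-attraction picture described above.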
We assume that the number of particles is large enough to use statistical mechanics to relate the results of our simulation to the thermodynamic quantities. The simulation is valid for any number of particles, but the use of statistics requires large numbers. The equipartition theorem tells us that, on the average, for molecules in thermal equilibrium at temperature T, each degree of freedom has an energy k_B T/2 associated with it, where k_B = 1.38 × 10⁻²³ J/K is Boltzmann’s constant. A simulation provides the kinetic energy of translation3):

$$ \mathrm{KE} = \frac{1}{2} m \sum_{i=1}^{N} v_i^2. $$
The time average of KE (three degrees of freedom) is related to temperature by

$$ \langle \mathrm{KE} \rangle = \frac{3N}{2} k_B T \quad\Longrightarrow\quad k_B T = \frac{2\,\langle \mathrm{KE} \rangle}{3N}. $$
The system’s pressure P is determined by a version of the Virial theorem,

$$ PV = N k_B T + \frac{\langle w \rangle}{3}, \qquad w = \sum_{i<j} \mathbf{r}_{ij} \cdot \mathbf{f}_{ij}, $$
where the Virial w is a weighted average of the forces. Note that because ideal gases have no intermolecular forces, their Virial vanishes and we have the ideal gas law. The pressure is thus

$$ P = \rho \left( k_B T + \frac{\langle w \rangle}{3N} \right), $$
where ρ = N/V is the density of the particles.
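In natural units (k_B = m = 1), these relations reduce to a few lines. A sketch under those assumptions (the function names are ours, not from MD.py):

```python
def temperature(vels):
    """Equipartition in 3D, natural units (k_B = m = 1):
    T = 2<KE>/(3N) = (sum of v^2) / (3N)."""
    N = len(vels)
    v2 = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in vels)  # = 2*KE for m = 1
    return v2 / (3.0 * N)

def pressure(rho, T, w_avg, N):
    """P = rho * (T + <w>/(3N)); an ideal gas has w = 0 and so P = rho*T."""
    return rho * (T + w_avg / (3.0 * N))

# Three particles each moving with unit speed along x: T = 3/(3*3) = 1/3
T = temperature([(1.0, 0.0, 0.0)] * 3)
P = pressure(0.5, 2.0, 0.0, 10)   # ideal-gas limit: P = 0.5 * 2.0 = 1.0
```

In a real run, temperature() would be fed the instantaneous velocities at each step and the results time-averaged after equilibration, as described above.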
Although we start the system off with a velocity distribution characteristic of a definite temperature, this is not a true temperature of the system because the system is not in equilibrium initially, and there will be a redistribution of energy between KE and PE (Thijssen, 1999). Note that this initial randomization is the only place where chance enters into our MD simulation, and it is there to speed the simulation along. Indeed, in Figure 18.3 we show results of simulations in which the molecules are initially at rest and equally spaced. Once started, the time evolution is determined by Newton’s laws, in contrast to Monte Carlo simulations, which are inherently stochastic. We produce a Gaussian (Maxwellian) velocity distribution with the methods discussed in Chapter 4. In our sample code, we take the average of uniform random numbers 0 ≤ r_k ≤ 1 to produce an approximately Gaussian distribution with mean 1/2. We then subtract this mean value to obtain a distribution about 0.
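A minimal sketch of this initialization (the choice of K = 12 terms in the average is ours; the central limit theorem makes the sum approximately Gaussian for any moderate K):

```python
import random

def gaussian_velocity(K=12):
    """Average K uniform randoms on [0, 1] (mean 1/2), then subtract 1/2.
    By the central limit theorem the result is approximately Gaussian
    with zero mean and variance 1/(12 K)."""
    return sum(random.random() for _ in range(K)) / K - 0.5

random.seed(1)                       # reproducible initial conditions
vels = [gaussian_velocity() for _ in range(10_000)]
mean = sum(vels) / len(vels)         # close to 0 after subtracting the mean
```

Each velocity is bounded in (−1/2, 1/2); in a production code the distribution would also be rescaled so that the initial kinetic energy corresponds to the desired starting temperature.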
Figure 18.3 (a) Two frames from an animation showing the results of a 1D MD simulation that starts with uniformly spaced atoms. Note the unequal spacing resulting from an image atom moving in from the left after an atom exited on the right. (b) Two frames from the animation of a 2D MD simulation showing the initial and an equilibrated state. Note how the atoms start off in a simple cubic arrangement but then equilibrate to a face-centered-cubic lattice. In both cases, the atoms remain confined as a result of the interatomic forces.
It is easy to believe that a simulation of 10²³ molecules should predict bulk properties well, but with typical MD simulations employing only 10³–10⁶ particles, one must be clever to make less seem like more. Furthermore, because computers are finite, the molecules in the simulation are constrained to lie within a finite box, which inevitably introduces artificial surface effects arising from the walls. Surface effects are particularly significant when the number of particles is small because then a large fraction of the molecules reside near the walls. For example, if 1000 particles are arranged in a 10 × 10 × 10 cube, there are 10³ − 8³ = 488 particles within one unit of the surface, that is, 49% of the molecules, while for 10⁶ particles this fraction falls to 6%.
The imposition of periodic boundary conditions (PBCs) strives to minimize the shortcomings of both the small numbers of particles and of artificial boundaries. Although we limit our simulation to an Lx × Ly × Lz box, we imagine this box being replicated to infinity in all directions (Figure 18.4). Accordingly, after each time-integration step we examine the position of each particle and check if it has left the simulation region. If it has, then we bring an image of the particle back through the opposite boundary (Figure 18.4):

$$ x \;\Rightarrow\; \begin{cases} x + L_x, & \text{if } x < 0, \\ x - L_x, & \text{if } x > L_x, \end{cases} $$

and similarly for the y and z coordinates.
Consequently, each box looks the same and has continuous properties at the edges. As shown by the one-headed arrows in Figure 18.4, if a particle exits the simulation volume, its image enters from the other side, and so balance is maintained.
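The wrapping is one comparison per coordinate; a minimal sketch (the function name is ours):

```python
def wrap(x, L):
    """Periodic boundary condition for one coordinate: if the particle has
    left the box of length L, bring its image back through the opposite
    wall, so each box looks the same at its edges."""
    if x < 0.0:
        x += L
    elif x > L:
        x -= L
    return x

# A particle that drifts past either wall re-enters from the other side
print(wrap(-0.3, 10.0))   # re-enters near 9.7
print(wrap(10.4, 10.0))   # re-enters near 0.4
```

This form assumes a particle moves less than one box length per step, which holds for any reasonable time step h.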
In principle, a molecule interacts with all other molecules and their images, so despite the fact that there is a finite number of atoms in the interaction volume, there is an effectively infinite number of interactions (Ercolessi, 1997). Nonetheless, because the Lennard–Jones potential falls off so rapidly for large r, u(3σ) ≈ u(1.13σ)/200, far-off molecules do not contribute significantly to the motion of a molecule, and so we pick a cutoff radius r_cut beyond which we ignore the effect of the potential:

$$ u(r) = \begin{cases} 4\left(r^{-12} - r^{-6}\right), & r < r_{\mathrm{cut}}, \\ 0, & r \ge r_{\mathrm{cut}}. \end{cases} \tag{18.14} $$
Figure 18.4 The infinite space generated by imposing periodic boundary conditions on the particles within the simulation volume (shaded box). The two-headed arrows indicate how a particle interacts with the nearest version of another particle, be that within the simulation volume or an image. The vertical arrows indicate how the image of particle 4 enters when the actual particle 4 exits.
Accordingly, if the simulation region is large enough that the potential is negligible beyond half the box size (r_cut ≤ L/2), an atom interacts with only the nearest image of another atom.
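This “nearest image” rule, the minimum-image convention, is also just one comparison per coordinate of the separation vector; a sketch (function name is ours):

```python
def min_image(dx, L):
    """Minimum-image convention for one component of the separation between
    two atoms: replace dx by the separation to the nearest periodic image.
    Valid when the cutoff satisfies r_cut <= L/2."""
    if dx > 0.5 * L:
        dx -= L
    elif dx < -0.5 * L:
        dx += L
    return dx

# Atoms at x = 0.5 and x = 9.5 in a box of L = 10 are really 1 unit apart,
# through the boundary, not 9 units apart across the box:
print(min_image(9.0, 10.0))   # -1.0
```

The force on each atom is then computed from these minimum-image separations, so each pair is counted once whether the nearest partner is real or an image.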
As already indicated, a shortcoming with the cutoff potential (18.14) is that because the derivative du/dr is singular at r = rcut, the potential is no longer conservative and thus energy conservation is no longer ensured. However, because the forces are already very small at rcut, the violation will also be very small.
A realistic MD simulation may require integration of the 3D equations of motion for 10¹⁰ time steps for each of 10³–10⁶ particles. Although we could use our standard rk4 ODE solver for this, time is saved by using a simple rule embedded in the program. The Verlet algorithm uses the central-difference approximation (Chapter 5) for the second derivative to advance the solutions by a single time step h for all N particles simultaneously:

$$ \mathbf{r}_i(t+h) \simeq 2\,\mathbf{r}_i(t) - \mathbf{r}_i(t-h) + h^2 \mathbf{F}_i(t) + O(h^4), \tag{18.16} $$
where we have set m = 1. (Improved algorithms may vary the time step depending upon the speed of the particle.) Note that although the atom–atom force does not have an explicit time dependence, we include a t dependence in it as a way of indicating its dependence upon the atoms’ positions at a particular time. Because this is really an implicit time dependence, energy remains conserved.
Part of the efficiency of the Verlet algorithm (18.16) is that it solves for the position of each particle without requiring a separate solution for the particle’s velocity. However, once we have deduced the position for various times, we can use the central-difference approximation for the first derivative of r_i to obtain the velocity:

$$ \mathbf{v}_i(t) \simeq \frac{\mathbf{r}_i(t+h) - \mathbf{r}_i(t-h)}{2h} + O(h^2). $$
Finally, note that because the Verlet algorithm needs r from two previous steps, it is not self-starting and so we start it with the forward difference

$$ \mathbf{r}_i(h) \simeq \mathbf{r}_i(0) + h\,\mathbf{v}_i(0). $$
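The algorithm can be sketched in a few lines. This is our own illustration, not the chapter’s MD.py; for a self-contained test we use a harmonic-oscillator force F = −r (with m = 1) in place of the LJ force, whose exact solution r(t) = cos t lets us check the integrator:

```python
import math

def verlet(force, r0, v0, h, steps):
    """Position Verlet, m = 1: r(t+h) = 2 r(t) - r(t-h) + h^2 F(t).
    Not self-starting, so the first step uses the forward difference
    r(h) = r(0) + h v(0)."""
    r_prev, r = r0, r0 + h * v0          # forward-difference start
    traj = [r0, r]
    for _ in range(steps - 1):
        r_prev, r = r, 2.0 * r - r_prev + h * h * force(r)
        traj.append(r)
    return traj

# Harmonic oscillator F = -r as a stand-in for the LJ force
traj = verlet(lambda r: -r, 1.0, 0.0, 0.01, 1000)
print(abs(traj[-1] - math.cos(10.0)))   # small: tracks the exact cos(t)
```

An MD code applies the same two-line update to every component of every particle’s position, with force() evaluating the pairwise LJ sums.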
Velocity–Verlet Algorithm Another version of the Verlet algorithm, which we recommend because of its increased stability, uses a forward-difference approximation for the derivative to advance both the position and velocity simultaneously:

$$ \mathbf{r}_i(t+h) \simeq \mathbf{r}_i(t) + h\,\mathbf{v}_i(t) + \frac{h^2}{2}\,\mathbf{F}_i(t), $$
$$ \mathbf{v}_i(t+h) \simeq \mathbf{v}_i(t) + \frac{h}{2}\left[\mathbf{F}_i(t+h) + \mathbf{F}_i(t)\right]. \tag{18.21} $$
Although this algorithm appears to be of lower order than (18.16), the use of updated positions when calculating velocities, and the subsequent use of these velocities, make both algorithms of similar precision.
Of interest is that (18.21) approximates the average force during a time step as [F_i(t+h) + F_i(t)]/2. Updating the velocity is a little tricky because we need the force at time t + h, which depends on the particle positions at t + h. Consequently, we must update all the particle positions and forces to t + h before we update any velocities, while saving the forces at the earlier time for use in (18.21). As soon as the positions are updated, we impose periodic boundary conditions to ensure that we have not lost any particles, and then we calculate the forces.
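The required ordering of updates can be sketched as follows. This is our own illustration of one velocity-Verlet step (m = 1), not the code of MD.py, again tested with a harmonic-oscillator force standing in for the LJ force:

```python
import math

def velocity_verlet_step(r, v, F, force, h):
    """One velocity-Verlet step, m = 1.  Positions are advanced first,
    the forces are recomputed at t + h, and only then are the velocities
    advanced using the averaged force [F(t) + F(t+h)]/2."""
    r_new = [ri + h * vi + 0.5 * h * h * Fi for ri, vi, Fi in zip(r, v, F)]
    F_new = force(r_new)                  # forces at t + h, old F kept for v
    v_new = [vi + 0.5 * h * (Fi + Fni) for vi, Fi, Fni in zip(v, F, F_new)]
    return r_new, v_new, F_new

# Harmonic-oscillator stand-in: exact solution r(t) = cos t, E = 1/2
force = lambda r: [-ri for ri in r]
r, v = [1.0], [0.0]
F = force(r)
for _ in range(1000):                     # integrate to t = 10
    r, v, F = velocity_verlet_step(r, v, F, force, 0.01)
```

In an MD code, the periodic-boundary wrap would be applied to r_new before force() is called, exactly as described above.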
In the supplementary materials to this book, you will find a number of 2D animations (movies) of solutions to the MD equations. Some frames from these animations are shown in Figure 18.3. We recommend that you look at the movies in order to better visualize what the particles do during an MD simulation. In particular, these simulations use a potential and temperature that should lead to a solid or liquid system, and so you should see the particles binding together.
Listing 18.1 MD.py performs a 2D MD simulation with a small number of rather large time steps for just a few particles. To be realistic, the user should change the parameters and the number of random numbers added to form the Gaussian distribution.
The program MD.py in Listing 18.1 implements an MD simulation in 1D using the velocity-Verlet algorithm. Use it as a model and do the following:
Use a modulus operation such as i % 100 to limit the amount of printed output.
Figure 18.5 The kinetic, potential, and total energy as a function of time or number of steps for a 2D MD simulation with 36 particles (a), and 300 particles (b), both with an initial temperature of 150 K. The potential energy is negative, the kinetic energy is positive, and the total energy is seen to be conserved (flat).
Figure 18.6 (a) The temperature after equilibration as a function of initial kinetic energy for a 2D MD simulation with 36 particles. The two are nearly proportional. (b) The pressure vs. temperature for a simulation with several hundred particles. An ideal gas (noninteracting particles) would yield a straight line (courtesy of J. Wetzel).
Figure 18.7 A simulation of a projectile shot into a group of particles. The energy introduced by the projectile is seen to lead to evaporation of the particles (courtesy of J. Wetzel).