ARCHIVED: Use LAMMPS on Big Red II at IU

This content has been archived, and is no longer maintained by Indiana University. Information here may no longer be accurate, and links may no longer be available or reliable.

Note:

Big Red II was retired from service on December 15, 2019; for more, see ARCHIVED: About Big Red II at Indiana University (Retired).


Overview

LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics simulation code designed to run efficiently on parallel computers. LAMMPS models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. Developed at Sandia National Laboratories, LAMMPS is open source code distributed freely under the terms of the GNU General Public License (GPL). For more, see the LAMMPS home page and the LAMMPS Documentation.

Input scripts and commands

LAMMPS executes by reading a text-based input file containing a sequence of commands for configuring and running your simulation. A typical LAMMPS input script contains commands that perform four fundamental tasks:

  • Initializing the simulation: These commands (for example, units, newton, boundary, and atom_style) set parameters that must be defined before atoms are created or read from a data file. If a data file contains force field parameters, other commands (e.g., pair_style, bond_style, angle_style, and improper_style) tell LAMMPS which kinds of force fields are being used.
  • Defining the atoms: The read_data (or read_restart) command reads in a data (or restart) file containing information for defining atoms and molecular topology. Together, the lattice, region, create_box, and create_atoms commands create atoms on a lattice. The replicate command duplicates an entire set of atoms to create a larger simulation.
  • Configuring the settings: Once atoms are defined, several commands configure settings for the simulation:
    • Force field coefficients (for example, pair_coeff, bond_coeff, angle_coeff, and kspace_style)
    • Simulation parameters (for example, neighbor, neigh_modify, timestep, run_style, and min_style)
    • Boundary conditions, time integrations, and diagnostic options (for example, fix)
    • Computations to be executed during the simulation (e.g., compute, compute_modify, and variable)
    • Output options (for example, thermo, dump, and restart)
  • Running the simulation: The run N command launches the simulation (replace N with the desired number of timesteps). Alternatively, the minimize command launches an energy minimization; temper launches a parallel tempering simulation.

Most commands have default settings, so you need to include them in your input script only when you want to change the defaults. LAMMPS reads your input script one line at a time, executing each command as it's read; consequently, the order of commands in the script is meaningful. When your input script ends, LAMMPS exits.

Following is an example to illustrate the structure of a LAMMPS input script:

# Rhodopsin model

units           real
neigh_modify    delay 5 every 1

atom_style      full
bond_style      harmonic
angle_style     charmm
dihedral_style  charmm
improper_style  harmonic
pair_style      lj/charmm/coul/long 8.0 10.0
pair_modify     mix arithmetic
kspace_style    pppm 1e-4

read_data       data.rhodo

fix             1 all shake 0.0001 5 0 m 1.0 a 232
fix             2 all npt temp 300.0 300.0 100.0 &
z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1

special_bonds   charmm

thermo          50
thermo_style    multi
timestep        2.0

run             100

In the above example:

  • The pair_style command specifies the lj/charmm/coul/long pair style. For more on this command, see the pair_style command page in the LAMMPS Documentation.
  • The kspace_style command specifies the pppm (particle-particle particle-mesh) solver should be used to compute long-range Coulombic interactions. For more on this command, see the kspace_style command page in the LAMMPS Documentation.
  • The read_data command reads in the data.rhodo data file (which, for this example, is located in the same directory as the input script). The data file contains information about the size of the problem to be run, the initial atomic coordinates, molecular topology, and force-field coefficients. For more on the read_data command and an explanation of data file formatting, see the read_data command page in the LAMMPS Documentation.
  • The run command runs the simulation for 100 timesteps. For more about the run command, see the run command page in the LAMMPS Documentation.

For more about LAMMPS input script commands, see the Commands section of the LAMMPS Documentation.

Accelerator packages with optimized style options

To improve performance of simulations on systems equipped with NVIDIA GPUs and/or multi-core CPUs (for example, Big Red II), optional accelerator packages are available in LAMMPS. Accelerator packages include optimized versions of some standard style options, which you can enable to further improve the efficiency of your simulation. If a package-specific variant of a given style option does not exist, LAMMPS automatically falls back to the standard style option.

Optimized style options have the same names as their standard counterparts but with package-specific suffixes appended. For example:

Standard style option: lj/cut/coul/long
GPU variant: lj/cut/coul/long/gpu
OpenMP variant: lj/cut/coul/long/omp

Methods for enabling and configuring accelerator packages and their optimized style options differ between packages. For example:

  • USER-CUDA: To enable the USER-CUDA accelerator package, use the -cuda on option on the command line when launching LAMMPS.

    To enable use of any available USER-CUDA style options, include -suffix cuda on the command line, as well. To selectively enable a particular USER-CUDA style option, specify it explicitly in your input script; for example:

    pair_style lj/cut/coul/long/cuda
    

    Using -suffix cuda on the command line sets the same defaults as adding the package cuda gpu 2 command to your input script. To alter the default -suffix cuda settings at run-time, include the package cuda command (with the desired package-specific settings) near the top of your input script.
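    For instance, a minimal sketch of the top of an input script that overrides the -suffix cuda defaults might look like the following (the GPU count and cutoff values are illustrative assumptions, not recommendations):

    ```
    # Near the top of the input script, before atoms are defined:
    package     cuda gpu 1                     # assumption: use 1 GPU per node instead of the default 2

    # Explicitly request the USER-CUDA variant of a pair style:
    pair_style  lj/cut/coul/long/cuda 8.0 10.0
    ```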

  • GPU: To run the GPU package, your input script must explicitly disable the Newton setting; to do so, include newton off as the first command in your input script. (By default, LAMMPS applies Newton's third law of motion to pairwise and bonded interactions.)

    To enable the GPU accelerator package and its corresponding style options, include -suffix gpu on the command line when launching LAMMPS. To selectively enable a particular GPU style option, specify it explicitly in your input script; for example:

    pair_style lj/cut/coul/long/gpu
    

    Using -suffix gpu on the command line sets the same defaults as adding the package gpu force/neigh 0 0 1 command to your input script. To alter the default -suffix gpu settings at run-time, include the package gpu command (with the desired package-specific settings) near the top of your input script.

    For more, see the GPU package page in the LAMMPS Documentation.
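    Putting the requirements above together, a minimal sketch of the top of a GPU-package input script might look like this (the cutoff values are illustrative assumptions):

    ```
    newton      off                            # required: the GPU package needs the Newton setting off
    package     gpu force/neigh 0 0 1          # same defaults that -suffix gpu sets
    pair_style  lj/cut/coul/long/gpu 8.0 10.0  # GPU variant requested explicitly
    ```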

  • USER-OMP: To enable the USER-OMP accelerator package and its corresponding style options, include -suffix omp on the command line when launching LAMMPS. To selectively enable a particular USER-OMP style option, specify it explicitly in your input script; for example:

    pair_style lj/cut/coul/long/omp
    

    Using -suffix omp on the command line sets the same defaults as adding the package omp * command to your input script. To alter the default -suffix omp settings at run-time, include the package omp command (with the desired package-specific settings) near the top of your input script.

    For more, see the USER-OMP package page in the LAMMPS Documentation.
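    By analogy with the GPU example, a sketch of the top of a USER-OMP input script (the cutoff values are illustrative assumptions):

    ```
    package     omp *                          # use all available OpenMP threads, as -suffix omp would
    pair_style  lj/cut/coul/long/omp 8.0 10.0  # USER-OMP variant requested explicitly
    ```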

Performance improvements from accelerator packages depend on a variety of factors; for details, see Accelerate performance in the LAMMPS Documentation.

For more about the package command, see the package command page in the LAMMPS Documentation. For more about -suffix and other command-line switches LAMMPS recognizes, see the Command-line options section of the LAMMPS Documentation.

Run a GPU-accelerated LAMMPS simulation on Big Red II

At Indiana University, you can run GPU-accelerated LAMMPS simulations on Big Red II. Follow the instructions below for help setting up your user environment, preparing an input script and a batch job script, and submitting and monitoring your job.

Set up your user environment

Several different versions of LAMMPS are installed on Big Red II, each requiring a different set of prerequisite modules that must also be added to your user environment. In some cases, several different versions of the prerequisite modules are installed, as well. If you need help determining which modules you should load to properly set up your Big Red II user environment for running LAMMPS simulations, contact the UITS Research Applications and Deep Learning team.

To see which modules are currently loaded, on the command line, enter:

module list

If a programming environment module other than the one LAMMPS requires is loaded, use the module swap command to replace it with the required module; for example:

module swap PrgEnv-cray PrgEnv-gnu

Use module load commands to add any missing prerequisite packages and the LAMMPS module.
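For example, assuming your simulation uses the GNU programming environment and the 15May15 GPU-enabled build referenced later in this document, the sequence might look like:

```
module list                          # check which modules are currently loaded
module swap PrgEnv-cray PrgEnv-gnu   # switch to the GNU programming environment
module load lammps/gnu/gpu/15May15   # load the GPU-enabled LAMMPS build
```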

Prepare an input script

To run a GPU-accelerated LAMMPS simulation, your input script may include commands that explicitly invoke an available accelerator package and its corresponding optimized style options (as discussed above in the Accelerator packages with optimized style options section); for example:

package gpu force/neigh 0 0 1.0
pair_style hybrid eam/gpu morse/gpu 2.21392

To see which accelerator packages and style options are available for LAMMPS on Big Red II:

  1. From the Big Red II login node, submit a short interactive job request; for example, on the command line, enter:
    qsub -I -l walltime=00:10:00 -l nodes=1:ppn=32 -q cpu
    
  2. When the job starts, and you are placed on one of Big Red II's compute nodes (for example, aprun8):
    1. Load the lammps/gnu/gpu/15May15 module; on the command line, enter:
      module load lammps/gnu/gpu/15May15
      
    2. Launch the LAMMPS executable with the -h option; on the command line, enter:
      aprun -n 1 /N/soft/cle4/lammps/lammps-15May15/bin/lmp_xe6 -h
      

Example input scripts are available on Big Red II at:

/N/soft/cle4/lammps/lammps-15May15/examples

For information about the examples, access the README file on Big Red II:

/N/soft/cle4/lammps/lammps-15May15/examples/README
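
To try one of the bundled examples, you could copy its directory into your own space and run it from within an interactive job; in this sketch, the melt example directory and its in.melt input file are assumptions (check the README for the directories actually present):

```
cp -r /N/soft/cle4/lammps/lammps-15May15/examples/melt ~/work_directory
cd ~/work_directory/melt
aprun -n 1 /N/soft/cle4/lammps/lammps-15May15/bin/lmp_xe6 < in.melt
```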

For complete documentation about LAMMPS input scripts and commands, see the Commands section of the LAMMPS Documentation.

Prepare a batch job script

To run LAMMPS on Big Red II, your batch job script (for example, ~/work_directory/my_job_script.pbs) must:

  • Specify the resource requirements and other parameters appropriate for your job.
  • Invoke the aprun command to properly launch the LAMMPS executable (lmp_xe6).

Following is a sample job script for running a GPU-accelerated LAMMPS simulation across four hybrid CPU/GPU nodes in the native Extreme Scalability Mode (ESM) execution environment on Big Red II:

#!/bin/bash 

#PBS -l nodes=4:ppn=16,walltime=3:00:00
#PBS -q gpu
#PBS -o out.log
#PBS -e err.log

cd $PBS_O_WORKDIR 
aprun -n 4 -N 1 lmp_xe6 -cuda off -suffix gpu < in.cmdfile

In the above sample script:

  • The -q gpu TORQUE directive routes the simulation to the gpu queue.
  • The cd $PBS_O_WORKDIR line changes to the directory where the job was submitted and where the input files are located.
  • When invoking aprun on the CPU/GPU nodes, the -n argument specifies the total number of nodes (not the total number of processing elements), and the -N argument specifies the number of GPUs per node, which on Big Red II is one (for example, -N 1).
  • The -suffix gpu option enables the use of any available GPU style options by commands in your input script.

Submit and monitor your job

To submit your job script (such as ~/work_directory/my_job_script.pbs), use the TORQUE qsub command; for example, on the command line, enter:

  qsub [options] ~/work_directory/my_job_script.pbs

For a full description of the qsub command and available options, see its manual page.

To monitor the status of your job, use any of the following methods:

  • Use the TORQUE qstat command; on the command line, enter (replace username with the IU username you used to submit the job):
      qstat -u username
    

    For a full description of the qstat command and available options, see its manual page.

  • Use the Moab checkjob command; on the command line enter (replace job_id with the ID number assigned to your job):
      checkjob job_id
    

    For a full description of the checkjob command and available options, see its manual page.

Get help

For more about running batch jobs on Big Red II, see ARCHIVED: Run batch jobs on Big Red II.

If you have questions about using LAMMPS on Big Red II, or need help, contact the UITS Research Applications and Deep Learning team.

Research computing support at IU is provided by the Research Technologies division of UITS. To ask a question or get help regarding Research Technologies services, including IU's research supercomputers and research storage systems, and the scientific, statistical, and mathematical applications available on those systems, contact UITS Research Technologies. For service-specific support contact information, see Research computing support at IU.

This is document beus in the Knowledge Base.
Last modified on 2023-04-21 16:55:31.