ARCHIVED: Execution environments on Big Red II at IU: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM)

This content has been archived, and is no longer maintained by Indiana University. Information here may no longer be accurate, and links may no longer be available or reliable.

At Indiana University, Big Red II runs the Cray Linux Environment (CLE), which provides two separate execution environments for running batch and large interactive jobs: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Both execution environments require the use of unique commands to properly launch applications on Big Red II's compute nodes.

On this page:

  • Extreme Scalability Mode (ESM)
  • Cluster Compatibility Mode (CCM)
  • More information

Note:

Big Red II was retired from service on December 15, 2019; for more, see ARCHIVED: About Big Red II at Indiana University (Retired).


Extreme Scalability Mode (ESM)

The ESM environment is the native execution environment on Big Red II. It is designed to run large, complex, highly scalable applications, but does not provide the full set of Linux services needed to run standard cluster-based applications.

To execute an application on Big Red II's compute nodes in the ESM execution environment, you must invoke the aprun application launch command in your batch job script, or on the command line when your interactive job is ready to start.

Note:
Submitting your batch or interactive job to TORQUE (using the qsub command) places your job on one of the aprun service nodes. These nodes have limited resources shared among all users on the system and are not intended for computational use. You must invoke the aprun command in your job script or from the aprun command line to launch your application on one or more compute nodes in the ESM execution environment.
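
For illustration only, a minimal TORQUE batch script for an ESM job might look like the following sketch; the resource requests, walltime, queue name, and executable name (my_binary) are placeholders, not values prescribed by this document:

#!/bin/bash
#PBS -l nodes=2:ppn=32
#PBS -l walltime=00:30:00
#PBS -q cpu

cd $PBS_O_WORKDIR

# aprun launches the executable on the allocated compute nodes;
# -n sets the total number of processing elements (here, 2 nodes x 32 cores)
aprun -n 64 ./my_binary

Without the aprun line, my_binary would execute on the aprun service node rather than on the compute nodes.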

Cluster Compatibility Mode (CCM)

The CCM environment provides the Linux services needed to run applications that run on most standard x86_64 cluster-based systems. The CCM execution environment emulates a Linux-based cluster, allowing you to launch standard applications that won't run in the ESM environment.

To run a batch job or large interactive job that executes an application on Big Red II's compute nodes in the CCM execution environment, you must complete the following steps (a complete example batch script appears after this list):

  • Add the ccm module to your user environment with this module load command:
    module load ccm

    You can add this line to your TORQUE batch job script (after your TORQUE directives and before your executable lines), or invoke it on the command line when your interactive job is ready to start. To permanently add the ccm module to your user environment, add the line to your ~/.modules file; see ARCHIVED: Use a .modules file in your home directory to save your user environment on an IU research supercomputer.

  • Use the -l gres=ccm flag:
    • For batch jobs, include the -l gres=ccm flag as a TORQUE directive in your job script:
      #PBS -l gres=ccm
    • For interactive jobs, add the -l gres=ccm flag as an option on the qsub command line:
      qsub -I -l walltime=00:45:00 -l nodes=1:ppn=32 -l gres=ccm -q cpu
  • Invoke the appropriate command to place your job on a compute node:
    • For batch jobs, add the ccmrun application launch command to the beginning of the executable line in your batch job script:
      ccmrun mpirun -np 64 my_binary
    • For interactive jobs, invoke the ccmlogin command from the aprun command line; this places you on a compute node (for example, nid00998) from which you can launch your application (for example, MATLAB):
      dartmaul@aprun1:~> ccmlogin
      Warning: Permanently added '[nid00998]:203' (RSA) to the list of known hosts.
      dartmaul@nid00998:~> matlab
    Note:
    Submitting your batch or interactive job to TORQUE (using the qsub command) places your job on one of Big Red II's aprun service nodes. These nodes have limited resources shared among all users on the system and are not intended for computational use. The ccmrun command (for batch jobs) and the ccmlogin command (for interactive jobs) let you launch your application on one or more compute nodes in the emulated cluster (the CCM execution environment).
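
Putting these steps together, a complete CCM batch job script might look like the following sketch; the resource requests and queue name are placeholders borrowed from the interactive example above, and my_binary is a hypothetical MPI application:

#!/bin/bash
#PBS -l nodes=2:ppn=32
#PBS -l walltime=00:45:00
#PBS -l gres=ccm
#PBS -q cpu

cd $PBS_O_WORKDIR

# Load the ccm module (unless it is already loaded via your ~/.modules file)
module load ccm

# ccmrun places the job on compute nodes in the emulated cluster;
# mpirun then starts 64 MPI processes across the allocated nodes
ccmrun mpirun -np 64 ./my_binary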

More information

For more about the Cray Linux Environment (CLE), see the Cray documents About CLE and User Application Placement in CLE.

For more about the aprun command, see Run Applications Using the aprun Command. Alternatively, view the aprun manual page on Big Red II; on the command line, enter:

man aprun

For more about the ccmrun and ccmlogin commands, see their manual pages; on the Big Red II command line, enter:

man ccmrun
man ccmlogin

For more about running batch jobs on Big Red II, see:

Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

