Execution environments on Big Red II at IU: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM)

At Indiana University, Big Red II runs the Cray Linux Environment (CLE), which provides two separate execution environments for running batch and large interactive jobs: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Each execution environment requires its own launch commands to place applications on Big Red II's compute nodes.

Extreme Scalability Mode (ESM)

The ESM environment is the native execution environment on Big Red II. It is designed to run large, complex, highly scalable applications, but does not provide the full set of Linux services needed to run standard cluster-based applications.

To execute an application on Big Red II's compute nodes in the ESM execution environment, you must invoke the aprun application launch command in your batch job script, or on the command line when your interactive job is ready to start.

Note: Submitting your batch or interactive job to TORQUE (using the qsub command) places your job on one of the aprun service nodes. These nodes have limited resources shared among all users on the system and are not intended for computational use. You must invoke the aprun command in your job script or on the aprun command line to launch your application on one or more compute nodes in the ESM execution environment.
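
For example, a minimal TORQUE batch script for an ESM job might look like the following sketch; the resource requests, queue name, and binary name (my_binary) are illustrative, not required values:

  #!/bin/bash
  #PBS -l nodes=2:ppn=32
  #PBS -l walltime=00:30:00
  #PBS -q cpu

  cd $PBS_O_WORKDIR

  # aprun launches the executable on the compute nodes; without it,
  # the program would run on the shared aprun service node instead
  aprun -n 64 my_binary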

Cluster Compatibility Mode (CCM)

The CCM environment provides the Linux services needed to run applications built for standard x86_64 cluster-based systems. The CCM execution environment emulates a Linux-based cluster, allowing you to launch standard applications that won't run in the ESM environment.

To run a batch job or large interactive job that executes an application on Big Red II's compute nodes in the CCM execution environment, you must:

  • Add the ccm module to your user environment with this module load command:
      module load ccm

    You can add this line to your TORQUE batch job script (after your TORQUE directives and before your executable lines), or invoke it on the command line when your interactive job is ready to start. To permanently add the ccm module to your user environment, add the line to your ~/.modules file; see In Modules, how do I save my environment with a .modules file?

  • Use the -l gres=ccm flag:
    • For batch jobs, include the -l gres=ccm flag as a TORQUE directive in your job script:
        #PBS -l gres=ccm
    • For interactive jobs, add the -l gres=ccm flag as an option on the qsub command line:
        qsub -I -l walltime=00:45:00,nodes=1:ppn=32 -l gres=ccm -q cpu
  • Invoke the appropriate command to place your job on a compute node:
    • For batch jobs, add the ccmrun application launch command to the beginning of the executable line in your batch job script (a complete example script follows this list):
        ccmrun mpirun -np 64 my_binary
    • For interactive jobs, invoke the ccmlogin command from the aprun command line; this places you on a compute node (e.g., nid00998) from which you can launch your application (e.g., MATLAB):
        dartmaul@aprun1:~> ccmlogin
        Warning: Permanently added '[nid00998]:203' (RSA) to the list of known hosts.
        dartmaul@nid00998:~> matlab

    Note: Submitting your batch or interactive job to TORQUE (using the qsub command) places your job on one of Big Red II's aprun service nodes. These nodes have limited resources shared among all users on the system and are not intended for computational use. The ccmrun command (for batch jobs) and the ccmlogin command (for interactive jobs) let you launch your application on one or more compute nodes in the emulated cluster (i.e., the CCM execution environment).
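
Putting these steps together, a minimal CCM batch script might look like the following sketch; the resource requests and binary name (my_binary) are illustrative:

  #!/bin/bash
  #PBS -l nodes=2:ppn=32
  #PBS -l walltime=00:45:00
  #PBS -l gres=ccm
  #PBS -q cpu

  # Load the ccm module after the TORQUE directives
  # and before the executable lines
  module load ccm

  cd $PBS_O_WORKDIR

  # ccmrun places the job on compute nodes in the emulated cluster
  ccmrun mpirun -np 64 my_binary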

More information

For more about the Cray Linux Environment (CLE), see Workload Management and Application Placement for the Cray Linux Environment (in PDF format).

For more about the aprun command, see its manual page online, or on the Big Red II command line, enter:

  man aprun

For more about the ccmrun and ccmlogin commands, see their manual pages; on the Big Red II command line, enter:

  man ccmrun
  man ccmlogin

For more about running batch jobs on Big Red II, see:

Support for this system is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups. If you have system-specific questions, contact the HPS group. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on this system, contact the SciAPT group.


This is document bdol in the Knowledge Base.
Last modified on 2017-07-25 09:00:58.
