About Big Red II at Indiana University

System overview

Note:

Big Red II will be retired from service on December 15, 2019. After that date, you will no longer be able to log into Big Red II; however, the data in your Big Red II home directory will remain accessible from your home directory on any of the other IU research supercomputers. Requests for new software installations on Big Red II are currently being redirected to Big Red 3 unless the requested software has requirements specific to Big Red II.

IU graduate students, faculty, and staff can create Big Red 3 accounts using the instructions in Get additional IU computing accounts. Undergraduate students and affiliates can get Big Red 3 accounts if they are sponsored by full-time IU faculty or staff members; see About Big Red 3, Big Red II, RDC, and SDA accounts for undergraduate students and sponsored affiliates at IU. Grand Challenges users who create Big Red 3 accounts can submit the Request Access to Specialized HPC Resources form to request exclusive access to a portion of the system for running jobs.

Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS) and a maximal achieved performance (Rmax) of 596.4 teraFLOPS, Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and to enable effective analysis of large, complex data sets (big data).

Big Red II features a hybrid architecture based on two Cray, Inc., supercomputer platforms. As configured upon entering production in August 2013, Big Red II comprised 344 XE6 (CPU-only) compute nodes and 676 XK7 "GPU-accelerated" compute nodes, all connected through Cray's Gemini scalable interconnect, providing a total of 1,020 compute nodes, 21,824 processor cores, and 43,648 GB of RAM. Each XE6 node has two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM; each XK7 node has one AMD Opteron 16-core Interlagos x86_64 CPU, 32 GB of RAM, and one NVIDIA Tesla K20 GPU accelerator.

Big Red II runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES).

The Data Capacitor II, DC-WAN2, Slate, and Slate-Project file systems are mounted for temporary storage of research data.

Although Big Red II is a local resource for use by the IU community, its presence has an important effect on the national cyberinfrastructure ecosystem. By providing IU researchers with the same technology, software environment, and hybrid architecture used in national supercomputer resources, such as Blue Waters at the National Center for Supercomputing Applications (NCSA), Big Red II meets relatively modest scientific computing needs locally, allowing larger national supercomputing assets to be efficiently used on challenging compute- and data-intensive projects. Big Red II also helps conserve nationally funded supercomputing assets by providing a powerful hybrid system IU scientists can use to fully optimize and tune their applications before migrating them to an Extreme Science and Engineering Discovery Environment (XSEDE) computational service.

System access

Access is available to IU graduate students, faculty, and staff. Undergraduates and non-IU collaborators must have IU faculty sponsors. For details, see the "Research system accounts (all campuses)" section of Computing accounts at IU.

Once your account is created, you can use any SSH2 client to access bigred2.uits.iu.edu. Log in with your IU username and passphrase, and then confirm your identity with Duo two-step login.
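
For example, from a terminal with an OpenSSH client (macOS, Linux, or recent Windows), a login looks like the following sketch; jdoe is a placeholder for your IU username:

  # Connect to Big Red II; you will be prompted for your IU passphrase
  # and then for Duo two-step verification.
  ssh jdoe@bigred2.uits.iu.edu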

Notes:
  • To set up SSH public-key authentication, you must submit the "SSH public-key authentication to HPS systems" user agreement (log into HPC everywhere using your IU username and passphrase), in which you agree to set a passphrase on your private key when you generate your key pair; a key-generation sketch follows these notes.
  • For enhanced security, SSH connections that have been idle for 60 minutes will be disconnected. To protect your data from misuse, remember to log off or lock your computer whenever you leave it.
  • The scheduled monthly maintenance window for IU's high-performance computing systems is the second Sunday of each month, 7am-7pm.
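
As a sketch of the key-generation step referenced in the first note above (run these commands on your own workstation, not on Big Red II; jdoe is a placeholder username, and the key type and size are only reasonable defaults):

  # Generate a key pair; when prompted, set a non-empty passphrase on the
  # private key, as required by the user agreement.
  ssh-keygen -t rsa -b 4096

  # Append the public key to ~/.ssh/authorized_keys on Big Red II.
  ssh-copy-id jdoe@bigred2.uits.iu.edu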

HPC software

The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high-performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log into the system, and then, on the command line, enter module avail.

For information about adding packages to your user environment, see Use Modules to manage your software environment on IU's research computing systems.
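
For instance, a typical command-line workflow with Modules looks like the following (the gcc module name is only an illustration; use module avail to see what is actually installed on the system you are using):

  module avail          # list all software packages available on the system
  module avail gcc      # list available versions of a particular package
  module load gcc       # add the package's default version to your environment
  module list           # show currently loaded modules
  module unload gcc     # remove the package from your environment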

To request software, submit the HPC Software Request form.

Set up your user environment

On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.

For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research computing systems.
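
For example, to have a set of modules loaded automatically every time you log in, you can list the corresponding module load commands in your ~/.modules file (the same file referenced in the batch-job notes later in this document); the specific modules shown here are only placeholders:

  # ~/.modules -- loaded automatically at login on Big Red II
  module load ccm
  module load gcc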

File storage options

Before storing data on this system, make sure you understand the information in the Work with data containing PHI section (below).

Data transfer nodes

Big Red II users can improve the efficiency and performance of their data transfers by using the system's dedicated data transfer nodes (DTNs). The DTNs feature hardware components that are tuned to optimize I/O throughput, enabling high-speed data transfers between Research Technologies resources at Indiana University and other remote hosts.

For details, see Use Big Red II data transfer nodes to improve the efficiency of data transfers.
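
For example, a bulk transfer from your workstation or another server can be pointed at a DTN instead of the login nodes; the hostname below is a placeholder (see the document linked above for the actual DTN hostnames), and jdoe and the directory names are illustrative:

  # Copy a local data set to your Big Red II home directory through a DTN.
  scp -r ./dataset jdoe@<dtn-hostname>:~/

  # rsync can resume interrupted transfers of large directories.
  rsync -av ./dataset/ jdoe@<dtn-hostname>:~/dataset/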

Work with data containing PHI

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).

This UITS system or service meets certain requirements established in the HIPAA Security Rule thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.

Note:
Although PHI is one type of Critical data, other types of institutional data classified as Critical are not permitted on Research Technologies systems. For help determining which institutional data elements classified as Critical are considered PHI, see About protected health information (PHI) data elements in the classifications of institutional data.

For more, see Your legal responsibilities for protecting data containing protected health information (PHI) when using UITS Research Technologies systems and services.

UITS provides consulting and online help for Indiana University researchers, faculty, and staff who need help securely processing, storing, and sharing data containing protected health information (PHI). If you have questions about managing HIPAA-regulated data at IU, contact UITS HIPAA Consulting. To learn more about properly ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. For additional details about HIPAA compliance at IU, see HIPAA Privacy and Security Compliance.

Run jobs on Big Red II

The Cray Linux Environment (CLE) provides two separate execution environments for running batch and large interactive jobs:

  • Extreme Scalability Mode (ESM): Applications (for example, AMBER, OpenFOAM, and NAMD) that are optimized for Cray environments will usually run in the ESM environment. If you compile your own application, you should target the ESM environment first (see the compile sketch following this list).
  • Cluster Compatibility Mode (CCM): Applications (for example, MATLAB and Ansys) that are developed to run on standard Linux clusters or single servers can run in the CCM environment. In general, the CCM environment will support any standard Linux application.

For more, see Execution environments on Big Red II at IU: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM).
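
For example, code you compile yourself for the ESM environment typically goes through the Cray compiler driver wrappers (cc, CC, and ftn for C, C++, and Fortran), which automatically link the MPI and interconnect libraries; the programming-environment module and file names below are assumptions, so check module avail for what is actually installed:

  # Optionally switch from the default Cray programming environment to GNU
  # (assumption: PrgEnv-cray is the loaded default).
  module swap PrgEnv-cray PrgEnv-gnu

  # Compile an MPI C program with the Cray wrapper for the ESM environment.
  cc -O2 -o my_esm_app my_esm_app.c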

Big Red II uses the TORQUE resource manager (based on OpenPBS) and the Moab Workload Manager to manage and schedule jobs. TORQUE job scripts must be tailored specifically for the Cray Linux Environment on Big Red II; see Run batch jobs on Big Red II. Moab uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and using Moab to check the status of batch jobs, see the related documents in the IU Knowledge Base.
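
For example, a few commonly used TORQUE and Moab commands for submitting and monitoring work are shown below (the script name and job ID are placeholders):

  qsub my_job.pbs       # submit a TORQUE job script; prints the job ID
  qstat -u $USER        # TORQUE: list your queued and running jobs
  showq -u $USER        # Moab: show your jobs as the scheduler sees them
  checkjob 123456       # Moab: detailed state, resources, and priority for one job
  qdel 123456           # cancel a job you no longer need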

Important:
For your application to execute on Big Red II's compute nodes, your batch job script must include the appropriate application launch command (aprun for ESM jobs; ccmrun for CCM jobs). Additionally, for CCM jobs, you must load the ccm module (add module load ccm to your ~/.modules file) and use the -l gres=ccm TORQUE directive in your job script. TORQUE scripts written for other Linux environments, such as Red Hat Enterprise Linux (RHEL) or CentOS, will not work on Big Red II without the proper modifications. If your script's executable line does not begin with the appropriate launch command, your application will execute on an aprun service node rather than a compute node, and will likely cause a service disruption for all users on the system. The aprun nodes are shared by all currently running jobs and are intended only for passing job requests to the compute nodes. Any memory- or computationally-intensive jobs running on aprun nodes will be terminated.
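
As a minimal sketch (queue names, resource requests, and executable names are placeholders; match them to your application and the queue tables below), an ESM batch script and a CCM variant might look like the following:

  #!/bin/bash
  # esm_job.pbs -- ESM job: the executable line must begin with aprun
  #PBS -q cpu
  #PBS -l nodes=2:ppn=32
  #PBS -l walltime=01:00:00
  cd $PBS_O_WORKDIR
  aprun -n 64 ./my_esm_app

  #!/bin/bash
  # ccm_job.pbs -- CCM job: add the gres=ccm directive and launch with ccmrun
  # (and make sure "module load ccm" is in your ~/.modules file)
  #PBS -q cpu
  #PBS -l nodes=1:ppn=32
  #PBS -l walltime=01:00:00
  #PBS -l gres=ccm
  cd $PBS_O_WORKDIR
  ccmrun ./my_ccm_app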

For details, see Run batch jobs on Big Red II.

Queue information

Note:
Each debug queue (debug_cpu and debug_gpu) has a maximum limit of two queued jobs per user. Across all queues, the maximum number of queued jobs allowed per user is 500.

CPU-only jobs

Big Red II has the following queues configured to accept jobs that will run on the 32-core dual-Opteron (CPU-only) nodes:

  • cpu: The routing queue for all "production" jobs; each job is routed, based on its resource requirements, to one of the execution queues (normal, serial, or long)
  • debug_cpu: An execution queue reserved for testing and debugging purposes only

Maximum values for each execution queue are defined in the following table.

32-core dual-Opteron (CPU-only) nodes
  Execution queue   Cores/node   Nodes/job   Wall time/job   Nodes/user
  (normal) *        32           128         2 days          128
  (serial) *        32           1           7 days          128
  (long) *          32           8           14 days         32
  debug_cpu         32           4           1 hour          4
* Do not submit jobs directly to the normal, serial, or long execution queues. Always use the cpu routing queue when submitting jobs for "production" runs. Use the debug queue for testing or debugging purposes only.
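
For example, you can direct a script to the proper queue either with a #PBS -q directive inside the script or on the qsub command line (the script name is a placeholder):

  qsub -q cpu my_job.pbs          # production run; routed to normal, serial, or long
  qsub -q debug_cpu my_job.pbs    # short test or debugging run only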

CPU/GPU jobs

Big Red II has the following queues configured to accept jobs that will run on the 16-core Opteron/NVIDIA (CPU/GPU) nodes:

  • gpu: The main execution queue for jobs on the CPU/GPU nodes
  • opengl: An execution queue reserved for OpenGL jobs
  • cpu16: An execution queue that lets non-GPU jobs run on the CPU/GPU nodes
  • debug_gpu: An execution queue reserved for testing and debugging CPU/GPU codes

Maximum values for each queue are defined as follows:

16-core Opteron/NVIDIA (CPU/GPU) nodes
  Execution queue   Cores/node   Nodes/job   Wall time/job   Nodes/user
  gpu               16           256         7 days          384
  opengl            16           256         7 days          384
  cpu16             16           32          7 days          384
  debug_gpu         16           4           1 hour          4
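
For example, a single-node job in the gpu queue might use a script like the following sketch (the executable name, walltime, and process count are placeholders; each XK7 node provides 16 CPU cores and one K20 GPU):

  #!/bin/bash
  # gpu_job.pbs -- run a GPU-accelerated application on one CPU/GPU node
  #PBS -q gpu
  #PBS -l nodes=1:ppn=16
  #PBS -l walltime=04:00:00
  cd $PBS_O_WORKDIR
  aprun -n 1 ./my_gpu_app
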
Note:
To best meet the needs of all research projects affiliated with Indiana University, UITS Research Technologies administers the batch job queues on IU's research supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, contact UITS Research Technologies.

Request single user time

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request "single user time" or ask for more information, contact UITS Research Technologies.

Acknowledge grant support

The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.

Get help

For an overview of Big Red II documentation, see Get started on Big Red II.

Support for IU research computing systems, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

This is document bcqt in the Knowledge Base.
Last modified on 2019-09-19 15:33:11.

Contact us

For help or to comment, email the UITS Support Center.