Big Red II at Indiana University
On this page:
- System overview
- System information
- System access
- Available software
- Running jobs on Big Red II
- Documentation and training
System overview
Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and enable effective analysis of large, complex data sets (i.e., big data).
Big Red II was officially dedicated on April 26, 2013, and entered full production on August 7, 2013. Its predecessor, Big Red, was decommissioned and powered down at the end of September 2013.
Although Big Red II is a local resource for use by the IU community, its presence has an important effect on the national cyberinfrastructure ecosystem, including the Extreme Science and Engineering Discovery Environment (XSEDE). By providing IU researchers the same technology, software environment, and hybrid architecture used in national supercomputer resources, such as Titan at Oak Ridge National Laboratory (ORNL) and Blue Waters at the National Center for Supercomputing Applications (NCSA), Big Red II meets relatively modest scientific computing needs locally, allowing larger national supercomputing assets to be efficiently used on challenging compute- and data-intensive projects. Big Red II also helps conserve nationally funded supercomputing assets by providing a powerful hybrid system IU scientists can use to fully optimize and tune their applications before migrating them to an XSEDE digital service, such as Kraken at the National Institute for Computational Sciences (NICS).
Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:
- 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM
- 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM
All compute nodes are connected through the Cray Gemini interconnect.
The scheduled monthly maintenance window for Big Red II is 7am-7pm on the first Tuesday of each month.
System information

| System configuration | Aggregate information | Per node |
| --- | --- | --- |
| Machine type | Hybrid (x86_64/NVIDIA Kepler) | |
| Operating system | Cray Linux Environment (based on SUSE Linux SLES 11) | |
| Processor cores | 21,824 | 32 (compute); 16 (GPU) |
| Rmax | 596.4 teraFLOPS | 0.884 teraFLOPS |
| Rpeak | 1.006 petaFLOPS | 320 gigaFLOPS (compute); 1,317 gigaFLOPS (GPU) |
| RAM | 43,648 GB | 64 GB (compute); 32 GB (GPU) |
| Storage | 180 TB local scratch | |
| File systems | Lustre (Data Capacitor II/DCWAN), NFSv3 | |
Note: Indiana University is in the process of replacing its current Data Capacitor storage system with Data Capacitor II (DC2), a high-speed, high-capacity storage facility for very large data sets.
System access
Access is available to IU graduate students, faculty, and staff. Undergraduates and non-IU collaborators must have IU faculty sponsors. For details, see the "Research system accounts (all campuses)" section of What computing accounts are available at IU, and for whom?
Once your account is created, you may use your IU username and passphrase to log into Big Red II (bigred2.uits.iu.edu) with any SSH2 client. Public key authentication is permitted on Big Red II; see In SSH and SSH2 for Unix, how do I set up public key authentication?
Big Red II supports file transfer via SCP and SFTP; see At IU, what SSH/SFTP clients are supported and where can I get them?
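For example, from a Unix-like workstation you can log in and transfer files with the standard OpenSSH tools (the filename below is illustrative; replace username with your IU username):

```shell
# Log into Big Red II with an SSH2 client
ssh username@bigred2.uits.iu.edu

# Copy a local file to your Big Red II home directory via SCP
scp data.txt username@bigred2.uits.iu.edu:~/

# Or start an interactive SFTP session
sftp username@bigred2.uits.iu.edu
```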
Available software
Software installed on Big Red II is available to users via the Modules environment management system.
Modules is a command-line interface that provides commands for setting and modifying shell environment variables. These environment variables define values used by both the shell and the programs you execute on the shell command line.
The Modules environment management package simplifies the management of environment variables associated with various software packages, and lets you automatically modify environment variables as needed when switching between software packages.
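A typical Modules session looks like the following sketch; the package names shown (e.g., the Cray PrgEnv modules) are illustrative, and you should run module avail to see what is actually installed:

```shell
module avail                         # list software packages available on the system
module load gcc                      # set the environment variables for a package
module list                          # show the modules currently loaded in your shell
module unload gcc                    # remove a package's settings from your environment
module swap PrgEnv-cray PrgEnv-gnu   # switch between Cray programming environments
```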
For more about the Modules package, see the module manual page. Additionally, see On Big Red II, Mason, Quarry, and Rockhopper at IU, how do I use Modules to manage my software environment?
For more on the IU Cyberinfrastructure Gateway, see What is the IU Cyberinfrastructure Gateway?
Running jobs on Big Red II
Important: For your application to execute on Big Red II's compute nodes, your batch job script must include the appropriate application launch command (aprun for ESM jobs; ccmrun for CCM jobs). Additionally, for CCM jobs, you must load the ccm module (add module load ccm to your startup file) and use the -l gres=ccm TORQUE directive in your job script. TORQUE scripts for batch jobs on Quarry or Mason will not work on Big Red II without the proper modifications. If your script's executable line does not begin with the appropriate launch command, your application will execute on an aprun service node, not a compute node, and will likely cause a service disruption for all users on the system. The aprun nodes are shared by all currently running jobs, and are intended only for passing job requests. Any memory- or computationally intensive jobs running on the aprun nodes will be terminated.
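As a minimal sketch, an ESM batch script might look like the following; the queue name, node counts, walltime, and executable name are illustrative assumptions, not values prescribed by this document:

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=32,walltime=00:30:00   # illustrative resource request
#PBS -N esm_example

cd $PBS_O_WORKDIR

# aprun launches the executable on the compute nodes;
# -n specifies the total number of processing elements.
# Without aprun, the program would run on a shared aprun
# service node and risk termination.
aprun -n 64 ./my_mpi_program
```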
IU researchers can execute jobs in two environments on Big Red II: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Software optimized for a Cray environment (e.g., AMBER, OpenFOAM, and NAMD) will usually run in the ESM environment. If you compile your own applications, you should target the ESM environment first. Applications developed to run on a standard Linux cluster or on a single server (e.g., MATLAB or Ansys) can run in the CCM environment. In general, the CCM environment will support any standard Linux application. For details, see:
- Compiling C, C++, and Fortran programs on Big Red II at IU
- How do I run batch jobs on Big Red II at IU?
- On Big Red II at IU, how do I run OpenMP or hybrid OpenMP/MPI jobs?
- On Big Red II at IU, how do I use PCP to bundle multiple serial jobs to run them in parallel?
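For comparison, a CCM batch script combines the ccm module, the -l gres=ccm directive, and the ccmrun launcher described above; this is a hedged sketch with illustrative resource values and program name:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=32,walltime=01:00:00   # illustrative resource request
#PBS -l gres=ccm                           # required TORQUE directive for CCM jobs
#PBS -N ccm_example

module load ccm
cd $PBS_O_WORKDIR

# ccmrun launches a standard Linux application in
# Cluster Compatibility Mode on the compute nodes
ccmrun ./my_standard_linux_program
```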
To highlight the performance benefits of Big Red II, UITS provides performance comparisons based on common benchmarks and applications; see BRII Comparisons.
Documentation and training
For an overview of Big Red II documentation, see Getting started on Big Red II.
For tutorials and workshops on how to effectively use Big Red II's hybrid architecture (particularly, how to use GPUs and identify compatible software), see Cyberinfrastructure Training and InfoShares on the UITS Research Technologies web site.
- If you have system-specific questions about Big Red II, Quarry, Mason, or the Research Database Complex (RDC), email the High Performance Systems team.
- If you have questions about compilers, programming, scientific and numerical libraries, or debuggers on a research computing system, email the Scientific Applications and Performance Tuning team.
- If you have questions about statistical and mathematical software on any of the research computing systems, email the Research Analytics group.
- If you have questions about shared scratch space on the Data Capacitor, email the High Performance File Systems team.
- If you have questions about the Research File System (RFS) or Scholarly Data Archive (SDA), email the Research Storage team.
To ask any other question about Research Technologies systems and services, use the Request help or information form.
Last modified on November 01, 2013.