
Big Red II at Indiana University

System overview


Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and enable effective analysis of large, complex data sets (i.e., big data).

Big Red II was officially dedicated on April 26, 2013, and entered full production on August 7, 2013. Its predecessor, Big Red, was decommissioned and powered down at the end of September 2013.

Although Big Red II is a local resource for use by the IU community, its presence has an important effect on the national cyberinfrastructure ecosystem, including the Extreme Science and Engineering Discovery Environment (XSEDE). By providing IU researchers the same technology, software environment, and hybrid architecture used in national supercomputer resources, such as Titan at Oak Ridge National Laboratory (ORNL) and Blue Waters at the National Center for Supercomputing Applications (NCSA), Big Red II meets relatively modest scientific computing needs locally, allowing larger national supercomputing assets to be used efficiently on challenging compute- and data-intensive projects. Big Red II also helps conserve nationally funded supercomputing assets by providing a powerful hybrid system IU scientists can use to fully optimize and tune their applications before migrating them to an XSEDE digital service, such as Kraken at the National Institute for Computational Sciences (NICS).


System information

Note: The scheduled monthly maintenance window for Big Red II is the first Tuesday of each month, 7am-7pm.

Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:

  • 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM
  • 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM

Big Red II runs a proprietary variant of Linux called the Cray Linux Environment (CLE). In CLE, the compute nodes run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Linux Enterprise Server (SLES). All compute nodes are connected through the Cray Gemini interconnect.

System summary

  Machine type        Hybrid (x86_64 CPUs/NVIDIA Kepler GPUs)
  Operating system    Cray Linux Environment (based on SUSE Linux Enterprise Server 11)
  Memory model        Distributed
  Nodes               1,020

Computational system details

                      Total          Per CPU-only node   Per CPU/GPU node
  CPUs                1,364          2                   1
  Processor cores     21,824         32                  16
  RAM                 43,648 GB      64 GB               32 GB

LINPACK Benchmark performance

                      Total          Per node
  Rmax                596.4 TFLOPS   0.884 TFLOPS
  Rpeak               1.006 PFLOPS   320 GFLOPS (CPU-only node); 1,317 GFLOPS (CPU/GPU node)


File storage options

Note: Before storing data on this system, make sure you understand the information in the Working with ePHI research data section (below).

You can store data in your home directory or in scratch space:

  • Home directory: Your Big Red II home directory disk space is allocated on a network-attached storage (NAS) device. You have a 100 GB disk quota, which is shared (if applicable) with your accounts on Karst, Mason, and the Research Data Complex (RDC).

    The path to your home directory is (replace username with your Network ID username):

      /N/u/username/BigRed2

    IU graduate students, faculty, and staff who need more than 100 GB of permanent storage can apply for accounts on the Research File System (RFS) and the Scholarly Data Archive (SDA). See At IU, how can I apply for an account on the SDA or RFS?

  • Shared scratch: Once you have an account on one of the UITS research computing systems, you also have access to 3.5 PB of shared scratch space.

    Shared scratch space is hosted on the Data Capacitor II (DC2) file system. The DC2 scratch directory is a temporary workspace. Scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The DC2 file system is mounted on IU research systems as /N/dc2/scratch and behaves like any other disk device. If you have an account on an IU research system, you can access /N/dc2/scratch/username (replace username with your IU Network ID username). Access to /N/dc2/projects requires an allocation. For details, see The Data Capacitor II and DC-WAN high-speed file systems at Indiana University. Files in shared scratch space more than 60 days old are periodically purged, following user notification.

For more, see At IU, how much disk space is available to me on the research computing systems?
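As a quick illustration of how the home directory and scratch paths fit together, the shell commands below stage data in DC2 scratch and copy results back to permanent storage. This is only a sketch: the directory and file names are hypothetical, and username stands for your IU Network ID username.

  # Work in your DC2 scratch directory (temporary space, subject to the 60-day purge)
  cd /N/dc2/scratch/username
  mkdir -p myproject                              # hypothetical working directory
  cp /N/u/username/BigRed2/input.dat myproject/   # stage input from your home directory

  # After your job completes, copy results back to permanent storage
  cp myproject/results.dat /N/u/username/BigRed2/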


Working with ePHI research data

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of personal health data. The HIPAA Security Rule set national standards specifically for the security of protected health information (PHI) that is created, stored, transmitted, or received electronically (i.e., electronic protected health information, or ePHI). To ensure the confidentiality, integrity, and availability of ePHI data, the HIPAA Security Rule requires organizations and individuals to implement a series of administrative, physical, and technical safeguards when working with ePHI data.

Although you can use this system for processing or storing electronic protected health information (ePHI) related to official IU research:

  • You and/or the project's principal investigator (PI) are responsible for ensuring the privacy and security of that data, and complying with applicable federal and state laws/regulations and institutional policies. IU's policies regarding HIPAA compliance require the appropriate Institutional Review Board (IRB) approvals and a data management plan.
  • You and/or the project's PI are responsible for applying the HIPAA-required administrative, physical, and technical safeguards to any person, process, application, or service used to collect, process, manage, analyze, or store ePHI data.

The UITS Advanced Biomedical IT Core (ABITC) provides consulting and online help for Indiana University researchers who need assistance securely processing, storing, and sharing ePHI research data. If you have questions about managing HIPAA-regulated data at IU, contact the ABITC. For additional details about HIPAA compliance at IU, see HIPAA & ABITC and the Office of Vice President and General Counsel (OVPGC) HIPAA Privacy & Security page.

Important: Although UITS HIPAA-aligned resources are managed using standards that surpass the official standards for managing institutional data at IU, and are therefore appropriate for storing HIPAA-regulated ePHI research data, they are not recognized by the IU Committee of Data Stewards as appropriate for storing institutional data elements classified as Critical that are not ePHI data. For help determining which institutional data elements classified as Critical are considered ePHI, see Which data elements in the classifications of institutional data are considered protected health information (PHI)?

The IU Committee of Data Stewards and the University Information Policy Office (UIPO) set official classification levels and data management standards for institutional data in accordance with the university's Management of Institutional Data (DM-01) policy. If you have questions about the classifications of institutional data, contact the appropriate Data Steward. To determine the most sensitive classification of institutional data you can store on any given UITS service, see the "Choosing an appropriate storage solution" section of At IU, which dedicated file storage services and IT services with storage components are appropriate for sensitive institutional data, including ePHI research data?

Note: In accordance with standards for access control mandated by the HIPAA Security Rule, you are not permitted to access ePHI data using a group (or departmental) account. To ensure accountability and enable only authorized users to access ePHI data, IU researchers must use their personal Network ID credentials for all work involving ePHI data.


System access

Access is available to IU graduate students, faculty, and staff. Undergraduates and non-IU collaborators must have IU faculty sponsors. For details, see the "Research system accounts (all campuses)" section of What computing accounts are available at IU, and for whom?

Once your account is created, you may use your IU username and passphrase to log into Big Red II (bigred2.uits.iu.edu) with any SSH2 client. Public key authentication is permitted on Big Red II; see How do I set up SSH public-key authentication to connect to a remote system?

Big Red II supports file transfer via SCP and SFTP; see At IU, what SSH/SFTP clients are supported and where can I get them?
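For example, from a terminal with OpenSSH installed, you could log in and copy a file to your home directory as follows (a sketch; replace username with your IU username, and the file name is hypothetical):

  ssh username@bigred2.uits.iu.edu
  scp mydata.tar.gz username@bigred2.uits.iu.edu:/N/u/username/BigRed2/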


Setting up your user environment

On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.

Modules is a command-line interface that provides commands for setting and modifying shell environment variables. These environment variables define values used by both the shell and the programs you execute on the shell command line.

The Modules environment management package simplifies the management of environment variables associated with various software packages, and lets you automatically modify environment variables as needed when switching between software packages.

Some common Modules commands include:

  • module avail: List all software packages available on the system.

  • module avail package: List all versions of package available on the system; for example:

      module avail openmpi

  • module list: List all packages currently added to your user environment.

  • module load package: Add the default version of the package to your user environment; for example:

      module load intel

  • module load package/version: Add the specified version of the package to your user environment; for example:

      module load intel/11.1

  • module unload package: Remove the specified package from your user environment.

  • module swap package_A package_B: Swap the loaded package (package_A) with another package (package_B). This is synonymous with:

      module switch package_A package_B

  • module show package: Show the changes loading the specified package makes to your user environment (e.g., environment variables set, library paths added). This is synonymous with:

      module display package

To make permanent changes to your environment, edit your ~/.modules file. For more, see In Modules, how do I save my environment with a .modules file?
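For example, a minimal ~/.modules file might contain nothing more than the module commands you want applied automatically at each login; the packages shown here are purely illustrative:

  # ~/.modules: module commands in this file run at each login
  module load intel
  module load openmpi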

For more about the Modules package, see the module manual page and the modulefile manual page. Additionally, see On Big Red II, Karst, Mason, and Rockhopper at IU, how do I use Modules to manage my software environment?


Available software

The Scientific Applications and Performance Tuning (SciAPT) team maintains the software on Big Red II and the other research computing systems at IU.

For a list of packages available on Big Red II, see Big Red II Modules in the IU Cyberinfrastructure Gateway. Alternatively, you can log into your Big Red II account and enter the module avail command.

Big Red II users can request software using the Research Systems Software Request form.


Running jobs on Big Red II

IU researchers can execute jobs in two environments on Big Red II: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Software optimized for a Cray environment (e.g., AMBER, OpenFOAM, and NAMD) will usually run in the ESM environment. If you compile your own applications, you should target the ESM environment first. Applications developed to run on a standard Linux cluster or on a single server (e.g., MATLAB or Ansys) can run in the CCM environment. In general, the CCM environment will support any standard Linux application.

Big Red II uses the TORQUE resource manager (based on OpenPBS) and the Moab Workload Manager to manage and schedule jobs. TORQUE job scripts must be tailored specifically for the Cray Linux Environment on Big Red II; see How do I run batch jobs on Big Red II at IU? Moab uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and on using Moab to check the status of batch jobs, see the related Knowledge Base documents.
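For example, after submitting a job script you can check on it with standard TORQUE and Moab commands (a sketch; the script name and job ID are hypothetical, and username is your IU username):

  qsub my_job.pbs          # submit the script; TORQUE returns a job ID (e.g., 123456)
  qstat -u username        # TORQUE: list your queued and running jobs
  showq -u username        # Moab: show the state and priority of your jobs
  checkjob 123456          # Moab: detailed status for a specific job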

Important: For your application to execute on Big Red II's compute nodes, your batch job script must include the appropriate application launch command (aprun for ESM jobs; ccmrun for CCM jobs). Additionally, for CCM jobs, you must load the ccm module (add module load ccm to your ~/.modules file) and use the -l gres=ccm TORQUE directive in your job script. TORQUE scripts for batch jobs on Karst or Mason will not work on Big Red II without the proper modifications. If your script's executable line does not begin with the appropriate launch command, your application will execute on an aprun service node instead of a compute node, and will likely cause a service disruption for all users on the system. The aprun nodes are shared by all currently running jobs and are intended only for passing job requests to the compute nodes. Any memory- or computationally-intensive jobs running on aprun nodes will be terminated.
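For orientation only, here is a minimal sketch of an ESM job script. The queue, node/core counts, wall time, and executable name are illustrative assumptions, not required values; see the Knowledge Base document referenced above for the exact directives Big Red II expects:

  #!/bin/bash
  #PBS -q cpu
  #PBS -l nodes=1:ppn=32
  #PBS -l walltime=01:00:00

  cd $PBS_O_WORKDIR
  aprun -n 32 ./my_esm_app        # aprun launches the executable on the compute nodes

A CCM job script follows the same pattern but adds the CCM-specific pieces described in the note above (again, only a sketch):

  #!/bin/bash
  #PBS -q cpu
  #PBS -l nodes=1:ppn=32
  #PBS -l walltime=01:00:00
  #PBS -l gres=ccm                # TORQUE directive required for CCM jobs

  cd $PBS_O_WORKDIR
  module load ccm                 # the ccm module must be loaded (or added to ~/.modules)
  ccmrun ./my_linux_app           # ccmrun launches a standard Linux application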

To highlight the performance benefits of Big Red II, UITS provides performance comparisons based on common benchmarks and applications; see BRII Comparisons.


Queue information

Big Red II has the following queues configured to accept jobs:

  • cpu: A routing queue for jobs that will run on the 32-core dual-Opteron (CPU-only) nodes; jobs submitted to the cpu routing queue are placed in the normal, long, or serial queue based on their resource requirements
  • gpu: For jobs that will run on the 16-core Opteron/NVIDIA (CPU/GPU) nodes
  • debug_cpu: For CPU-only jobs with a maximum wall time of one hour and a maximum node count of four
  • preempt: Allows non-GPU jobs to run on the CPU/GPU nodes, but preempts the non-GPU job with the lowest accrued wall time whenever a GPU job is ready to dispatch and no other GPU-enabled nodes are available; consequently, non-GPU jobs in this queue may dispatch multiple times before running to completion
  • debug_gpu: For CPU/GPU jobs with a maximum wall time of one hour and a maximum node count of four

Maximum values for each queue are defined as follows:

32-core dual-Opteron (CPU-only) nodes

  Execution queue   Nodes   Cores/job   Nodes/job   Wall time/job   Nodes/user
  normal*           340     2,048       128         2 days          128
  serial*           340     32          1           7 days          64
  long*             128     256         8           14 days         32
  debug_cpu         4       64          2           1 hour          2

  * Jobs submitted to the routing queue (cpu) are routed to the normal, serial, and long execution queues based on resource requirements.

16-core Opteron/NVIDIA (CPU/GPU) nodes

  Execution queue   Nodes   Cores/job   Nodes/job   Wall time/job   Nodes/user
  gpu               672     2,048       128         2 days          128
  preempt           672     1,024       64          2 days          676
  debug_gpu         4       32          4           1 hour          4
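You can also inspect the queues and their configured limits directly from a Big Red II login session using standard TORQUE commands (a sketch; the queue name in the second command is just an example):

  qstat -q          # summary of all queues, including wall-time limits and job counts
  qstat -Q -f gpu   # full configuration details for a single queue (here, gpu)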

Note: To best meet the needs of all research projects affiliated with Indiana University, the High Performance Systems (HPS) team administers the batch job queues on UITS Research Technologies supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, fill out and submit the Research Technologies Ask RT for Help form (for "Help Needed", select High Performance Systems job or queue help).


Requesting single user time

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month, during each system's regularly scheduled maintenance window, to accommodate IU researchers whose tasks require dedicated access to an entire system. To request single user time on one of IU's research computing systems, fill out and submit the Research Technologies Ask RT for Help form (for "Help Needed", select Request to run jobs in single user time on HPS systems). If you have questions about single user time on IU research computing systems, email the HPS team.


Acknowledging grant support

The Indiana University cyberinfrastructure managed by the Research Technologies division of UITS is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see If I use IU's research cyberinfrastructure, what sources of funding do I need to acknowledge in my published work?


Documentation and training

For an overview of Big Red II documentation, see Getting started on Big Red II.

For tutorials and workshops on how to effectively use Big Red II's hybrid architecture (particularly, how to use GPUs and identify compatible software), see Cyberinfrastructure Training and InfoShares on the UITS Research Technologies web site.


Support

Support for research computing systems at Indiana University is provided by various units within the Systems area of the Research Technologies division of UITS.

To ask any other question about Research Technologies systems and services, use the Request help or information form.


This is document bcqt in the Knowledge Base.
Last modified on 2015-02-09.
