Big Red II+ at Indiana University

System overview

Big Red II+ is a supercomputer that complements Indiana University's Big Red II by providing an environment dedicated to large-scale, compute-intensive research. Researchers, scholars, and artists with large-scale research needs have benefited from Big Red II; these users can now take advantage of faster processing capability and networking provided by Big Red II+. The system helps support programs at the highest level of the university, such as the Grand Challenges Program.

Big Red II+ is a Cray XC30 supercomputer providing 550 compute nodes, each containing two Intel Xeon E5 12-Core x86_64 CPUs and 64 GB of DDR3 RAM. Big Red II+ has a theoretical peak performance (Rpeak) of 286 trillion floating-point operations per second (286 teraFLOPS). All compute nodes are connected through the Cray Aries interconnect.

Like Big Red II, Big Red II+ runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES). The system uses Slurm to coordinate resource management and job scheduling.

The Data Capacitor II parallel file system is mounted for temporary storage of research data. The Modules environment management package on Big Red II+ allows users to dynamically customize their shell environments.

System access

On Big Red II+, accounts are available to researchers who run existing code, or develop new code, that will scale to use at least 6,144 CPU cores. Principal Investigators (PIs) may request access to Big Red II+ by completing and submitting the Big Red II+ Access Request form.

Once your account is created, you can use any SSH2 client to access the system. Log in with your IU username and passphrase, and then confirm your identity with Duo two-step login.
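For convenience, you can store connection settings in your SSH client configuration. A minimal `~/.ssh/config` entry might look like the following; the hostname and alias shown are placeholders, since the login address is not given on this page — substitute the address provided with your account:

```
# Placeholder values -- replace with the login address and username
# from your account confirmation.
Host bigredii-plus
    HostName <login-node-address>
    User your_iu_username
```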


Available software

Big Red II+ researchers are free to install application software in their home directories. For system software or compilers, contact the UITS High Performance Systems (HPS) team.

Set up your user environment

On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.

For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research computing systems.
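Under the hood, loading a module amounts to adjusting environment variables such as `PATH` and `LD_LIBRARY_PATH` for the current shell. A rough sketch of the effect, using a hypothetical install path:

```shell
# What "module load <package>" effectively does: prepend the package's
# directories to your environment. Paths below are hypothetical.
export PATH="/opt/example-pkg/bin:$PATH"
export LD_LIBRARY_PATH="/opt/example-pkg/lib64:${LD_LIBRARY_PATH:-}"

# Typical Modules commands on the system itself (shown for reference):
#   module avail          # list available modules
#   module load <name>    # add a package to your environment
#   module list           # show currently loaded modules
echo "$PATH" | cut -d: -f1   # → /opt/example-pkg/bin
```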

Big Red II+ provides programming environments for the Cray, Intel, PGI, and GNU Compiler Collections (GCC) compilers. For information about using these compiler suites, see Compile C, C++, and Fortran programs on Big Red II and Big Red II+ at IU.
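As a minimal illustration, here is a tiny C program and the kind of command used to build it. On the Cray programming environment the compiler driver is the `cc` wrapper regardless of which compiler suite is loaded; the sketch below falls back to `gcc` so it can run anywhere:

```shell
# Write a minimal C source file.
cat > hello.c <<'EOF'
#include <stdio.h>

int main(void) {
    printf("hello from Big Red II+\n");
    return 0;
}
EOF

# On Big Red II+, the Cray wrapper is used regardless of the underlying
# compiler suite (Cray, Intel, PGI, or GNU):
#   cc hello.c -o hello
# Portable fallback for this sketch:
if command -v gcc >/dev/null 2>&1; then
    gcc hello.c -o hello && ./hello
fi
```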

File storage options

Before storing data on this system, make sure you understand the information in the Work with data containing PHI section (below).

Work with data containing PHI

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).

This UITS system or service meets certain requirements established in the HIPAA Security Rule, thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.

Although PHI is one type of Critical data, other types of institutional data classified as Critical are not permitted on Research Technologies systems. For help determining which institutional data elements classified as Critical are considered PHI, see About protected health information (PHI) data elements in the classifications of institutional data.

For more, see Your legal responsibilities for protecting data containing protected health information (PHI) when using UITS Research Technologies systems and services.

UITS provides consulting and online help for Indiana University researchers, faculty, and staff who need help securely processing, storing, and sharing data containing protected health information (PHI). If you have questions about managing HIPAA-regulated data at IU, contact UITS HIPAA Consulting. To learn more about properly ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. For additional details about HIPAA compliance at IU, see HIPAA Privacy and Security Compliance.

Run jobs on Big Red II+

Big Red II+ uses the Slurm workload manager; for more, see Use Slurm to submit and manage jobs on high-performance computing systems.

Job scripts must be tailored specifically for the Cray Linux Environment on Big Red II+.
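As an illustration, a minimal Slurm batch script for this system might look like the following. The job name, node counts, and walltime are placeholders, and `my_mpi_program` is a hypothetical executable; on the Cray Linux Environment, parallel executables are launched with `srun`. The sketch writes the script to a file so it is self-contained:

```shell
# Write a sample Slurm batch script (resource values are illustrative).
cat > sample_job.sh <<'EOF'
#!/bin/bash
#SBATCH -J my_job              # job name
#SBATCH -p workq               # default partition on Big Red II+
#SBATCH --nodes=2              # number of compute nodes
#SBATCH --ntasks-per-node=24   # one task per core (2 x 12-core CPUs)
#SBATCH --time=01:00:00        # walltime limit (HH:MM:SS)

# On the Cray Linux Environment, launch parallel work with srun.
srun ./my_mpi_program
EOF

# Submit with:  sbatch sample_job.sh
```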

Job scheduling uses Slurm's multifactor job priority algorithm. The following factors determine a job's priority:

  • QoS (Quality of Service): Factor associated with logical user groups (for example, Grand Challenge projects)
  • Age: Length of time a job has been waiting (eligible to be scheduled)

Through the QoS factor, users in the Grand Challenge groups receive a base priority increase at job submission. Through the age factor, all users receive a job priority increase scaled over a seven-day window (the older the job, the higher its priority).
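The combined effect can be sketched numerically. The weights below are hypothetical (actual values are configured site-wide by the Slurm administrators); the age factor simply scales linearly to its maximum over the seven-day window:

```shell
# Hypothetical weights for illustration only; the real multifactor
# plugin weights are configured site-wide.
age_days=3
awk -v age="$age_days" 'BEGIN {
    qos_factor = 1.0                        # e.g., Grand Challenge QoS
    age_factor = (age > 7) ? 1.0 : age / 7  # caps at 1.0 after 7 days
    priority   = 1000 * qos_factor + 500 * age_factor
    printf "%.0f\n", priority               # → 1214
}'
```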

Partition (queue) information

In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. The following partitions are available on Big Red II+:

Partition          Number of nodes
---------          ---------------
workq (default)    548
interactive        2

To view details about Big Red II+ partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on high-performance computing systems.

Acknowledge grant support

The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.


Support for IU research computing systems, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

This is document aoku in the Knowledge Base.
Last modified on 2019-06-25 17:40:14.

Contact us

For help or to comment, email the UITS Support Center.