About Big Red 3 at Indiana University

System overview

Big Red 3 is a Cray XC40 supercomputer dedicated to researchers, scholars, and artists with large-scale, compute-intensive applications that can take advantage of the system's extreme processing capability and high-bandwidth network topology. Big Red 3 supports programs at the highest level of the university, including the Grand Challenges program.

Featuring 930 dual-socket compute nodes equipped with Intel Haswell Xeon processors (22,464 compute cores), Big Red 3 has a theoretical peak performance (Rpeak) of 934 trillion floating-point operations per second (934 teraFLOPS). Big Red 3 runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES). Big Red 3 uses Slurm to coordinate resource management and job scheduling.

System access

IU graduate students, faculty, and staff can create Big Red 3 accounts using the instructions in Get additional IU computing accounts. Undergraduate students and affiliates can get Big Red 3 accounts if they are sponsored by full-time IU faculty or staff members; see About Big Red 3, Big Red II, RDC, and SDA accounts for undergraduate students and sponsored affiliates at IU. Grand Challenges users who create Big Red 3 accounts can submit the Request Access to Specialized HPC Resources form to request exclusive access to a portion of the system for running jobs.

Once your account is created, you can use any SSH2 client to connect to bigred3.uits.iu.edu. When prompted, log in with your IU username and passphrase.

  • To set up SSH public-key authentication, you must submit the "SSH public-key authentication to HPC systems" user agreement (log into HPC everywhere with your IU username and passphrase), in which you agree to set a passphrase on your private key when you generate your key pair.
  • For enhanced security, SSH connections that have been idle for 60 minutes will be disconnected. To protect your data from misuse, remember to log off or lock your computer whenever you leave it.
  • The scheduled monthly maintenance window for IU's high-performance computing systems is the second Sunday of each month, 7 a.m. to 7 p.m.
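As a sketch, setting up public-key authentication and connecting from a typical client could look like the following (username is a placeholder for your IU username; ssh-copy-id assumes passphrase login is available for the initial copy):

```shell
# Generate an SSH key pair; per the user agreement, set a passphrase
# on the private key when prompted (do not leave it empty).
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Copy the public key to Big Red 3 (appends it to ~/.ssh/authorized_keys there).
ssh-copy-id -i ~/.ssh/id_ed25519.pub username@bigred3.uits.iu.edu

# Connect; with public-key authentication in place, you are prompted
# for the key's passphrase rather than your IU passphrase.
ssh username@bigred3.uits.iu.edu
```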

HPC software

The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high-performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log into the system, and then, on the command line, enter module avail.
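For example, a typical Modules session might look like the following (the package name python is illustrative; use a name that module avail actually lists on Big Red 3):

```shell
# List all software packages available through Modules.
module avail

# Add a package to your environment (illustrative package name).
module load python

# Show which modules are currently loaded.
module list

# Remove a package from your environment.
module unload python
```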

For information about adding packages to your user environment, see Use Modules to manage your software environment on IU's research computing systems.

To request software, submit the HPC Software Request form.

Set up your user environment

On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.

For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research computing systems.

Big Red 3 provides programming environments for the Cray, Intel, PGI, and GNU Compiler Collections (GCC) compilers. For information about using these compiler suites, see Compile C, C++, and Fortran programs on Big Red II or Big Red 3 at IU.
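On Cray systems, the compiler wrapper commands cc, CC, and ftn invoke whichever compiler suite's PrgEnv module is currently loaded, so switching suites is a module swap rather than a change to your build commands. A minimal sketch, assuming the standard Cray PrgEnv-* module names:

```shell
# Switch from the default Cray compilers to the GNU suite.
module swap PrgEnv-cray PrgEnv-gnu

# The cc, CC, and ftn wrappers now invoke gcc, g++, and gfortran,
# automatically linking the Cray system libraries (e.g., MPI).
cc  -O2 -o hello_c   hello.c
CC  -O2 -o hello_cpp hello.cpp
ftn -O2 -o hello_f   hello.f90
```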

File storage options


Big Red 3 is not currently cleared for work involving data that contain protected health information (PHI).

UITS provides consulting and online help for Indiana University researchers, faculty, and staff who need to securely process, store, and share data containing protected health information (PHI). If you have questions about managing HIPAA-regulated data at IU, contact UITS HIPAA Consulting. To learn more about ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. For additional details about HIPAA compliance at IU, see HIPAA Privacy and Security Compliance.

Run jobs on Big Red 3

Big Red 3 uses the Slurm workload manager; for more, see Use Slurm to submit and manage jobs on high-performance computing systems.
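As a minimal sketch, a Slurm batch script might look like the following (the partition name general and program name my_mpi_program are placeholders; list the real partitions with sinfo):

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # job name shown in the queue
#SBATCH --partition=general      # placeholder; check sinfo for real partition names
#SBATCH --nodes=1                # number of compute nodes
#SBATCH --ntasks-per-node=24     # tasks (e.g., MPI ranks) per node
#SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)

# srun launches the executable on the allocated compute nodes.
srun ./my_mpi_program
```

Submit the script with sbatch myjob.sh, and monitor it with squeue -u $USER.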

Partition (queue) information

In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. To view details about Big Red 3 partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on high-performance computing systems.
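For example (the node name nid00001 is illustrative):

```shell
# Summarize partitions: name, availability, time limit, and node states.
sinfo

# Condensed view: one line per partition with an aggregate node-state count.
sinfo -s

# Detailed, node-oriented listing for a specific node.
sinfo -N -l -n nid00001
```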

Acknowledge grant support

The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also improves the chances that IU's research community will secure grant funding in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.

Get help

Support for IU research computing systems, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

This is document aoku in the Knowledge Base.
Last modified on 2019-12-03 10:10:33.

Contact us

For help or to comment, email the UITS Support Center.