About Big Red 3 at Indiana University

System overview

Big Red 3 is a Cray XC40 supercomputer dedicated to researchers, scholars, and artists with large-scale, compute-intensive applications that can take advantage of the system's extreme processing capability and high-bandwidth network topology. Big Red 3 supports programs at the highest level of the university, including the Grand Challenges program.

Featuring 930 dual-socket compute nodes equipped with Intel Haswell Xeon processors (22,464 compute cores), Big Red 3 has a theoretical peak performance (Rpeak) of 934 trillion floating-point operations per second (934 teraFLOPS). Big Red 3 runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES). Big Red 3 uses Slurm to coordinate resource management and job scheduling.

System access

IU students, faculty, staff, and affiliates can create Big Red 3 accounts using the instructions in Get additional IU computing accounts. Grand Challenges users who create Big Red 3 accounts can submit the Request Access to Specialized HPC Resources form to request priority access to the system for running jobs.

Once your account is created, you can use any SSH2 client to access bigred3.uits.iu.edu. Sign in with your IU username and passphrase via IU Login.
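For example, from a terminal with an OpenSSH client (replace username with your IU username):

```shell
# Connect to the Big Red 3 login node; you will be prompted
# to authenticate with your IU passphrase
ssh username@bigred3.uits.iu.edu
```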

HPC software

The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log into the system, and then, on the command line, enter module avail.

For information about adding packages to your user environment, see Use Modules to manage your software environment on IU's research supercomputers.
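As a quick sketch, a typical Modules session looks like the following (the package name here is illustrative; run module avail on the system to see what is actually installed):

```shell
# List all software packages available on the system
module avail

# Add a package to your environment (example name; actual module
# names and versions vary by system)
module load python

# Show which modules are currently loaded
module list

# Remove a package from your environment
module unload python
```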

To request software, submit the HPC Software Request form.

Set up your user environment

On the research supercomputers at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.

For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research supercomputers.

Big Red 3 provides programming environments for the Cray, Intel, PGI, and GNU Compiler Collections (GCC) compilers. For information about using these compiler suites, see Compile C, C++, and Fortran programs on Big Red 3 at IU.
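On Cray systems, compilers are conventionally invoked through the Cray wrapper commands (cc, CC, and ftn), and you switch compiler suites by swapping PrgEnv modules. A hedged sketch (module names follow common Cray conventions; confirm the exact names with module avail):

```shell
# Swap the default Cray compiler suite for the GNU suite
# (PrgEnv-intel and PrgEnv-pgi are the usual names for the others)
module swap PrgEnv-cray PrgEnv-gnu

# Compile C, C++, and Fortran code through the Cray wrappers,
# which select the loaded compiler suite automatically
cc  -O2 -o hello_c   hello.c
CC  -O2 -o hello_cpp hello.cpp
ftn -O2 -o hello_f   hello.f90
```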

File storage options

Before storing data on this system, make sure you understand the information in the Work with data containing PHI section (below).
Note:
Former Big Red II+ account holders who create Big Red 3 accounts are responsible for migrating their home directory files from Big Red II+ (/N/u/username/BR2Plus) to Big Red 3 (/N/u/username/BigRed3). UITS will archive the BR2Plus home directory on October 11, 2020, during the regularly scheduled monthly maintenance window.

Work with data containing PHI

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).

This UITS system or service meets certain requirements established in the HIPAA Security Rule thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.

For more, see Your legal responsibilities for protecting data containing protected health information (PHI) when using UITS Research Technologies systems and services.

Note:
Although PHI is classified as Critical data, other types of institutional data classified as Critical are not permitted on Research Technologies systems. For help determining which institutional data elements classified as Critical are considered PHI, see About protected health information (PHI) data elements in the classifications of institutional data.

SecureMyResearch, a joint initiative of Indiana University's Center for Applied Cybersecurity Research (CACR), OVPIT Information Security, and UITS Research Technologies, provides self-service resources and one-on-one consulting to help IU researchers, faculty, and staff meet cybersecurity and compliance requirements for processing, storing, and sharing research data, including PHI. If you have questions about securing HIPAA-regulated research data at IU, email securemyresearch@iu.edu. To learn more about properly ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. To learn about division of responsibilities for securing PHI, see Shared responsibility model for securing PHI on UITS systems.

Run jobs on Big Red 3

Big Red 3 uses the Slurm workload manager; for more, see Use Slurm to submit and manage jobs on high performance computing systems.

Notes:
  • Big Red 3 jobs are allocated memory based on task count (--ntasks-per-node). Each task receives 1.2 GB of memory (for example, four tasks will receive 4.8 GB, and so on). If needed, users may request additional memory using the --mem flag. The maximum memory you can request on a node is 58 GB. For example, in a batch script:
    #SBATCH --mem=58G
    

    For an interactive job request:

    srun -p general -N 1 --ntasks-per-node=4 --mem=4G --time=1:00:00 --pty bash
    
  • Each Big Red 3 compute node has 24 physical CPU cores, which appear as 48 logical cores with hyper-threading enabled. Compute nodes can be shared among your own jobs when a job does not request all the cores on a node. If none of the jobs has unique resource constraints, such as special memory requirements, Slurm automatically stacks up to 24 jobs on one compute node. For example, if you submit four jobs, each requesting 12 cores, Slurm will place them on the same node. To prevent this behavior, you can request all the cores on a node by adding the following directives to your Slurm job script:
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=48
    

    If you request all 48 cores on a node but want to use fewer cores for your run, use the -n flag to specify that in your srun command; for example:

    srun -n 24 a.out 
    

    Otherwise, srun will launch the application with 48 cores.
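Putting the notes above together, a minimal batch script might look like the following (the job name, partition, and walltime are illustrative; adjust them for your own job):

```shell
#!/bin/bash
#SBATCH -J my_job                # job name (illustrative)
#SBATCH -p general               # partition; run sinfo to list options
#SBATCH --nodes=1                # request one full node
#SBATCH --ntasks-per-node=48     # claim all cores so the node is not shared
#SBATCH --mem=58G                # maximum requestable memory per node
#SBATCH --time=01:00:00          # one-hour walltime limit

# Launch on 24 of the 48 requested cores; without -n, srun would
# start the application with all 48
srun -n 24 ./a.out
```

Submit the script with sbatch (for example, sbatch my_job.sh).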

Partition (queue) information

In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. To view details about Big Red 3 partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on high performance computing systems.
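For example (the partition name general is taken from the examples above; substitute any partition shown in the sinfo output):

```shell
# Summarize all partitions, their state, and node counts
sinfo

# Restrict the output to a single partition
sinfo -p general

# Show per-node details (CPUs, memory, state) for that partition
sinfo -p general -N -l
```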

Acknowledge grant support

The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.

Get help

Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

This is document aoku in the Knowledge Base.
Last modified on 2020-07-01 11:39:01.

Contact us

For help or to comment, email the UITS Support Center.