Big Red II at Indiana University
On this page:
- System overview
- System information
- File storage options
- Working with data containing PHI
- System access
- Setting up your user environment
- Available software
- Running jobs on Big Red II
- Queue information
- Requesting single user time
- Acknowledging grant support
- Documentation and training
System overview
Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and enable effective analysis of large, complex data sets (i.e., big data).
Big Red II was officially dedicated on April 26, 2013, and entered full production on August 7, 2013. Its predecessor, Big Red, was decommissioned and powered down at the end of September 2013.
Although Big Red II is a local resource for use by the IU community, its presence has an important effect on the national cyberinfrastructure ecosystem, including the Extreme Science and Engineering Discovery Environment (XSEDE). By providing IU researchers the same technology, software environment, and hybrid architecture used in national supercomputing resources, such as Titan at Oak Ridge National Laboratory (ORNL) and Blue Waters at the National Center for Supercomputing Applications (NCSA), Big Red II meets relatively modest scientific computing needs locally, allowing larger national supercomputing assets to be used efficiently on challenging compute- and data-intensive projects. Big Red II also helps conserve nationally funded supercomputing assets by providing a powerful hybrid system IU scientists can use to fully optimize and tune their applications before migrating them to an XSEDE digital service, such as Kraken at the National Institute for Computational Sciences (NICS).
Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:
- 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM
- 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM
Big Red II runs a proprietary variant of Linux called Cray Linux Environment (CLE). In CLE, compute elements run a lightweight kernel called Compute Node Linux (CNL), and the service nodes run SUSE Enterprise Linux Server (SLES). All compute nodes are connected through the Cray Gemini interconnect.
System information

Machine type: Hybrid (x86_64 CPUs/NVIDIA Kepler GPUs)

Operating system: Cray Linux Environment (based on SUSE Linux SLES 11)

| Computational system details | Total | Per node |
|---|---|---|
| Processor cores | 21,824 | 32 (compute); 16 (GPU) |
| RAM | 43,648 GB | 64 GB (compute); 32 GB (GPU) |

| LINPACK benchmark performance | Total | Per node |
|---|---|---|
| Rmax | 596.4 TFLOPS | 0.884 TFLOPS |
| Rpeak | 1 PFLOPS | 320 GFLOPS (compute); 1,317 GFLOPS (GPU) |
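The aggregate core and RAM figures follow from the node counts in the overview (344 CPU-only nodes with 32 cores and 64 GB each; 676 CPU/GPU nodes with 16 CPU cores and 32 GB each). A quick shell check:

```shell
# verify the totals from the per-node specifications
cpu_nodes=344; gpu_nodes=676
echo "cores:  $(( cpu_nodes*32 + gpu_nodes*16 ))"   # 21824
echo "RAM GB: $(( cpu_nodes*64 + gpu_nodes*32 ))"   # 43648
```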
File storage options
Note: Before storing data on this system, make sure you understand the information in the Working with data containing PHI section (below).
You can store data in your home directory or in shared scratch space:
- Home directory: Your Big Red II home directory disk space is allocated on a network-attached storage (NAS) device. You have a 100 GB disk quota, which is shared (if applicable) with your accounts on Karst, Mason, and the Research Database Complex (RDC). The path to your home directory is (replace username with your Network ID username):
IU graduate students, faculty, and staff who need more than 100 GB of permanent storage can apply for accounts on the Scholarly Data Archive (SDA). See At IU, how do I apply for an account on the SDA?
To check your quota, use the quota command. If the quota command is not loaded by default, use the module load command to add it to your environment:

module load quota
To make permanent changes to your environment, edit your ~/.modules file. For more, see In Modules, how do I save my environment with a .modules file?
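As an illustration, a minimal ~/.modules file that keeps the quota command available across logins might contain nothing more than the corresponding module load line (any other packages you add here are your own choices, not requirements):

```shell
# example ~/.modules contents, processed at login on IU research systems
module load quota
```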
- Shared scratch: Once you have an account on one of the UITS research computing systems, you also have access to 3.5 PB of shared scratch space.
Shared scratch space is hosted on the Data Capacitor II (DC2) file system. The DC2 scratch directory is a temporary workspace. Scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The DC2 file system is mounted on IU research systems as /N/dc2/scratch and behaves like any other disk device. If you have an account on an IU research system, you can access your scratch directory at /N/dc2/scratch/username (replace username with your IU Network ID username). Access to /N/dc2/projects requires an allocation. For details, see The Data Capacitor II and DC-WAN high-speed file systems at Indiana University. Files in shared scratch space may be purged if they have not been accessed for more than 60 days.
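The 60-day purge window can be checked with find's access-time test. The sketch below stages a throwaway directory rather than touching real scratch space; on Big Red II you would point find at your own /N/dc2/scratch directory instead:

```shell
# demo: identify files not accessed in more than 60 days (purge candidates)
demo=$(mktemp -d)
touch "$demo/fresh.dat"                     # accessed just now
touch -a -d "90 days ago" "$demo/stale.dat" # backdate the access time
find "$demo" -type f -atime +60             # prints only the stale file
rm -rf "$demo"
```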
Working with data containing PHI
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).
This system meets certain requirements established in the HIPAA Security Rule that enable its use for research involving data that contain protected health information (PHI). You may use this resource for research involving data that contain PHI only if you institute additional physical, administrative, and technical safeguards that complement those UITS already has in place. For more, see When using UITS Research Technologies systems and services, what are my legal responsibilities for protecting the privacy and security of data containing protected health information (PHI)? If you need help or have questions, contact UITS HIPAA Consulting.
System access
Access is available to IU graduate students, faculty, and staff. Undergraduates and non-IU collaborators must have IU faculty sponsors. For details, see the "Research system accounts (all campuses)" section of What computing accounts are available at IU, and for whom?
Once your account is created, you may use your IU username and passphrase to log into Big Red II (bigred2.uits.iu.edu) with any SSH2 client. Public key authentication is permitted on Big Red II; see How do I set up SSH public-key authentication to connect to a remote system?
Big Red II supports file transfer via SCP and SFTP; see At IU, what SSH/SFTP clients are supported and where can I get them?
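For example, from a command-line client (replace username with your IU Network ID; the file name is a placeholder):

```shell
ssh username@bigred2.uits.iu.edu               # interactive login
scp input.dat username@bigred2.uits.iu.edu:~/  # copy a file to your home directory
sftp username@bigred2.uits.iu.edu              # interactive file transfer session
```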
Setting up your user environment
On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.
Modules is a command-line interface that provides commands for setting and modifying shell environment variables. These environment variables define values used by both the shell and the programs you execute on the shell command line.
The Modules environment management package simplifies the management of environment variables associated with various software packages, and lets you automatically modify environment variables as needed when switching between software packages.
Some common Modules commands include:

| Command | Action |
|---|---|
| module avail | List all software packages available on the system. |
| module avail package | List all available versions of the specified package; for example: module avail openmpi |
| module list | List all packages currently added to your user environment. |
| module load package | Add the default version of the package to your environment; for example: module load intel |
| module load package/version | Add the specified version of the package to your environment; for example: module load intel/11.1 |
| module unload package | Remove the specified package from your environment. |
| module switch package_A package_B | Swap the loaded package (package_A) with another package (package_B). |
| module display package | Show the changes loading the specified package would make to your environment. |
To make permanent changes to your environment, edit your ~/.modules file. For more, see In Modules, how do I save my environment with a .modules file?
For more about the Modules package, see the module manual page. Additionally, see On Big Red II, Karst, and Mason at IU, how do I use Modules to manage my software environment?
Available software
For a list of packages available on Big Red II, see Big Red II Modules in the IU Cyberinfrastructure Gateway. Alternatively, you can log into your Big Red II account and enter the module avail command.
Big Red II users can request software using the Software Request form.
Running jobs on Big Red II
IU researchers can execute jobs in two environments on Big Red II: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Software optimized for a Cray environment (e.g., AMBER, OpenFOAM, and NAMD) will usually run in the ESM environment. If you compile your own applications, you should target the ESM environment first. Applications developed to run on a standard Linux cluster or on a single server (e.g., MATLAB or Ansys) can run in the CCM environment. In general, the CCM environment will support any standard Linux application.
Big Red II uses the TORQUE resource manager (based on OpenPBS) and the Moab Workload Manager to manage and schedule jobs. TORQUE job scripts must be tailored specifically for the Cray Linux Environment on Big Red II; see How do I run batch jobs on Big Red II at IU? Moab uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and using Moab to check the status of batch jobs, see:
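Once a job is submitted, the standard TORQUE and Moab status commands apply (username and the job ID below are placeholders):

```shell
qstat -u username    # TORQUE: list your queued and running jobs
showq                # Moab: show the cluster-wide queue, ordered by priority
checkjob 12345       # Moab: detailed status and priority for a single job
```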
Important: For your application to execute on Big Red II's compute nodes, your batch job script must include the appropriate application launch command (aprun for ESM jobs; ccmrun for CCM jobs). Additionally, for CCM jobs, you must load the ccm module (add module load ccm to your ~/.modules file), and use the -l gres=ccm TORQUE directive in your job script. TORQUE scripts for batch jobs on Karst or Mason will not work on Big Red II without the proper modifications. If your script's executable line does not begin with the appropriate launch command, your application will execute on an aprun service node, not a compute node, and may cause a service disruption for all users on the system. The aprun nodes are shared by all currently running jobs and are intended only for passing job requests. Any memory- or computationally intensive jobs running on aprun nodes will be terminated.
For details, see:
- Compiling C, C++, and Fortran programs on Big Red II at IU
- On Big Red II at IU, how do I run OpenMP or hybrid OpenMP/MPI jobs?
- On Big Red II at IU, how do I use PCP to bundle multiple serial jobs to run them in parallel?
- How do I run interactive jobs on Big Red II at IU?
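The requirements above can be sketched as a minimal ESM batch script; the executable name, node counts, and wall time are placeholders, not recommended values:

```shell
#!/bin/bash
# Sketch of a TORQUE batch script for Big Red II's ESM environment.
#PBS -l nodes=2:ppn=32,walltime=01:00:00
#PBS -q cpu
#PBS -N esm_example

cd "$PBS_O_WORKDIR"
# ESM jobs must launch the executable with aprun:
#   -n = total number of processes, -N = processes per node
aprun -n 64 -N 32 ./my_mpi_app

# A CCM job would instead load the ccm module, request the ccm generic
# resource (#PBS -l gres=ccm), and launch the executable with ccmrun:
#   ccmrun ./my_linux_app
```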
Queue information

Big Red II has the following queues configured to accept jobs that will run on the 32-core dual-Opteron (CPU-only) nodes:
- cpu: The routing queue for all "production" jobs; each job is routed, based on its resource requirements, to one of the execution queues (normal, serial, or long)
- debug_cpu: An execution queue reserved for testing and debugging purposes only
Maximum values for each execution queue are defined in the following table.
32-core dual-Opteron (CPU-only) nodes

| Execution queue | Nodes | Nodes/job | Cores/job | Wall time/job | Nodes/user |
|---|---|---|---|---|---|
*Do not submit jobs directly to the normal, serial, or long execution queues. Always use the cpu routing queue when submitting jobs for "production" runs. Use the debug_cpu queue for testing or debugging purposes only.
Big Red II has the following queues configured to accept jobs that will run on the 16-core Opteron/NVIDIA (CPU/GPU) nodes:
- gpu: The main execution queue for jobs on the CPU/GPU nodes
- opengl: An execution queue reserved for OpenGL jobs
- preempt: An execution queue that lets non-GPU jobs run on the CPU/GPU nodes; if no other GPU-enabled nodes are available when a GPU job is ready to dispatch, the non-GPU job with the lowest accrued wall time will be preempted (as a result, non-GPU jobs submitted to this queue may dispatch multiple times before running to completion)
- debug_gpu: An execution queue reserved for testing and debugging CPU/GPU codes
Maximum values for each queue are defined as follows:
16-core Opteron/NVIDIA (CPU/GPU) nodes

| Execution queue | Nodes | Nodes/job | Cores/job | Wall time/job | Nodes/user |
|---|---|---|---|---|---|
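Submitting to these queues follows standard TORQUE practice with qsub; a few illustrative examples (the script names are placeholders):

```shell
qsub -q cpu my_production_job.sh   # routed to normal, serial, or long as appropriate
qsub -q debug_cpu quick_test.sh    # short testing/debugging run on CPU-only nodes
qsub -q gpu my_gpu_job.sh          # job for the CPU/GPU nodes
```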
Requesting single user time
Although UITS Research Technologies cannot provide dedicated access to an entire compute system during normal operations, "single user time" is available by request one day each month, during each system's regularly scheduled maintenance window, to accommodate IU researchers with tasks that require dedicated access to an entire compute system. To request single user time, complete and submit the Research Technologies Ask RT for Help form, requesting to run jobs in single user time on HPS systems. If you have questions, email the HPS team.
Acknowledging grant support
The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see If I use IU's research cyberinfrastructure, what sources of funding do I need to acknowledge in my published work?
Documentation and training
For an overview of Big Red II documentation, see Getting started on Big Red II.
For tutorials on how to use Big Red II's hybrid architecture effectively, see Supercomputing quick start guides.
- If you have a system-specific question about Big Red II, Karst, Mason, or the Research Database Complex (RDC), contact the High Performance Systems (HPS) team.
- If you have questions about the Scholarly Data Archive (SDA), contact the Research Storage team.
- If you have questions about the Research Database Complex (RDC), contact the Research Data Services team.
- If you have questions about shared scratch or project space on the Data Capacitor II or Data Capacitor Wide Area Network (DC-WAN) file system, contact the High Performance File Systems (HPFS) team.
- If you have questions about the development tools, compilers, scientific or numerical libraries, or debuggers available on the research computing system, contact the Scientific Applications and Performance Tuning (SciAPT) team.
- If you have questions about the statistical and mathematical applications available on the research computing systems, contact the Research Analytics group.
- If you have questions about the bioinformatics and genome analysis packages available on the research computing systems, email the National Center for Genome Analysis Support (NCGAS).
For general inquiries about UITS Research Technologies systems and services, complete and submit the Research Technologies request for help form.
This is document bcqt in the Knowledge Base.
Last modified on 2016-11-15 16:23:14.