Big Red II at Indiana University

System overview


Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and enable effective analysis of large, complex data sets (i.e., big data).

Big Red II was officially dedicated on April 26, 2013, and entered full production on August 7, 2013. Its predecessor, Big Red, was decommissioned and powered down at the end of September 2013.

Although Big Red II is a local resource for use by the IU community, its presence has an important effect on the national cyberinfrastructure ecosystem, including the Extreme Science and Engineering Discovery Environment (XSEDE). By providing IU researchers with the same technology, software environment, and hybrid architecture used in national supercomputer resources, such as Titan at Oak Ridge National Laboratory (ORNL) and Blue Waters at the National Center for Supercomputing Applications (NCSA), Big Red II meets relatively modest scientific computing needs locally, allowing larger national supercomputing assets to be used efficiently on challenging compute- and data-intensive projects. Big Red II also helps conserve nationally funded supercomputing assets by providing a powerful hybrid system IU scientists can use to fully optimize and tune their applications before migrating them to an XSEDE digital service, such as Kraken at the National Institute for Computational Sciences (NICS).


System information

Note: The scheduled monthly maintenance window for Big Red II is the first Tuesday of each month, 7am-7pm.

Big Red II is a Cray XE6/XK7 supercomputer with a hybrid architecture providing a total of 1,020 compute nodes:

  • 344 CPU-only compute nodes, each containing two AMD Opteron 16-core Abu Dhabi x86_64 CPUs and 64 GB of RAM

  • 676 CPU/GPU compute nodes, each containing one AMD Opteron 16-core Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator with a single Kepler GK110 GPU, and 32 GB of RAM

All compute nodes are connected through the Cray Gemini interconnect.

System summary
Machine type                  Hybrid (x86_64 CPUs/NVIDIA Kepler GPUs)
Operating system              Cray Linux Environment (based on SUSE Linux SLES 11)
Memory model                  Distributed
Nodes                         1,020

Computational system details  Total              Per node
CPUs                          1,364              2 (CPU-only node); 1 (CPU/GPU node)
Processor cores               21,824             32 (CPU-only node); 16 (CPU/GPU node)
Rmax                          596.4 teraFLOPS    0.884 teraFLOPS
Rpeak                         1.006 petaFLOPS    320 gigaFLOPS (CPU-only node); 1,317 gigaFLOPS (CPU/GPU node)
RAM                           43,648 GB          64 GB (CPU-only node); 32 GB (CPU/GPU node)


File storage options

You can store data in your home directory or in shared scratch space:

  • Home directory: Your Big Red II home directory disk space is allocated on a NAS storage device. You have a 10 GB disk quota, which is shared (if applicable) with your accounts on Quarry, Mason, and the Research Data Complex (RDC).

    The path to your home directory is (replace username with your Network ID username):

    /N/u/username/BigRed2

    Note: IU graduate students, faculty, and staff who need more than 10 GB of permanent storage can apply for accounts on the Research File System (RFS) and the Scholarly Data Archive (SDA). See At IU, how can I apply for an account on the SDA or RFS?

  • Shared scratch: Once you have an account on one of the UITS research computing systems, you also have access to 3.5 PB of shared scratch space.

    Shared scratch space is hosted on the Data Capacitor II (DC2) file system. The DC2 scratch directory is a temporary workspace. Scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The DC2 file system is mounted on IU research systems as /N/dc2/scratch and behaves like any other disk device. If you have an account on an IU research system, you can access /N/dc2/scratch/username (replace username with your IU Network ID username); a brief example of staging data there appears after this list. Access to /N/dc2/projects requires an allocation. For details, see The Data Capacitor II and DCWAN high-speed file systems at Indiana University. Files in shared scratch space more than 60 days old are periodically purged, following user notification.

    Note: The Data Capacitor II (DC2) high-speed, high-capacity storage facility for very large data sets replaces the former Data Capacitor file system, which was decommissioned January 7, 2014. The DC2 scratch file system (/N/dc2/scratch) is mounted on Big Red II, Quarry, and Mason. Project directories on the former Data Capacitor were migrated to DC2 by UITS before the system was decommissioned. All data on the Data Capacitor scratch file system (/N/dc/scratch) were deleted when the system was decommissioned. If you have questions about the Data Capacitor's retirement, email the UITS High Performance File Systems group.
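A typical pattern is to stage job input and output in a project-specific directory under your scratch path. For example (replace username with your IU Network ID username; the directory and file names below are placeholders):

    mkdir -p /N/dc2/scratch/username/myproject           # create a working directory in shared scratch
    cp ~/input.dat /N/dc2/scratch/username/myproject/    # copy input data from your home directory

Remember that scratch is temporary space: copy results you want to keep back to your home directory, RFS, or the SDA before the 60-day purge.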

For more, see At IU, how much disk space is available to me on the research computing systems?


Working with electronic protected health information

Although this and other UITS systems and services have been approved by the IU Office of the Vice President and General Counsel (OVPGC) as appropriate for storing electronic protected health information (ePHI) regulated by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), if you use this or any other IU IT resource for work involving ePHI research data:

  • You and/or the project's principal investigator (PI) are responsible for ensuring the privacy and security of that data, and complying with applicable federal and state laws/regulations and institutional policies. IU's policies regarding HIPAA compliance require the appropriate Institutional Review Board (IRB) approvals and a data management plan.

  • You and/or the project's PI are responsible for implementing HIPAA-required administrative, physical, and technical safeguards for any person, process, application, or service used to collect, process, manage, analyze, or store ePHI data.

Important: Although UITS HIPAA-aligned resources are managed using standards meeting or exceeding those established for managing institutional data at IU, and are approved by the IU Office of the Vice President and General Counsel (OVPGC) for storing research-related ePHI, they are not recognized by the IU Committee of Data Stewards as appropriate for storing other types of institutional data classified as "Critical" that are not ePHI research data. To determine which services are appropriate for storing sensitive institutional data, including ePHI research data, see Comparing supported data classifications, features, costs, and other specifications of file storage solutions and services with storage components available at IU.

For more, see:

The UITS Advanced Biomedical IT Core (ABITC) provides consulting and online help for IU researchers who need help securely processing, storing, and sharing ePHI research data. If you need help or have questions about managing HIPAA-regulated data at IU, contact Anurag Shankar at ABITC. For additional details about HIPAA compliance at IU, see HIPAA & ABITC and the Office of the Vice President and General Counsel (OVPGC) HIPAA Privacy & Security page.


System access

Access is available to IU graduate students, faculty, and staff. Undergraduates and non-IU collaborators must have IU faculty sponsors. For details, see the "Research system accounts (all campuses)" section of What computing accounts are available at IU, and for whom?

Once your account is created, you may use your IU username and passphrase to log into Big Red II (bigred2.uits.iu.edu) with any SSH2 client. Public key authentication is permitted on Big Red II; see In SSH and SSH2 for Unix, how do I set up public key authentication?
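For example, from a command-line SSH client, a login looks like this (replace username with your IU Network ID username):

    ssh username@bigred2.uits.iu.edu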

Big Red II supports file transfer via SCP and SFTP; see At IU, what SSH/SFTP clients are supported and where can I get them?
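For example, from a command line (replace username with your Network ID username; example.dat is a placeholder file name):

    # Copy a local file to your Big Red II home directory
    scp example.dat username@bigred2.uits.iu.edu:/N/u/username/BigRed2/

    # Or open an interactive SFTP session
    sftp username@bigred2.uits.iu.edu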


Available software

Software installed on Big Red II is available to users via the Modules environment management system.

Modules is a command-line interface that provides commands for setting and modifying shell environment variables. These environment variables define values used by both the shell and the programs you execute on the shell command line.

The Modules environment management package simplifies the management of environment variables associated with various software packages, and lets you automatically modify environment variables as needed when switching between software packages.
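For example, the basic Modules commands look like the following (ccm is a module mentioned elsewhere in this document; run module avail to see which modules are actually installed on Big Red II):

    module avail          # list the software modules available on the system
    module list           # show the modules currently loaded in your environment
    module load ccm       # add a module's settings to your environment
    module unload ccm     # remove those settings again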

For a list of software modules on Big Red II, see Big Red II Modules in the IU Cyberinfrastructure Gateway.

For more about the Modules package, see the module manual page and the modulefile manual page. Additionally, see On Big Red II, Mason, Quarry, and Rockhopper at IU, how do I use Modules to manage my software environment?

For more on the IU Cyberinfrastructure Gateway, see What is the IU Cyberinfrastructure Gateway?


Running jobs on Big Red II

Important: For your application to execute on Big Red II's compute nodes, your batch job script must include the appropriate application launch command (aprun for ESM jobs; ccmrun for CCM jobs). Additionally, for CCM jobs, you must load the ccm module (add module load ccm to your ~/.modules file), and use the -l gres=ccm TORQUE directive in your job script. TORQUE scripts for batch jobs on Quarry or Mason will not work on Big Red II without the proper modifications. If your script's executable line does not begin with the appropriate launch command, your application will execute on an aprun service node, not a compute node, and will likely cause a service disruption for all users on the system. The aprun nodes are shared by all currently running jobs, and are intended only for passing job requests. Any memory-intensive or computationally intensive jobs running on aprun nodes will be terminated.

IU researchers can execute jobs in two environments on Big Red II: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM). Software optimized for a Cray environment (e.g., AMBER, OpenFOAM, and NAMD) will usually run in the ESM environment. If you compile your own applications, you should target the ESM environment first. Applications developed to run on a standard Linux cluster or on a single server (e.g., MATLAB or Ansys) can run in the CCM environment. In general, the CCM environment will support any standard Linux application. For details, see:
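The sketches below illustrate the difference. The resource requests, walltime, and executable names are placeholders, and the exact TORQUE directives and queue names for your job may differ, so check the Big Red II job script documentation before submitting. A minimal ESM batch script launches the executable with aprun:

    #!/bin/bash
    #PBS -l nodes=2:ppn=32
    #PBS -l walltime=01:00:00
    #PBS -N esm_example

    cd $PBS_O_WORKDIR
    # aprun launches the executable on the compute nodes;
    # here, 64 MPI ranks across 2 nodes with 32 cores each
    aprun -n 64 ./my_mpi_app

A CCM script additionally loads the ccm module, requests the ccm generic resource, and launches the application with ccmrun:

    #!/bin/bash
    #PBS -l nodes=1:ppn=32
    #PBS -l walltime=01:00:00
    #PBS -l gres=ccm
    #PBS -N ccm_example

    module load ccm    # required for CCM jobs (or add this line to your ~/.modules file)
    cd $PBS_O_WORKDIR
    # ccmrun hands the command off to a compute node running in Cluster Compatibility Mode
    ccmrun ./my_linux_app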

To highlight the performance benefits of Big Red II, UITS provides performance comparisons based on common benchmarks and applications; see BRII Comparisons.


Acknowledging grant support

The Indiana University cyberinfrastructure managed by the Research Technologies division of UITS is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see If I use IU's research cyberinfrastructure, what sources of funding do I need to acknowledge in my published work?


Documentation and training

For an overview of Big Red II documentation, see Getting started on Big Red II.

For tutorials and workshops on how to effectively use Big Red II's hybrid architecture (particularly, how to use GPUs and identify compatible software), see Cyberinfrastructure Training and InfoShares on the UITS Research Technologies web site.


Support

Support for research computing systems at Indiana University is provided by various units within the Systems area of the Research Technologies division of UITS:

To ask any other question about Research Technologies systems and services, use the Request help or information form.


This is document bcqt in domain all.
Last modified on April 09, 2014.
