About Big Red 200 at IU
On this page:
- System overview
- System access
- HPC software
- Set up your user environment
- File storage options
- Run jobs on Big Red 200
- Acknowledge grant support
- Get help
System overview
Big Red 200 is an HPE Cray EX supercomputer designed to support scientific and medical research, and advanced research in artificial intelligence, machine learning, and data analytics.
Big Red 200 features 640 compute nodes, each equipped with 256 GB of memory and two 64-core, 2.25 GHz, 225-watt AMD EPYC 7742 processors. Big Red 200 also includes 64 GPU-accelerated nodes, each with 256 GB of memory, a single 64-core, 2.0 GHz, 225-watt AMD EPYC 7713 processor, and four NVIDIA A100 GPUs. Big Red 200 has a theoretical peak performance (Rpeak) of nearly 7 petaFLOPS.
Big Red 200 is managed with HPE's Performance Cluster Manager (HPCM) and currently runs SUSE Enterprise Linux Server (SLES) version 15 on the compute, GPU, and login nodes.
The Indiana University research supercomputers use the Slurm workload manager for resource management and job scheduling; see Use Slurm to submit and manage jobs on IU's research computing systems.
Before storing data on any of Indiana University's research computing or storage systems, make sure you understand the information in Types of sensitive institutional data appropriate for UITS Research Technologies services.
Make sure you do not include sensitive institutional data as part of a file's filename or pathname.
System access
IU graduate students, faculty, and staff can create Big Red 200 accounts using the instructions in Get additional IU computing accounts. Undergraduate students and affiliates can request Big Red 200 accounts if they are sponsored by full-time IU faculty or staff members. For details, see the Research system accounts (all campuses) section of Computing accounts at IU.
Once your account is created, you can use any SSH2 client to access bigred200.uits.iu.edu. Log in with your IU username and passphrase.
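For example, from a terminal on your own computer, you could connect with a command like the following (replace username with your IU username; you will then be prompted for your passphrase and Two-Step Login (Duo) verification):
  ssh username@bigred200.uits.iu.edu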
- Two-factor authentication using Two-Step Login (Duo) is required for access to the login nodes on IU research supercomputers, and for SCP and SFTP file transfers to those systems. SSH public key authentication remains an option for researchers who submit the "SSH public key authentication to HPS systems" agreement, in which you agree to set a passphrase on your private key when you generate your key pair (see the key generation sketch after this list). If you have questions about how two-factor authentication may impact your workflows, contact the UITS Research Applications and Deep Learning team. For help, see Get started with Two-Step Login (Duo) at IU and Help for Two-Step Login (Duo).
- For enhanced security, SSH connections that have been idle for 60 minutes will be disconnected. To protect your data from misuse, remember to log off or lock your computer whenever you leave it.
- The scheduled monthly maintenance window for IU's high performance computing systems is the second Sunday of each month, 7am-7pm.
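If you are approved for SSH public key authentication, a minimal sketch of generating a key pair with a passphrase-protected private key follows; the key type and file locations are illustrative defaults, so follow the requirements stated in the agreement:
  ssh-keygen -t ed25519
  # When prompted, enter a non-empty passphrase to protect the private key,
  # as the agreement requires. By default, the public key is written to
  # ~/.ssh/id_ed25519.pub; that is the key you would install on the remote system.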
HPC software
The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log into the system, and then, on the command line, enter module avail.
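For example, the following commands show how you might list and load software with the module command (the package name python is illustrative; actual module names and versions vary by system):
  module avail            # list all available modules
  module avail python     # list modules whose names match "python"
  module load python      # load the default version of a module
  module list             # show the modules currently loaded in your session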
For information on requesting software, see Software requests in Policies regarding UITS research systems.
Set up your user environment
The IU research supercomputers use module-based environment management systems that provide a convenient method for dynamically customizing your software environment.
Big Red 200 uses the Lmod module management system.
Big Red 200 provides programming environments for the Cray, Intel, NVIDIA HPC, and GNU Compiler Collections (GCC) compilers. For more, see Compile programs on Big Red 200 at IU.
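On Cray EX systems, compilers are typically selected by swapping programming environment modules. A minimal sketch, assuming the usual Cray module names (PrgEnv-cray, PrgEnv-gnu, and so on; confirm the exact names with module avail on Big Red 200):
  module list                          # see which PrgEnv module is loaded by default
  module swap PrgEnv-cray PrgEnv-gnu   # switch from the Cray compilers to GCC
  cc --version                         # the cc/CC/ftn wrappers now invoke the GNU compilers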
File storage options
For file storage information, see Available access to allocated and short-term storage capacity on IU's research systems.
To check your quota, use the quota command from the command line of any IU research supercomputer. If the quota command is not already loaded by default, use the module load quota command to add it to your environment. The quota command displays disk (data) quotas and usage for your home directory space on the research supercomputers, your space on Slate, and your space on the Scholarly Data Archive (SDA), as applicable.
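A minimal sketch of checking your usage from a Big Red 200 login node:
  module load quota   # only needed if the quota command is not already in your environment
  quota               # print quotas and usage for home directory, Slate, and SDA space, as applicable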
Big Red 200 is not currently cleared for work involving data that contain protected health information (PHI).
If you have questions about securing HIPAA-regulated research data at IU, email securemyresearch@iu.edu. SecureMyResearch provides self-service resources and one-on-one consulting to help IU researchers, faculty, and staff meet cybersecurity and compliance requirements for processing, storing, and sharing regulated and unregulated research data; for more, see About SecureMyResearch. To learn more about properly ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. To learn about division of responsibilities for securing PHI, see Shared responsibility model for securing PHI on UITS systems.
Run jobs on Big Red 200
To set up access to run jobs on Big Red 200, IU faculty, staff, and graduate students can use RT Projects to create projects, request allocations, and add users (research collaborators, lab members, and/or students) who should be permitted to use their allocations.
For more about RT Projects, see Use RT Projects to request and manage access to specialized Research Technologies resources.
Big Red 200 uses the Slurm workload manager; for more, see Use Slurm to submit and manage jobs on IU's research computing systems. For information about running GPU-accelerated jobs on Big Red 200, see Run GPU-accelerated jobs on Quartz or Big Red 200 at IU.
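As an illustration, a minimal batch script might look like the following; the partition, account, software, and resource values shown here are placeholders, so check your RT Projects allocation and the Slurm documentation linked above for the values appropriate to your work:
  #!/bin/bash
  #SBATCH --job-name=my_job          # name that appears in the queue
  #SBATCH --partition=general        # placeholder; use a partition listed by sinfo
  #SBATCH --account=my_allocation    # placeholder; use your RT Projects allocation
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --cpus-per-task=4
  #SBATCH --time=01:00:00            # walltime limit (HH:MM:SS)
  #SBATCH --output=%x_%j.out         # output file named after the job name and job ID

  module load python                 # illustrative; load whatever software your job needs
  srun python my_script.py
You would submit the script with sbatch (for example, sbatch my_job.sh) and check its status with squeue -u your_username.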
In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. To view details about available partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on IU's research computing systems.
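For example, from a Big Red 200 login node you could run (the partition name general is a placeholder; use a name reported by sinfo):
  sinfo                 # summarize partitions, node states, and time limits
  sinfo -N -l           # show one line per node, with more detail
  sinfo -p general -l   # show details for a single partition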
Acknowledge grant support
The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also improves the chances that IU's research community will secure grant funding in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.
Get help
Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.
- If you have a technical issue or system-specific question, contact the High Performance Systems (HPS) team.
- If you have a programming question about compilers, scientific/numerical libraries, or debuggers, contact the UITS Research Applications and Deep Learning team.
For general questions about research computing at IU, contact UITS Research Technologies.
For more options, see Research computing support at IU.
This is document brcc in the Knowledge Base.
Last modified on 2024-04-22 12:37:13.