About Carbonate at Indiana University
On this page:
- System overview
- System access
- HPC software
- Set up your user environment
- File storage options
- Work with data containing PHI
- Run jobs on Carbonate
- Acknowledge grant support
- Get help
System overview
UITS Research Technologies will retire Carbonate on December 17, 2023. Carbonate users should plan to move to Quartz or Big Red 200 before the retirement date to ensure that their research continues without interruption. Following the retirement, Carbonate's GPU hardware will be moved to the Quartz cluster, and Research Desktop (RED) will be updated to align more closely with Quartz. The colocation service on Carbonate will not be impacted by this retirement. If you have questions about the retirement of Carbonate, contact UITS Research Technologies.
Carbonate is Indiana University's large-memory computer cluster. Designed to support data-intensive computing, Carbonate is particularly well-suited for running genome assembly software, large-scale phylogenetic software, and other genome analysis applications that require large amounts of computer memory. Carbonate provides a specialized GPU partition for researchers with applications that require GPUs. Additionally, Carbonate offers a colocation service to IU researchers, research labs, departments, and schools.
Carbonate has 72 general-purpose compute nodes, each with 256 GB of RAM, and eight large-memory compute nodes, each with 512 GB of RAM. Each general-purpose compute node is a Lenovo NeXtScale nx360 M5 server equipped with two 12-core Intel Xeon E5-2680 v3 CPUs and four 480 GB solid-state drives. Carbonate also features 24 GPU-accelerated Apollo 6500 nodes, each equipped with two 20-core Intel Xeon Gold 6248 2.5 GHz CPUs, 768 GB of RAM, four NVIDIA V100-PCIE-32GB GPUs, and one 1.92 TB solid-state drive.
All Carbonate nodes are housed in the IU Bloomington Data Center, run Red Hat Enterprise Linux 7.x, and are connected to the IU Science DMZ via 10-gigabit Ethernet. The Slate, Slate-Project, and Slate-Scratch file systems are mounted for temporary storage of research data. The Modules environment management package allows users to dynamically customize their shell environments.
Besides being available to IU students, faculty, and staff for standard, cluster-based, high-throughput computing, Carbonate offers a colocation service to the IU community. IU schools and departments can purchase nodes compatible with IU's Carbonate cluster, have them installed in the secure IUB Data Center, have them available whenever their members need them, and have them managed and secured by UITS Research Technologies staff. This colocation service gives schools and departments access to compute nodes dedicated solely to their use within Carbonate's physical, electrical, and network framework, while leveraging the security and energy efficiency benefits of the IU Data Center.
To inquire about the colocation service on Carbonate, email the UITS High Performance Systems (HPS) team.
Before storing data on any of Indiana University's research computing or storage systems, make sure you understand the information in Types of sensitive institutional data appropriate for UITS Research Technologies services.
Make sure you do not include sensitive institutional data as part of a file's filename or pathname.
System access
IU students, faculty, staff, and sponsored affiliates can create Carbonate accounts using the instructions in Get additional IU computing accounts. For details, see the Research system accounts (all campuses) section of Computing accounts at IU.
Once your account is created, you can use any SSH2 client to access carbonate.uits.iu.edu.
Log in with your IU username and passphrase.
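For example, from a terminal with an OpenSSH client installed (username here is a placeholder for your IU username):

ssh username@carbonate.uits.iu.edu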
- Two-factor authentication using Two-Step Login (Duo) is required for access to the login nodes on IU research supercomputers, and for SCP and SFTP file transfers to those systems. SSH public key authentication remains an option for researchers who submit the "SSH public key authentication to HPS systems" agreement (log in to HPC everywhere using your IU username and passphrase), in which you agree to set a passphrase on your private key when you generate your key pair. If you have questions about how two-factor authentication may impact your workflows, contact the UITS Research Applications and Deep Learning team. For help, see Get started with Two-Step Login (Duo) at IU and Help for Two-Step Login (Duo).
- For enhanced security, SSH connections that have been idle for 60 minutes will be disconnected. To protect your data from misuse, remember to log off or lock your computer whenever you leave it.
- The scheduled monthly maintenance window for IU's high performance computing systems is the second Sunday of each month, 7am-7pm.
HPC software
The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log in to the system and then, on the command line, enter module avail.
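For example, to list every available application, or to search for a specific one (python is used here only as an illustrative package name):

module avail
module avail python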
For information on requesting software, see Software requests in Policies regarding UITS research systems.
Set up your user environment
The IU research supercomputers use module-based environment management systems that provide a convenient method for dynamically customizing your software environment.
Carbonate uses the Modules module management system.
For more, see Use modules to manage your software environment on IU research supercomputers.
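For example, the following commands load a compiler module, show what is currently loaded, and then remove it again (gcc is used here only as an illustrative module name; run module avail to see what is actually installed):

module load gcc
module list
module unload gcc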
The GNU Compiler Collection (GCC), the Intel Compiler Suite, and Portland Group (PGI) compilers are available on Carbonate. The Open MPI and MPICH implementations of MPI, with their wrapper compilers, are also available for compiling parallel programs. For more, see Compile programs on Carbonate at IU.
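For example, compiling a C MPI program might look like the following (the module and file names are illustrative; see Compile programs on Carbonate at IU for the recommended toolchains):

module load openmpi
mpicc -O2 -o hello hello_mpi.c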
File storage options
For file storage information, see Available access to allocated and short-term storage capacity on IU's research systems.
To check your quota, use the quota command from the command line of any IU research supercomputer. If the quota command is not already loaded by default, use the module load quota command to add it to your environment. Alternatively, log in to HPC everywhere and, in the "HPC Status" pane, look under "Storage". The quota command and HPC everywhere both display disk (data) quotas and usage for your home directory space on the research supercomputers, your space on Slate, and your space on the Scholarly Data Archive (SDA), as applicable. HPC everywhere additionally displays your inode (file) quotas for these spaces.
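For example, from the command line on Carbonate (the module load step is needed only if the quota command is not already in your environment):

module load quota
quota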
Before storing data on this system, make sure you understand the information in the Work with data containing PHI section (below).
Work with data containing PHI
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).
This UITS system or service meets certain requirements established in the HIPAA Security Rule thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.
Although PHI is classified as Critical data, other types of institutional data classified as Critical are not permitted on Research Technologies systems. For help determining which institutional data elements classified as Critical are considered PHI, see About protected health information (PHI) data elements in the classifications of institutional data.
If you have questions about securing HIPAA-regulated research data at IU, email securemyresearch@iu.edu. SecureMyResearch provides self-service resources and one-on-one consulting to help IU researchers, faculty, and staff meet cybersecurity and compliance requirements for processing, storing, and sharing regulated and unregulated research data; for more, see About SecureMyResearch. To learn more about properly handling PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. To learn about the division of responsibilities for securing PHI, see Shared responsibility model for securing PHI on UITS systems.
Run jobs on Carbonate
To set up access to run jobs on Carbonate, IU faculty, staff, and graduate students can use RT Projects to create projects, request allocations, and add users (research collaborators, lab members, and/or students) who should be permitted to use their allocations.
For more about RT Projects, see Use RT Projects to request and manage access to specialized Research Technologies resources.
The Indiana University research supercomputers use the Slurm workload manager for resource management and job scheduling; see Use Slurm to submit and manage jobs on IU's research computing systems.
In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. To view details about available partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on IU's research computing systems.
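For example, to summarize the nodes and their states in a single partition:

sinfo -p general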
- general: If your job requires up to 251 GB of memory, submit it to the general partition by including -p general either as an SBATCH directive in your job script or as an option in your srun command (see the example batch script after this list).
- largememory: If your job requires from 251 GB to 503 GB of memory, submit it to the largememory partition by including -p largememory either as an SBATCH directive in your batch job script or as an option in your srun command.
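For example, a minimal batch script for the general partition might look like the following; the job name, resource values, module, and program name are placeholders to adapt to your own work:

#!/bin/bash
#SBATCH -J my_job              # job name (placeholder)
#SBATCH -p general             # submit to the general partition
#SBATCH --nodes=1              # number of nodes
#SBATCH --ntasks-per-node=1    # tasks per node
#SBATCH --time=01:00:00        # walltime limit (adjust as needed)
#SBATCH --mem=16G              # memory per node (the general partition allows up to 251 GB)

module load python             # illustrative; load whatever modules your job requires
srun python my_analysis.py     # my_analysis.py is a placeholder for your own program

Save the script (for example, as my_job.sh) and submit it with sbatch my_job.sh.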
Acknowledge grant support
The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from future grants. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.
Get help
Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.
- If you have a technical issue or system-specific question, contact the High Performance Systems (HPS) team.
- If you have a programming question about compilers, scientific/numerical libraries, or debuggers, contact the UITS Research Applications and Deep Learning team.
For general questions about research computing at IU, contact UITS Research Technologies.
For more options, see Research computing support at IU.
This is document aolp in the Knowledge Base.
Last modified on 2023-08-21 15:48:33.