About Carbonate at Indiana University
On this page:
- System overview
- System access
- HPC software
- Set up your user environment
- File storage options
- Work with data containing PHI
- Run jobs on Carbonate
- Acknowledge grant support
- Get help
System overview
Carbonate is Indiana University's large-memory computer cluster. Designed to support data-intensive computing, Carbonate is particularly well-suited for running genome assembly software, large-scale phylogenetic software, and other genome analysis applications that require large amounts of computer memory. Carbonate provides specialized deep learning (DL) and GPU partitions for researchers with deep learning applications and other applications that require GPUs. Additionally, Carbonate offers a colocation service to IU researchers, research labs, departments, and schools.
Carbonate has 72 general-purpose compute nodes, each with 256 GB of RAM, and eight large-memory compute nodes, each with 512 GB of RAM. Each general-purpose compute node is a Lenovo NeXtScale nx360 M5 server equipped with two 12-core Intel Xeon E5-2680 v3 CPUs and four 480 GB solid-state drives. In support of deep learning applications and research, Carbonate also features:
- 12 GPU-accelerated Lenovo ThinkSystem SD530 deep learning (DL) nodes, each equipped with two Intel Xeon Gold 6126 12-core CPUs, two NVIDIA GPU accelerators (eight with Tesla P100s; four with Tesla V100s), four 1.92 TB solid-state drives, and 192 GB of RAM.
- 24 GPU-accelerated Apollo 6500 nodes, each equipped with two Intel Xeon Gold 6248 2.5 GHz 20-core CPUs, 768 GB of RAM, four NVIDIA V100-PCIE-32GB GPUs, and one 1.92 TB solid-state drive.
All Carbonate nodes are housed in the IU Bloomington Data Center, run Red Hat Enterprise Linux 7.x, and are connected to the IU Science DMZ via 10-gigabit Ethernet. The Slate and Slate-Project file systems are mounted for temporary storage of research data. The Modules environment management package allows users to dynamically customize their shell environments.
Besides being available to IU students, faculty, and staff for standard, cluster-based, high-throughput computing, Carbonate offers a colocation service to the IU community. IU schools and departments can purchase nodes compatible with IU's Carbonate cluster, have them installed in the secure IUB Data Center, and have them managed and secured by UITS Research Technologies staff while remaining available whenever their members need them. This colocation service gives schools and departments access to compute nodes dedicated solely to their use within Carbonate's physical, electrical, and network framework, while leveraging the security and energy efficiency benefits of the IU Data Center.
To inquire about the colocation service on Carbonate, email the UITS High Performance Systems (HPS) team.
System access
Access is available to IU students, faculty, staff, and sponsored affiliates. For details, see the Research system accounts (all campuses) section of Computing accounts at IU.
NSF-funded life sciences researchers can apply to the National Center for Genome Analysis Support (NCGAS) allocations committee to request accounts on Carbonate. To request an allocation, submit the NCGAS Allocations Request Form. If you have questions, email NCGAS.
Once your account is created, you can use any SSH2 client to access carbonate.uits.iu.edu. Log in with your IU username and passphrase.
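For example, from a terminal with an OpenSSH client, a login and a file transfer might look like the following (a minimal sketch; replace username with your IU username, and note that results.csv is only a placeholder file name):

  # open an interactive login session on Carbonate
  ssh username@carbonate.uits.iu.edu

  # copy a local file to your Carbonate home directory over SCP
  scp results.csv username@carbonate.uits.iu.edu:~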
- Two-factor authentication using Two-Step Login (Duo) is required for access to the login nodes on IU research supercomputers, and for SCP and SFTP file transfers to those systems. SSH public key authentication remains an option for researchers who submit the "SSH public key authentication to HPS systems" agreement (log into HPC everywhere using your IU username and passphrase), in which you agree to set a passphrase on your private key when you generate your key pair. If you have questions about how two-factor authentication may impact your workflows, contact the UITS Research Applications and Deep Learning team. For help, see Get started with Two-Step Login (Duo) at IU and Help for Two-Step Login (Duo).
- For enhanced security, SSH connections that have been idle for 60 minutes will be disconnected. To protect your data from misuse, remember to log off or lock your computer whenever you leave it.
- The scheduled monthly maintenance window for IU's high performance computing systems is the second Sunday of each month, 7am-7pm.
HPC software
The Research Applications and Deep Learning (RADL) group, within the Research Technologies division of UITS, maintains and supports the high performance computing (HPC) software on IU's research supercomputers. To see which applications are available on a particular system, log into the system, and then, on the command line, enter module avail.
For information about adding packages to your user environment, see Use Modules to manage your software environment on IU's research supercomputers.
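For example, a typical Modules session might look like the following (the python module name is only an illustration; rely on the output of module avail for the packages actually installed on Carbonate):

  # list every package available through Modules
  module avail

  # narrow the listing to modules whose names match "python"
  module avail python

  # add a matching package to your current environment
  module load python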
To request software, submit the HPC Software Request form.
Set up your user environment
On the research supercomputers at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.
For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research supercomputers.
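As a brief illustration, the core Modules commands behave as sketched below (the gcc module and its version numbers are hypothetical; substitute names that appear in module avail on Carbonate):

  # show the modules currently loaded in your shell
  module list

  # load a package, then remove it when you no longer need it
  module load gcc
  module unload gcc

  # swap one version of a loaded package for another
  module swap gcc/9.3.0 gcc/10.2.0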
File storage options
For file storage information, see Available access to allocated and short-term storage capacity on IU's research systems.
Work with data containing PHI
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).
This UITS system or service meets certain requirements established in the HIPAA Security Rule thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.
If you have questions about securing HIPAA-regulated research data at IU, email securemyresearch@iu.edu. SecureMyResearch provides self-service resources and one-on-one consulting to help IU researchers, faculty, and staff meet cybersecurity and compliance requirements for processing, storing, and sharing regulated and unregulated research data; for more, see About SecureMyResearch. To learn more about safely handling PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. To learn about the division of responsibilities for securing PHI, see Shared responsibility model for securing PHI on UITS systems.
Run jobs on Carbonate
UITS Research Technologies is transitioning the job submission and scheduling environment on Carbonate from TORQUE/Moab to Slurm. Starting April 12, 2021, TORQUE and Moab will no longer be available on Carbonate. To help researchers prepare for the transition, Research Technologies provides a Slurm debug partition on Carbonate for testing job scripts converted from TORQUE to Slurm. If you need help converting a job script from TORQUE to Slurm, email the UITS Research Applications and Deep Learning team. For more about Slurm, see Use Slurm to submit and manage jobs on high performance computing systems.
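As a rough sketch, a small batch script converted for such testing might look like the one below (the debug partition name, the python module, and batch.py are assumptions for illustration; adjust the requested resources to match your work):

  #!/bin/bash
  #SBATCH -J my_test                    # job name
  #SBATCH -p debug                      # target the Slurm debug partition
  #SBATCH --nodes=1                     # request one node
  #SBATCH --ntasks-per-node=1           # run one task on that node
  #SBATCH --time=00:10:00               # ten-minute wall-clock limit
  #SBATCH --mail-user=username@iu.edu   # replace with your email address
  #SBATCH --mail-type=FAIL,END          # email on failure or completion

  module load python                    # load whatever your job needs
  srun python batch.py                  # launch the job step

Submit the script with sbatch, check its status with squeue, and cancel it with scancel.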
Carbonate's general-purpose and large-memory compute nodes currently use the TORQUE resource manager integrated with the Moab Workload Manager for resource management and job scheduling. For details on how to submit, monitor, and delete jobs on Carbonate's general-purpose and large-memory compute nodes, see Run jobs on Carbonate. For information about running jobs on Carbonate's deep learning (DL) nodes, see Use Slurm to submit and manage jobs on high performance computing systems.
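For comparison, a minimal TORQUE/Moab script for the general-purpose nodes might look like the sketch below (again, the python module and batch.py are placeholders):

  #!/bin/bash
  #PBS -N my_test                       # job name
  #PBS -l nodes=1:ppn=1                 # one node, one processor core
  #PBS -l walltime=00:10:00             # ten-minute wall-clock limit
  #PBS -M username@iu.edu               # replace with your email address
  #PBS -m ae                            # email on abort or end

  cd $PBS_O_WORKDIR                     # start in the directory the job was submitted from
  module load python                    # load whatever your job needs
  python batch.py

Submit the script with qsub, check its status with qstat, and delete it with qdel.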
Acknowledge grant support
The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from future grants. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.
Get help
For an overview of Carbonate documentation, see Get started on Carbonate.
Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.
- If you have a system-specific question, contact the High Performance Systems (HPS) team.
- If you have a programming question about compilers, scientific/numerical libraries, or debuggers, contact the UITS Research Applications and Deep Learning team.
For general questions about research computing at IU, contact UITS Research Technologies.
For more options, see Research computing support at IU.
This is document aolp in the Knowledge Base.
Last modified on 2021-02-08 10:02:16.