About Carbonate at Indiana University

For help determining which of IU's research supercomputing systems is best suited to meet your needs, use the UITS Supercomputing Pathfinder.

System overview

Carbonate (carbonate.uits.iu.edu) at Indiana University is a large memory computer cluster configured to support high-performance, data-intensive computing. Carbonate can handle computing tasks for researchers using genome assembly software, large-scale phylogenetic software, and other genome analysis applications that require large amounts of computer memory. Accounts are available to IU students, faculty, and staff. Carbonate also serves as a "condominium cluster" environment for IU researchers, research labs, departments, and schools.

Carbonate has 72 general-purpose compute nodes, each with 256 GB of RAM, and eight large-memory compute nodes, each with 512 GB of RAM. Each node is a Lenovo NeXtScale nx360 M5 server equipped with two 12-core Intel Xeon E5-2680 v3 CPUs and four 480 GB solid-state drives. All nodes are housed in the IU Bloomington Data Center, run Red Hat Enterprise Linux 7.x, and are connected to the IU Science DMZ via 10-gigabit Ethernet.

Carbonate uses the TORQUE resource manager integrated with the Moab Workload Manager to coordinate resource management and job scheduling. The Data Capacitor II and Data Capacitor Wide Area Network (DC-WAN) parallel file systems are mounted for temporary storage of research data. The Modules environment management package allows users to dynamically customize their shell environments.

Besides being available to IU students, faculty, and staff for standard, cluster-based, high-throughput computing, Carbonate offers two alternative service models to the IU community:

  • Condominium computing: The condominium computing service model provides a way for IU schools, departments, and researchers to fund computational nodes for their own research purposes without shouldering the cost, overhead, and management requirements of purchasing individual systems. Condominium nodes are housed in the IU Bloomington Data Center, and are managed, backed up, and secured by UITS Research Technologies staff. Condominium nodes are available to "members" whenever they are needed, but when they are not in use, idle condominium nodes become available to other researchers and students on Carbonate. In this way, condominium computing promotes cost-effective expansion of IU's high-performance computing capabilities, enables efficient provisioning of computing resources to the entire IU research community, and helps conserve natural resources and energy.
  • Dedicated computing: The dedicated computing service model lets schools and departments host nodes that are dedicated solely to their use within Carbonate's physical, electrical, and network framework. This provides 24/7 access for school or departmental use, while leveraging the network and physical components of Carbonate, and the security and energy efficiency benefits provided by location within the IU Data Center.

To inquire about the condominium computing or dedicated computing service models on Carbonate, email the UITS High Performance Systems (HPS) team.

System access

IU students, faculty, and staff can request accounts on Carbonate by following the instructions in Get additional IU computing accounts.

Non-IU collaborators must have IU faculty sponsors. For details, see the Research system accounts (all campuses) section of Computing accounts at IU.

NSF-funded life sciences researchers can apply to the National Center for Genome Analysis Support (NCGAS) allocations committee to request accounts on Carbonate. To request an allocation, submit the NCGAS Allocations Request Form. If you have questions, email NCGAS.

Once your account is created, you can use any SSH2 client to access carbonate.uits.iu.edu. Log in with your IU username and passphrase, and then confirm your identity with Duo two-step login.
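For example, from a terminal on your workstation, a connection could look like the following ("username" is a placeholder; use your own IU username):

```shell
# Connect to Carbonate over SSH (replace "username" with your IU username).
# You will be prompted for your IU passphrase, then for Duo verification.
ssh username@carbonate.uits.iu.edu
```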


Available software

For a list of packages available on Carbonate, see HPC Applications.

Carbonate users are free to install software in their home directories and may request the installation of software for use by all users on the system. Only faculty or staff can request software. If students require software packages on Carbonate, their advisors must request them. For details, see IU policies relative to installing software on Carbonate. To request software, use the HPC Software Request form.

Set up your user environment

On the research computing resources at Indiana University, the Modules environment management system provides a convenient method for dynamically customizing your software environment.
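As a sketch, a typical Modules session on the command line looks like the following (the java package name is illustrative; run module avail to see what is actually installed on Carbonate):

```shell
# List the module files available on the system
module avail

# Load a package into your current shell environment (name is illustrative)
module load java

# Show the modules currently loaded in your shell
module list

# Remove a package from your environment
module unload java
```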

For more about using Modules to configure your user environment, see Use Modules to manage your software environment on IU's research computing systems.

File storage options

Before storing data on this system, make sure you understand the information in the Work with data containing PHI section (below).

Work with data containing PHI

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established rules protecting the privacy and security of individually identifiable health information. The HIPAA Privacy Rule and Security Rule set national standards requiring organizations and individuals to implement certain administrative, physical, and technical safeguards to maintain the confidentiality, integrity, and availability of protected health information (PHI).

This UITS system or service meets certain requirements established in the HIPAA Security Rule thereby enabling its use for work involving data that contain protected health information (PHI). However, using this system or service does not fulfill your legal responsibilities for protecting the privacy and security of data that contain PHI. You may use this system or service for work involving data that contain PHI only if you institute additional administrative, physical, and technical safeguards that complement those UITS already has in place.

Although PHI is one type of institutional data classified as Critical at IU, other types of institutional data classified as Critical are not permitted on Research Technologies systems. For help determining which institutional data elements classified as Critical are considered PHI, see About protected health information (PHI) data elements in the classifications of institutional data.

For more, see Your legal responsibilities for protecting data containing protected health information (PHI) when using UITS Research Technologies systems and services.

UITS provides consulting and online help for Indiana University researchers, faculty, and staff who need help securely processing, storing, and sharing data containing protected health information (PHI). If you have questions about managing HIPAA-regulated data at IU, contact UITS HIPAA Consulting. To learn more about properly ensuring the safe handling of PHI on UITS systems, see the UITS IT Training video Securing HIPAA Workflows on UITS Systems. For additional details about HIPAA compliance at IU, see HIPAA Privacy & Security on the University Compliance website.

Run jobs on Carbonate

IU's research computing clusters use the TORQUE resource manager (based on OpenPBS) and the Moab Workload Manager to manage and schedule batch jobs. Moab uses fairshare scheduling to track usage and prioritize jobs.

User processes on the login nodes are limited to 20 minutes of CPU time. Processes on the login nodes that run longer than 20 minutes are terminated automatically (without warning). If your application requires more than 20 minutes of CPU time or has large memory requirements, submit a batch job or request an interactive session using the TORQUE qsub command.

Because of this limit:

  • When running Java programs, add the -Xmx parameter to the command line to specify the Java Virtual Machine (JVM) maximum heap size; the value must be a multiple of 1,024 and greater than 2 MB. For example, to run a Java program (for example, Hello_DeathStar) with a maximum heap size of 640 MB, on the command line, enter:
      java -Xmx640m Hello_DeathStar
  • Memory-intensive jobs started on the login nodes will be killed almost immediately. Submit debugging and testing jobs to the INTERACTIVE or DEBUG queue; for example:
    • INTERACTIVE queue:
       qsub -I -q interactive -l nodes=1:ppn=4,vmem=10gb,walltime=4:00:00
    • DEBUG queue:
       qsub -q debug -l nodes=1:ppn=4,vmem=32gb,walltime=00:30:00

Submit jobs

To submit a TORQUE job script (for example, job.script) on Carbonate, use the qsub command. If the command exits successfully, it will return a job ID; for example:

 [sgerrera@h1]$ qsub job.script

If your job has resource requirements that are different from the defaults (but not exceeding the maximums allowed), specify them either with TORQUE directives in your job script, or with the -l (a lower-case "L"; short for resource_list) option in your qsub command. For example:

  • To submit a job (for example, job.script) that needs more than the default 60 minutes of walltime, use:
     qsub -l walltime=10:00:00 job.script
  • To submit a job (for example, job.script) that needs more than the default 16 GB of virtual memory (for example, 200 GB), use:
     qsub -l nodes=1:ppn=4,vmem=200gb job.script

    If you don't provide a virtual memory resource (omit -l vmem=[n]gb), you will receive a warning, and the default (16 GB) virtual memory will be applied; for example:

     [ersojyn1@h1]$ qsub -l nodes=1:ppn=4 job.script
     warning:vmem resource not provided, default vmem of 16gb will be applied. See
     /etc/motd for details.
Command-line arguments override directives in your job script.

On the command line, you can specify multiple attributes with either one -l switch followed by multiple comma-separated attributes, or multiple -l switches, one for each attribute. For example, to submit a job (for example, job.script) that requires 32 GB of virtual memory to run on 16 cores on one node, you may enter either of the following commands (they are equivalent):

 qsub -l nodes=1:ppn=16,vmem=32gb job.script
 qsub -l nodes=1:ppn=16 -l vmem=32gb job.script
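The same resource requests can instead be placed in the job script itself as #PBS directives. The script below is a minimal sketch; the job name and the application command are placeholders:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=16,vmem=32gb,walltime=10:00:00
#PBS -m e
#PBS -N my_job

# Move to the directory from which the job was submitted
cd "$PBS_O_WORKDIR"

# Run your application (placeholder command)
./my_application
```

Submit the script with qsub job.script; remember that any -l options given on the command line override the directives in the script.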

If you need help determining how much memory your jobs are using, TORQUE will report this information when you add the -m e flag to qsub, or when you add the equivalent directive to your submit script:

 #PBS -m e

When you use the -m e parameter, TORQUE/PBS will send you an email message at job completion similar to the following:

 PBS Job Id: 16857.s1
 Job Name:   sleep
 Exec host:  c15/0-3
 Execution terminated
 resources_used.vmem=470124kb

In the above example, resources_used.vmem=470124kb is the relevant line: it reports the job's peak virtual memory usage.

Useful qsub options include:

Option Action
-a YYYYMMDDhhmm.SS Specify the date and time after which the job is eligible to execute (replace YYYY with the year, MM with the month, DD with the day of the month, hh with the hour, and mm with the minute; the .SS to indicate seconds is optional).
-I Run the job interactively. (Interactive jobs are forced to be not rerunnable.)
-m e Mail a job summary report when the job terminates.
-q queue_name Specify the destination queue for the job. (On Carbonate, use this only when submitting jobs to the INTERACTIVE or DEBUG queues.)
-r [y|n] Declare whether the job is rerunnable. If the argument is y, the job is rerunnable; if the argument is n, the job is not rerunnable. The default value is y (rerunnable).
-V Export all environment variables in the qsub command's environment to the batch job.
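As an illustration, several of these options can be combined in one command (the date and the script name are placeholders):

```shell
# Hold the job until noon on January 1, 2030; email a summary at completion;
# mark the job not rerunnable; export the current environment to the job
qsub -a 203001011200 -m e -r n -V job.script
```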

For more, see the qsub manual page.

Monitor jobs

To monitor the status of a queued or running job, use the TORQUE qstat command. Useful qstat options include:

Option Action
-a Display all jobs.
-f Write a full status display to standard output.
-n List the nodes allocated to a job.
-r Display jobs that are running.
-u user1,user2 Display jobs owned by specified users.
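For example, to list all jobs, or only the running jobs owned by a particular user along with their allocated nodes ("username" is a placeholder):

```shell
# Show all jobs on the system
qstat -a

# Show running jobs owned by a particular user, with their allocated nodes
qstat -r -n -u username
```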

For more, see the qstat manual page.

Delete jobs

To delete a queued or running job, use the qdel command.

Occasionally, a node will become unresponsive and unable to respond to the TORQUE server's requests to kill a job. In such cases, try using qdel -W <delay> to override the delay between SIGTERM and SIGKILL signals (for <delay>, specify a value in seconds).
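For example (the job ID shown is illustrative):

```shell
# Delete a queued or running job by its job ID
qdel 16857.s1

# For a job on an unresponsive node, allow 30 seconds
# between the SIGTERM and SIGKILL signals
qdel -W 30 16857.s1
```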

For more, see the qdel manual page.

Queue information

Carbonate employs a default routing queue that funnels jobs, according to their resource requirements, into two execution queues configured to maximize job throughput and minimize wait times (the amount of time a job remains queued, waiting for required resources to become available). Depending on the resource requirements specified in either your batch job script or your qsub command, the routing queue (BATCH) automatically places your job into the NORMAL or LARGEMEMORY queue:

  • NORMAL queue: Jobs requesting up to 251 GB of virtual memory (qsub -l vmem=251gb)
  • LARGEMEMORY queue: Jobs requesting more than 251 GB and up to 503 GB of virtual memory (qsub -l vmem=503gb)

To best meet the needs of all research projects affiliated with Indiana University, UITS Research Technologies administers the batch job queues on IU's research supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, contact UITS Research Technologies.

You do not have to specify a queue in your job script or in your qsub command to submit your job to one of the two batch execution queues; your job will run in the NORMAL or LARGEMEMORY queue unless you specifically submit it to the DEBUG or INTERACTIVE queue, the properties of which are as follows:

  • DEBUG: The DEBUG queue is intended for short, quick-turnaround test jobs requiring less than 1 hour of wall time.
    Maximum wall time: 1 hour
    Maximum nodes per job: 2
    Maximum cores per job: 48
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit a batch job to the DEBUG queue, either add the #PBS -q debug directive to your job script, or enter qsub -q debug on the command line.

    For longer debugging or testing sessions, submit an interactive job to the INTERACTIVE queue instead.
  • INTERACTIVE: Interactive jobs submitted to the INTERACTIVE queue should experience less wait time (start sooner) than interactive jobs submitted to the batch execution queues.
    Maximum wall time: 8 hours
    Maximum cores per job: 8
    Maximum number of jobs per queue: 128
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit an interactive job to the INTERACTIVE queue, on the command line, enter qsub with the -I and -q interactive options added; for example:

      qsub -I -q interactive -l nodes=1:ppn=1,walltime=4:00:00
    If you enter qsub without the -q interactive option, your interactive job will be placed in the routing queue for submission to the NORMAL or LARGEMEMORY batch execution queue, which most likely will entail a longer wait time for your job.

Request single user time

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request "single user time" or ask for more information, contact UITS Research Technologies.

Acknowledge grant support

The Indiana University cyberinfrastructure, managed by the Research Technologies division of UITS, is supported by funding from several grants, each of which requires you to acknowledge its support in all presentations and published works stemming from research it has helped to fund. Conscientious acknowledgment of support from past grants also enhances the chances of IU's research community securing funding from grants in the future. For the acknowledgment statement(s) required for scholarly printed works, web pages, talks, online publications, and other presentations that make use of this and/or other grant-funded systems at IU, see Sources of funding to acknowledge in published work if you use IU's research cyberinfrastructure.


For an overview of Carbonate documentation, see Get started on Carbonate.

Support for IU research computing systems, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

This is document aolp in the Knowledge Base.
Last modified on 2019-02-25 18:07:02.

Contact us

For help or to comment, email the UITS Support Center.