ARCHIVED: Queue information for IU research supercomputers

This content has been archived, and is no longer maintained by Indiana University. Information here may no longer be accurate, and links may no longer be available or reliable.

On this page:

  • Overview
  • Big Red 3
  • Carbonate

Overview

Following is information about the queues available for running batch jobs on Indiana University research supercomputers. For current status information about jobs running on IU research supercomputers, see HPC Everywhere.

Note:
To best meet the needs of all research projects affiliated with Indiana University, UITS Research Technologies administers the batch job queues on IU's research supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's research supercomputers does not meet the needs of your research project, contact UITS Research Technologies.

Big Red 3

In Slurm, compute resources are grouped into logical sets called partitions, which are essentially job queues. To view details about available partitions and nodes, use the sinfo command; for more about using sinfo, see the View partition and node information section of Use Slurm to submit and manage jobs on IU's research computing systems.
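
For example, the following commands show a summary of all partitions and then node-level detail for a single partition (the partition name general is only an illustration; use a name reported by the first command):

    sinfo
    sinfo -N -l -p general

To direct a batch job to a particular partition, add a matching directive to your Slurm job script; for example, #SBATCH -p general.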


For more, see ARCHIVED: About Big Red 3 (Retired).

Carbonate

Carbonate employs a default routing queue that funnels jobs, according to their resource requirements, into two execution queues configured to maximize job throughput and minimize wait times (the amount of time a job remains queued, waiting for required resources to become available). Depending on the resource requirements specified in either your batch job script or your qsub command, the routing queue (BATCH) automatically places your job into the NORMAL or LARGEMEMORY queue:

  • NORMAL queue: Jobs requesting up to 251 GB of virtual memory (for example, qsub -l vmem=251gb)
  • LARGEMEMORY queue: Jobs requesting more than 251 GB and up to 503 GB of virtual memory (for example, qsub -l vmem=503gb)
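
For example, a minimal TORQUE/PBS job script sketch that the routing queue would place in the LARGEMEMORY queue (the resource requests, job name, and executable below are placeholders; adapt them to your own work):

    #!/bin/bash
    #PBS -l nodes=1:ppn=24,walltime=12:00:00
    #PBS -l vmem=400gb
    #PBS -N large_mem_example
    cd $PBS_O_WORKDIR
    ./my_analysis

Because this script requests more than 251 GB of virtual memory, BATCH routes the job to the LARGEMEMORY queue; with, for example, vmem=200gb, the same script would be routed to the NORMAL queue instead.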

You do not have to specify a queue in your job script or in your qsub command to submit your job to one of the two batch execution queues; your job will run in the NORMAL or LARGEMEMORY queue unless you specifically submit it to the DEBUG or INTERACTIVE queue, the properties of which are as follows:

  • DEBUG: The DEBUG queue is intended for short, quick-turnaround test jobs requiring less than 1 hour of wall time.
    Maximum wall time: 1 hour
    Maximum nodes per job: 2
    Maximum cores per job: 48
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit a batch job to the DEBUG queue, either add the #PBS -q debug directive to your job script, or enter qsub -q debug on the command line; for a sample DEBUG job script, see the sketch following this list.

    Note:
    For longer debugging or testing sessions, submit an interactive job to the INTERACTIVE queue instead.
  • INTERACTIVE: Interactive jobs submitted to the INTERACTIVE queue should experience less wait time (start sooner) than interactive jobs submitted to the batch execution queues.
    Maximum wall time: 8 hours
    Maximum cores per job: 8
    Maximum number of jobs per queue: 128
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit an interactive job to the INTERACTIVE queue, on the command line, enter qsub with the -I and -q interactive options added; for example:

      qsub -I -q interactive -l nodes=1:ppn=1,walltime=4:00:00
    
    Note:
    If you enter qsub without the -q interactive option, your interactive job will be placed in the routing queue for submission to the NORMAL or LARGEMEMORY batch execution queue, which most likely will entail a longer wait time for your job.
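
As referenced in the DEBUG entry above, a minimal job script sketch for the DEBUG queue (again, the resource requests, job name, and executable are placeholders to adapt to your own test) might look like:

    #!/bin/bash
    #PBS -q debug
    #PBS -l nodes=1:ppn=24,walltime=00:30:00
    #PBS -N debug_example
    cd $PBS_O_WORKDIR
    ./my_test_program

Submitting this script with qsub sends the job directly to the DEBUG queue, because the -q debug directive bypasses the BATCH routing queue.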

Support for IU research supercomputers, software, and services is provided by various teams within the Research Technologies division of UITS.

For general questions about research computing at IU, contact UITS Research Technologies.

For more options, see Research computing support at IU.

For more, see ARCHIVED: About Carbonate at IU (Retired).

This is document bdkd in the Knowledge Base.
Last modified on 2021-04-09 15:09:07.