Queue information for IU research computing systems

Following is information about the queues available for running batch jobs on Indiana University research computing systems. For current status information about jobs running on IU research computing systems, visit the IU Cyberinfrastructure Gateway.

On this page:

  • Big Red II
  • Carbonate
  • Karst
  • Mason

Note:
To best meet the needs of all research projects affiliated with Indiana University, the High Performance Systems (HPS) team administers the batch job queues on UITS Research Technologies supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, fill out and submit the Research Technologies Ask RT for Help form (for "Select a group to contact", select High Performance Systems).

Big Red II

Note:
Each debug queue (debug_cpu and debug_gpu) has a maximum limit of two queued jobs per user. Across all queues, the maximum number of queued jobs allowed per user is 500.

CPU-only jobs

Big Red II has the following queues configured to accept jobs that will run on the 32-core dual-Opteron (CPU-only) nodes:

  • cpu: The routing queue for all "production" jobs; each job is routed, based on its resource requirements, to one of the execution queues (normal, serial, or long)
  • debug_cpu: An execution queue reserved for testing and debugging purposes only

Maximum values for each execution queue are defined in the following table.

32-core dual-Opteron (CPU-only) nodes

  Execution queue   Cores/node   Nodes/job   Wall time/job   Nodes/user
  normal *          32           128         2 days          128
  serial *          32           1           7 days          128
  long *            32           8           14 days         32
  debug_cpu         32           4           1 hour          4
* Do not submit jobs directly to the normal, serial, or long execution queues. Always use the cpu routing queue when submitting jobs for "production" runs. Use the debug_cpu queue for testing and debugging purposes only.
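
For reference, a minimal job script for the cpu routing queue might look like the following sketch (the job name, executable, and resource values are illustrative only; Big Red II is a Cray system, so compute-node executables are typically launched with aprun):

      #!/bin/bash
      #PBS -q cpu
      #PBS -l nodes=2:ppn=32,walltime=04:00:00
      #PBS -N my_cpu_job
      cd $PBS_O_WORKDIR
      # Launch 64 MPI ranks across two 32-core nodes (hypothetical executable)
      aprun -n 64 ./my_program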

CPU/GPU jobs

Big Red II has the following queues configured to accept jobs that will run on the 16-core Opteron/NVIDIA (CPU/GPU) nodes:

  • gpu: The main execution queue for jobs on the CPU/GPU nodes
  • opengl: An execution queue reserved for OpenGL jobs
  • cpu16: An execution queue that lets non-GPU jobs run on the CPU/GPU nodes
  • debug_gpu: An execution queue reserved for testing and debugging CPU/GPU codes

Maximum values for each queue are defined as follows:

16-core Opteron/NVIDIA (CPU/GPU) nodes

  Execution queue   Cores/node   Nodes/job   Wall time/job   Nodes/user
  gpu               16           256         7 days          384
  opengl            16           256         7 days          384
  cpu16             16           32          7 days          384
  debug_gpu         16           4           1 hour          4
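
Similarly, a job script targeting the gpu execution queue might resemble the following sketch (the job name, executable, and resource values are hypothetical):

      #!/bin/bash
      #PBS -q gpu
      #PBS -l nodes=1:ppn=16,walltime=02:00:00
      #PBS -N my_gpu_job
      cd $PBS_O_WORKDIR
      # One rank on the node's 16-core host CPU; the code offloads work to the GPU
      aprun -n 1 ./my_gpu_program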

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request such single user time, complete and submit the Research Technologies Ask RT for Help form, requesting to run jobs in single user time on HPS systems. If you have questions, email the HPS team.

Support for this system is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups. If you have system-specific questions, contact the HPS group. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on this system, contact the SciAPT group.

For more, see Big Red II at Indiana University.

Carbonate

Carbonate employs a default routing queue that funnels jobs, according to their resource requirements, into two execution queues configured to maximize job throughput and minimize wait times (i.e., the amount of time a job remains queued, waiting for required resources to become available). Depending on the resource requirements specified in either your batch job script or your qsub command, the routing queue (BATCH) automatically places your job into the NORMAL or LARGEMEMORY queue:

  • NORMAL queue: Jobs requesting up to 251 GB of virtual memory (for example, qsub -l vmem=251gb)
  • LARGEMEMORY queue: Jobs requesting more than 251 GB and up to 503 GB of virtual memory (for example, qsub -l vmem=503gb)
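
For example, the following submissions (with a hypothetical job script job.sh and illustrative resource values) would be routed to the NORMAL and LARGEMEMORY queues, respectively:

      qsub -l nodes=1:ppn=8,vmem=200gb,walltime=12:00:00 job.sh
      qsub -l nodes=1:ppn=8,vmem=400gb,walltime=12:00:00 job.sh
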
Note:
To best meet the needs of all research projects affiliated with Indiana University, the High Performance Systems (HPS) team administers the batch job queues on UITS Research Technologies supercomputers using resource management and job scheduling policies that optimize the overall efficiency and performance of workloads on those systems. If the structure or configuration of the batch queues on any of IU's supercomputing systems does not meet the needs of your research project, fill out and submit the Research Technologies Ask RT for Help form (for "Select a group to contact", select High Performance Systems).

You do not have to specify a queue in your job script or in your qsub command to submit your job to one of the two batch execution queues; your job will run in the NORMAL or LARGEMEMORY queue unless you specifically submit it to the DEBUG or INTERACTIVE queue, the properties of which are as follows:

  • DEBUG: The DEBUG queue is intended for short, quick-turnaround test jobs requiring less than 1 hour of wall time.
    Maximum wall time: 1 hour
    Maximum nodes per job: 2
    Maximum cores per job: 48
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit a batch job to the DEBUG queue, either add the #PBS -q debug directive to your job script, or enter qsub -q debug on the command line (a sample script follows this list).

    Note:
    For longer debugging or testing sessions, submit an interactive job to the INTERACTIVE queue instead.
  • INTERACTIVE: Interactive jobs submitted to the INTERACTIVE queue should experience less wait time (i.e., start sooner) than interactive jobs submitted to the batch execution queues.
    Maximum wall time: 8 hours
    Maximum cores per job: 8
    Maximum number of jobs per queue: 128
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit an interactive job to the INTERACTIVE queue, on the command line, enter qsub with the -I and -q interactive options added; for example:

      qsub -I -q interactive -l nodes=1:ppn=1,walltime=4:00:00
    
    Note:
    If you enter qsub without the -q interactive option, your interactive job will be placed in the routing queue for submission to the NORMAL or LARGEMEMORY batch execution queue, which most likely will entail a longer wait time for your job.
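
For reference, a minimal DEBUG job script might look like the following sketch (the job name and executable are hypothetical; resource requests must stay within the DEBUG limits listed above):

      #!/bin/bash
      #PBS -q debug
      #PBS -l nodes=1:ppn=4,walltime=00:30:00
      #PBS -N debug_test
      cd $PBS_O_WORKDIR
      # Run the code under test (hypothetical executable)
      ./my_test_program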

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request such single user time, complete and submit the Research Technologies Ask RT for Help form, requesting to run jobs in single user time on HPS systems. If you have questions, email the HPS team.

Support for this system is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups. If you have system-specific questions, contact the HPS group. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on this system, contact the SciAPT group.

For more, see Carbonate at Indiana University.

Karst

Karst employs a default routing queue that funnels jobs, according to their resource requirements, into three execution queues configured to maximize job throughput and minimize wait times (i.e., the amount of time a job remains queued, waiting for required resources to become available). Depending on the resource requirements specified in either your batch job script or your qsub command, the routing queue (BATCH) automatically places your job into the SERIAL, NORMAL, or LONG queue.
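
For example, a submission such as the following (with a hypothetical script job.sh and illustrative values) states the wall time and node requirements the routing queue uses to select the appropriate execution queue:

      qsub -l nodes=2:ppn=16,walltime=24:00:00 job.sh
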

Note:
The maximum wall time allowed for jobs running on Karst is 14 days. If your job requires more than 14 days of wall time, email the High Performance Systems group for assistance.

You do not have to specify a queue in your job script or in your qsub command to submit your job to one of the three batch execution queues; your job will run in the SERIAL, NORMAL, or LONG queue unless you specifically submit it to the DEBUG, PREEMPT, or INTERACTIVE queue, the properties of which are as follows:

  • DEBUG: The DEBUG queue is intended for short, quick-turnaround test jobs requiring less than 1 hour of wall time.
    Maximum wall time: 1 hour
    Maximum nodes per job: 4
    Maximum cores per job: 64
    Maximum number of jobs per queue: None
    Maximum number of jobs per user: 2
    Direct submission: Yes

    To submit a batch job to the DEBUG queue, either add the #PBS -q debug directive to your job script, or enter qsub -q debug on the command line.

    Note:
    For longer debugging or testing sessions, submit an interactive job to the INTERACTIVE queue instead.
  • INTERACTIVE: Interactive jobs submitted to the INTERACTIVE queue should experience less wait time (i.e., start sooner) than interactive jobs submitted to the batch execution queues.
    Maximum wall time: 8 hours
    Maximum nodes per job: None
    Maximum cores per job: 8
    Maximum number of jobs per queue: 128
    Maximum number of jobs per user: 16
    Direct submission: Yes

    To submit an interactive job to the INTERACTIVE queue, on the command line, enter qsub with the -I and -q interactive options added; for example:

      qsub -I -q interactive -l nodes=1:ppn=1,walltime=4:00:00
    
    Note:
    If you enter qsub without the -q interactive option, your interactive job will be placed in the routing queue for submission to the SERIAL, NORMAL, or LONG batch execution queue, which most likely will entail a longer wait time for your job.
  • PREEMPT: Jobs submitted to the PREEMPT queue run on "condominium nodes" owned by members of the "condominium computing" service; however, when a job submitted by a "condominium node" owner is ready to dispatch (and no other nodes are available), the non-condominium job with the lowest accrued wall time will be preempted. Consequently, non-condominium jobs in the PREEMPT queue may dispatch multiple times before running to completion.
    Maximum wall time: 14 days
    Maximum nodes per job: None
    Maximum cores per job: None
    Maximum number of jobs per queue: 1,800
    Maximum number of jobs per user: 200
    Direct submission: Yes

    To submit a job to the PREEMPT queue, add the #PBS -q preempt directive to your job script, or enter qsub -q preempt on the command line; a sample script follows this list.
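
For reference, a PREEMPT job script might look like the following sketch (names and values are illustrative; because a PREEMPT job may be preempted and re-dispatched, the application should checkpoint its progress where possible):

      #!/bin/bash
      #PBS -q preempt
      #PBS -l nodes=1:ppn=16,walltime=168:00:00
      #PBS -N preempt_job
      cd $PBS_O_WORKDIR
      # Hypothetical executable that saves and resumes from checkpoints
      ./my_restartable_program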

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request such single user time, complete and submit the Research Technologies Ask RT for Help form, requesting to run jobs in single user time on HPS systems. If you have questions, email the HPS team.

Support for this system is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups. If you have system-specific questions, contact the HPS group. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on this system, contact the SciAPT group.

For more, see Karst at Indiana University.

Mason

Note:
Mason, Indiana University's large memory computer cluster, will be retired on January 1, 2018. For more, see About the Mason retirement.

The BATCH queue is the default, general-purpose queue on Mason. The default wall time is one hour; the maximum is 14 days (two weeks). If your job requires more than 14 days of wall time, email the High Performance Systems group for assistance.
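
Because the default wall time is only one hour, specify your job's actual requirement explicitly; for example, with a hypothetical script job.sh:

      qsub -l nodes=1:ppn=4,walltime=72:00:00 job.sh
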

Although UITS Research Technologies cannot provide dedicated access to an entire compute system during the course of normal operations, "single user time" is made available by request one day a month during each system's regularly scheduled maintenance window to accommodate IU researchers with tasks requiring dedicated access to an entire compute system. To request such single user time, complete and submit the Research Technologies Ask RT for Help form, requesting to run jobs in single user time on HPS systems. If you have questions, email the HPS team.

Support for this system is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups. If you have system-specific questions, contact the HPS group. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on this system, contact the SciAPT group.

Note:
If you have an Extreme Science and Engineering Discovery Environment (XSEDE) allocation on Mason and need help, or have questions about using Mason, contact the XSEDE Help Desk, or consult the Indiana University Mason User Guide on the XSEDE User Portal.

For more, see Mason at Indiana University.

This is document bdkd in the Knowledge Base.
Last modified on 2017-07-19 08:45:16.
