About batch jobs

A batch job is a computer program or set of programs processed in batch mode. This means that a sequence of commands to be executed by the operating system is listed in a file (often called a batch file, command file, job script, or shell script) and submitted for execution as a single unit. The opposite of batch processing is interactive processing, in which a user enters individual commands to be processed immediately.

In many cases, batch jobs accumulate during working hours and are then executed during the evening or at another time when the computer is idle. This is often the best way to run programs that place heavy demands on the computer.

On high-performance compute clusters, users typically submit batch jobs to pre-defined groups of compute nodes (called queues or partitions) that are managed by a resource management application. Some clusters employ a separate job scheduler to dispatch batch jobs based on the availability of compute resources, job requirements specified by users, and usage policies set by cluster administrators.

At Indiana University, Big Red II, Carbonate, and Karst use TORQUE for submitting and monitoring jobs, and the Moab Workload Manager for dispatching jobs. Big Red 3 and the Carbonate deep learning nodes use Slurm to coordinate resource management and job scheduling.
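
As a rough illustration (not taken from IU's documentation), the sketch below shows a minimal TORQUE job script and a roughly equivalent Slurm script. The job name my_job, the queue/partition name general, and the program ./my_program are placeholders; the actual queue names, resource limits, and any module setup depend on the cluster.

  #!/bin/bash
  # Minimal TORQUE job script (placeholder names; adjust for your cluster)
  #PBS -N my_job
  #PBS -q general
  #PBS -l nodes=1:ppn=1
  #PBS -l walltime=01:00:00

  # TORQUE starts the job in your home directory; move to the submit directory
  cd "$PBS_O_WORKDIR"
  ./my_program

Submit the script with qsub and check its status with qstat (for example, qsub myjob.pbs and qstat -u username). A roughly equivalent Slurm script looks like this:

  #!/bin/bash
  # Minimal Slurm job script (placeholder names; adjust for your cluster)
  #SBATCH -J my_job
  #SBATCH -p general
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=1
  #SBATCH --time=01:00:00

  # Slurm starts the job in the submit directory by default
  ./my_program

Submit the script with sbatch and check its status with squeue (for example, sbatch myjob.sh and squeue -u username). In both cases, the scheduler holds the job in the queue until the requested resources become available, then runs the script on the allocated compute node(s).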

Note:

For your batch job to run properly on Big Red II, your TORQUE job script must be tailored specifically for the Cray Linux Environment; TORQUE scripts for running jobs on systems running other Linux distributions, such as Red Hat Enterprise Linux (RHEL) or CentOS, will not work on Big Red II without modifications.
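
As a hedged sketch of the kind of change involved (the queue name, core count, and program name below are placeholders, not values from IU's documentation): on the Cray Linux Environment, programs intended for the compute nodes must be launched with aprun. A script written for a RHEL or CentOS cluster that runs its executable directly, or with mpirun, would instead execute it on a shared service node.

  #!/bin/bash
  # TORQUE script tailored for a Cray system (placeholder values)
  #PBS -N my_job
  #PBS -q cpu
  #PBS -l nodes=1:ppn=32
  #PBS -l walltime=01:00:00

  cd "$PBS_O_WORKDIR"

  # On a conventional RHEL/CentOS cluster this line might be
  #   mpirun -np 32 ./my_program
  # On the Cray Linux Environment, aprun launches the processes on the
  # allocated compute nodes instead of the shared service node:
  aprun -n 32 ./my_program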

For more about running batch jobs on Big Red II, see Run batch jobs on Big Red II.

This is document afrx in the Knowledge Base.
Last modified on 2019-10-14 04:32:20.
