ARCHIVED: Common commands in TORQUE and SGE

This content has been archived, and is no longer maintained by Indiana University. Information here may no longer be accurate, and links may no longer be available or reliable.

High-performance computing systems at Indiana University use the following resource management packages for submitting and monitoring jobs:

  • Big Red II, Karst, and Mason use TORQUE (based on OpenPBS).

    Note: After seven years of production service, Indiana University's Quarry research computing cluster was decommissioned January 30, 2015. IU students, faculty, and staff can request accounts on Karst, IU's newest high-throughput computing cluster; for instructions, see Requesting an account. User data on Quarry has not been removed; you can access your old Quarry home directory data from your account on any other IU research computing resource (see Available access to allocated and short-term storage capacity on IU's research systems). All software modules that were available on Quarry are available on Karst. If you have questions or concerns about Quarry's retirement, or need help, contact the High Performance Systems group.

  • Rockhopper uses the Sun Grid Engine (SGE).

To compare the two resource managers, see the following tables:

Common commands

                           TORQUE command                SGE command
  Job submission           qsub [scriptfile]             qsub [scriptfile]
  Job deletion             qdel [job_id]                 qdel [job_id]
  Job status (for user)    qstat -u [username]           qstat [-j job_id]
  Extended job status      qstat -f [job_id]             qstat -f [-j job_id]
  Hold a job temporarily   qhold [job_id]                qhold [job_id]
  Release job hold         qrls [job_id]                 qrls [job_id]
  List of usable queues    qstat -Q                      qconf -sql
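As a sketch of a typical session, assuming a hypothetical job script named job.sh and a hypothetical job ID of 12345, the commands above are used the same way on either system:

```shell
# Submit a job script; the scheduler prints the new job's ID.
qsub job.sh

# Check the status of your jobs (TORQUE syntax shown; on SGE,
# plain `qstat` lists your own jobs by default).
qstat -u username

# Temporarily hold job 12345, then release the hold.
qhold 12345
qrls 12345

# Delete job 12345 if it is no longer needed.
qdel 12345
```

These commands only work on a system running the corresponding scheduler; the job ID printed by qsub is what you pass to the status, hold, and delete commands.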

Note: TORQUE (Big Red II, Karst, and Mason) relies on Moab to dispatch jobs; SGE (Rockhopper) does not. For a list of useful Moab commands, see ARCHIVED: Common Moab scheduler commands.


Environment variables

                           TORQUE variable               SGE variable
  Submission directory     $PBS_O_WORKDIR                $SGE_O_WORKDIR
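Batch jobs typically start in your home directory rather than the directory you submitted from, so a job script often begins by changing to the submission directory. A minimal sketch for each system:

```shell
# TORQUE: change to the directory the job was submitted from.
cd $PBS_O_WORKDIR

# SGE equivalent:
cd $SGE_O_WORKDIR
```

Both variables are set by the scheduler at job start; they are not defined in an ordinary login shell.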


Resource specifications

                           TORQUE directive                   SGE directive
  Queue                    #PBS -q [queue]                    #$ -q [queue]
  Nodes                    #PBS -l nodes=[#]                  n/a
  Processors               #PBS -l ppn=[#]                    #$ -pe ompi [#]
  Wall clock limit         #PBS -l walltime=[hh:mm:ss]        #$ -l time=[hh:mm:ss]
  Standard output file     #PBS -o [file]                     #$ -o [path]
  Standard error file      #PBS -e [file]                     #$ -e [path]
  Copy environment         #PBS -V                            #$ -V
  Notification events      #PBS -m abe                        #$ -m abe
  Email address            #PBS -M [email]                    #$ -M [email]
  Job name                 #PBS -N [name]                     #$ -N [name]
  Job restart              #PBS -r [y|n]                      #$ -r [yes|no]
  Initial directory        n/a                                #$ -wd [directory]
  Node usage               #PBS -l naccesspolicy=singlejob    n/a
  Memory requirement       #PBS -l mem=[#]mb                  #$ -mem [#]G
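Putting the directives together, a minimal TORQUE job script might look like the following. The queue name, job name, email address, and program name are placeholders, not values specific to any IU system:

```shell
#!/bin/bash
#PBS -q batch                    # queue name (placeholder)
#PBS -l nodes=2,ppn=16           # 2 nodes, 16 processors per node
#PBS -l walltime=01:00:00        # one-hour wall clock limit
#PBS -N example_job              # job name
#PBS -o example.out              # standard output file
#PBS -e example.err              # standard error file
#PBS -m abe                      # mail on abort, begin, and end
#PBS -M user@example.com         # where to send notifications

cd $PBS_O_WORKDIR                # start in the submission directory
./my_program                     # placeholder for your executable
```

A rough SGE counterpart uses the same layout with #$ directives; note that SGE requests a total slot count through a parallel environment rather than a node count:

```shell
#!/bin/bash
#$ -q batch                      # queue name (placeholder)
#$ -pe ompi 32                   # 32 slots in the ompi parallel environment
#$ -l time=01:00:00              # one-hour wall clock limit
#$ -N example_job                # job name
#$ -o example.out                # standard output path
#$ -e example.err                # standard error path
#$ -m abe                        # mail on abort, begin, and end
#$ -M user@example.com           # where to send notifications

cd $SGE_O_WORKDIR                # start in the submission directory
./my_program                     # placeholder for your executable
```

Either script is submitted with qsub [scriptfile]; the scheduler reads the directive lines as comments, so the script also runs as an ordinary shell script.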

This is document avgl in the Knowledge Base.
Last modified on 2018-01-18 15:30:52.