Mason at Indiana University
On this page:
- System overview
- System information
- File systems (storage for IU users)
- System access
- Available software
- Computing environment
- Transferring your files to Mason
- Application development
- Running your applications
System overview

Mason (mason.indiana.edu) at Indiana University is a large-memory computer cluster configured to support data-intensive, high-performance computing tasks for researchers using genome assembly software (particularly software suitable for assembly of data from next-generation sequencers), large-scale phylogenetic software, or other genome analysis applications that require large amounts of computer memory. At IU, Mason accounts are available to IU faculty, postdoctoral fellows, research staff, and students involved in genome research. IU educators providing instruction on genome analysis software, and developers of such software, are also welcome to use Mason. IU has also made Mason available to genome researchers from the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) project.
Mason consists of 16 Hewlett Packard DL580 servers, each containing four Intel Xeon L7555 8-core processors and 512 GB of RAM. The total RAM in the system is 8 TB. Each server chassis has a 10-gigabit Ethernet connection to the other research systems at IU and the XSEDE network (XSEDENet).
The Mason nodes run Red Hat Enterprise Linux 6.0. Job management is provided by the TORQUE resource manager (for more, see Running your applications below) in combination with the Moab job scheduler. Mason employs the Modules system to simplify application and environment configuration (for more, see Computing environment below).
Note: The scheduled monthly maintenance window for Mason is the first Tuesday of each month, 7am-7pm.
System information

| System configuration | Aggregate information | Per-node information |
|---|---|---|
| Machine type | High-performance computing (data-intensive computing) | |
| Operating system | Red Hat Enterprise Linux 6 | |
| CPUs | 64 Intel Xeon L7555 8-core processors | 4 Intel Xeon L7555 8-core processors |
| Nodes | 16 Hewlett Packard DL580 servers | |
| RAM | 8 TB | 512 GB |
| Network | 10-gigabit Ethernet per node | |
| Local storage | 8 TB | 500 GB |

| Computational system details | Total | Per node |
|---|---|---|
| Processing capability | Rmax = 4,302 gigaFLOPS | Rmax = 239 gigaFLOPS |
| Benchmark data | HPL: 3,129.75 gigaFLOPS | HPL: 222.22 gigaFLOPS |
| Power usage | 0.000153 teraFLOPS per watt | 0.000173 teraFLOPS per watt |
| Login nodes | Rmax = 478 gigaFLOPS | Rmax = 239 gigaFLOPS |
| Homogeneous compute nodes | Rmax = 3,824 gigaFLOPS | Rmax = 239 gigaFLOPS |
| Heterogeneous compute nodes (GPU or other accelerator) | n/a | n/a |
File systems (storage for IU users)
| File system | Total disk space | Quotas | Backups |
|---|---|---|---|
| Home directories: User home directories reside on a network-attached storage (NAS) device. The path to your home directory on Mason includes your Network ID username (e.g., /N/hd00/username/Mason). | 8 TB | 10 GB; shared, if applicable, between your accounts on Big Red II, Quarry, and the Research Database Complex (RDC) | The home directory file system on Mason takes "hourly" and "nightly" snapshots, and saves them to hidden .snapshot directories. |
| Local scratch space | 450 GB | n/a | Local scratch space is not intended for permanent storage of data, and is not backed up. Files are automatically deleted once they are 14 days old. |
| Shared scratch space: Shared scratch space is hosted on the Data Capacitor. The Data Capacitor scratch directory is a temporary workspace; scratch space is not allocated, and its total capacity fluctuates based on project space requirements. The Data Capacitor is mounted on IU research systems. | 427 TB | n/a | Shared scratch space is not intended for permanent storage of data, and is not backed up. Files in scratch space may be purged if they have not been accessed for more than 60 days. |
IU graduate students, faculty, and staff who need more than 10 GB of permanent storage can apply for accounts on the Research File System (RFS) and the Scholarly Data Archive (SDA). See At IU, how can I apply for an account on the SDA or RFS?
Note: Indiana University is in the process of replacing its current Data Capacitor storage system with Data Capacitor II (DC2), a high-speed, high-capacity storage facility for very large data sets. The DC2 file system (/N/dc2/scratch) is currently mounted on Big Red II, Quarry, and Mason. Both systems (Data Capacitor and DC2) will be available until December 3, 2013, when any data remaining on the Data Capacitor scratch file system (/N/dc/scratch) will be set to read-only. On January 7, 2014, any data remaining on the Data Capacitor will be deleted, and the system will be decommissioned. Project directories on the Data Capacitor will be migrated to DC2 (and their owners contacted) by UITS. Scratch directory data will not be automatically migrated to DC2. You should move any critical data stored in your Data Capacitor scratch directory to the Scholarly Data Archive (SDA) as soon as possible. For instructions on using the Hierarchical Storage Interface (HSI) application to move your data, see At IU, how do I use HSI to access my SDA account? If you have questions, email the High Performance File Systems group.
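For example, a minimal sketch of such a transfer (the directory and file names are placeholders, and this assumes the hsi client is available in your environment):

# Bundle a Data Capacitor scratch directory, then copy the archive
# to your SDA account with HSI (paths shown are placeholders).
cd /N/dc/scratch/username
tar -czf myproject.tar.gz myproject/
hsi put myproject.tar.gz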
System access

Although Mason is an IU resource dedicated to genome analysis research, access to the cluster is not restricted to IU researchers:
- At IU, students, faculty, and staff can request accounts on Mason
via the Account Management
Service (AMS); see At IU, if I already have some computing accounts, how do I get others?
- NSF-funded life sciences researchers at other institutions can
apply to the National Center for Genome Analysis Support (NCGAS) allocations committee to request
access to Mason. To request an allocation, submit the NCGAS
Allocations Request Form. If you have questions, email NCGAS.
- Access to Mason is also available to Extreme Science and Engineering Discovery Environment (XSEDE) researchers through the normal XSEDE allocation process.
For information about your responsibilities as a user of this resource, see:
- IU and NCGAS users: What are my responsibilities as a computer user at IU?
- XSEDE users: What are my responsibilities as an XSEDE user?
Logging into Mason
IU users use their IU Network IDs to log into Mason.
Note: Mason login nodes use the IU Active Directory Service for user authentication. As a result, local passwords/passphrases are not supported. For information about changing your ADS passphrase, see At IU, how do I change my Network ID passphrase? For helpful information regarding secure passphrases, see Passwords and passphrases.
Researchers with NCGAS allocations authenticate with the credentials associated with their NCGAS allocations.
Researchers with XSEDE allocations authenticate with their XSEDE-wide logins.
Use of public key authentication is also permitted on Mason. For more, see In SSH and SSH2 for Unix, how do I set up public key authentication?
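For example, a minimal sketch using a standard OpenSSH client on your workstation (the key type and file locations are the OpenSSH defaults):

# Generate an SSH2 key pair; choose a strong passphrase when prompted.
ssh-keygen -t rsa

# Append your public key to ~/.ssh/authorized_keys on Mason.
ssh-copy-id username@mason.indiana.edu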
Methods of access
For IU and NCGAS users: Interactive access to Mason is provided via SSH only. Use an SSH2 client to connect to mason.indiana.edu, which will resolve to one of the following login nodes:

h1.mason.indiana.edu
h2.mason.indiana.edu
XSEDE users: Use GSI-SSH from:
- The terminal applet in the XSEDE User Portal
- The Single Sign-On Login Hub
- One of the GSI-SSH desktop clients
Alternatively, if you're connected to another XSEDE system via SSH (i.e., not using one of the GSI-SSH methods) you can connect to Mason directly using GSI-SSH and a MyProxy certificate. For example, from Trestles (SDSC):
- Make sure the globus module is loaded:
  [dartmaul@trestles-login2 ~]$ module load globus
- Get a certificate from the MyProxy server; when prompted, provide your XSEDE-wide password:
  [dartmaul@trestles-login2 ~]$ myproxy-logon -s myproxy.teragrid.org
  Enter MyProxy pass phrase: ****************
  A credential has been received for user dartmaul in /tmp/x509up_p13346.fileGzdqtd.1.
- Use GSI-SSH to connect to your XSEDE account on Mason (replace username with your XSEDE username):
  [dartmaul@trestles-login2 ~]$ gsissh username@mason.indiana.edu
Additionally, XSEDE users are permitted to connect via SSH using public key authentication. For more, see Access Resources on the XSEDE User Portal.
Available software

Software installed on Mason is made available to users via Modules, an environment management system that lets you easily and dynamically add software packages to your user environment. For a list of software modules available on Mason, see Mason Modules in the IU Cyberinfrastructure Gateway.
For more on Modules, see Modules below.
For more on the IU Cyberinfrastructure Gateway, see What is the IU Cyberinfrastructure Gateway?
Note: Users can install software in their home directories on Mason and request the installation of software for use by all users on the system. Only faculty or staff can request software. If students require software packages on Mason, their advisors must request them. For more, see At IU, what is the policy about installing software on Mason?
Computing environment

The shell is the primary method of interacting with the Mason cluster. The command-line interface provided by the shell lets users run built-in commands, utilities installed on the system, and even short ad hoc programs.
Mason supports the Bourne-again shell (bash) and the TC shell (tcsh). New user accounts are assigned the bash shell by default. For more on bash, see the Bash Reference Manual and the Bash (Unix shell) Wikipedia page.
To change your shell on Mason, use the changeshell command. The standard chsh command (instead of changeshell) changes your shell only on the node on which you run it, and leaves the other nodes of the cluster unchanged; changeshell prompts you with the shells available on the system, and changes your login shell system-wide within 15 minutes.
The shell uses environment variables primarily to modify shell behavior and the operation of certain commands. A good example is the PATH variable.
When the shell parses a command you have entered (i.e., after you press Return), it interprets certain words you've typed as program files that should be executed. The shell then searches various directories on the system to locate these files. The PATH variable determines which directories are searched, and the order in which they are searched. In the bash shell, the PATH variable is a colon-separated string of directories (e.g., /bin:/usr/bin:/usr/local/bin). With that value, the shell searches for an executable file first in the /bin directory, then the /usr/bin directory, and finally the /usr/local/bin directory. If files of the same name (e.g., foo) exist in all three directories, /bin/foo will be run, because the shell will find it first.
To display and change the values of environment variables:

| Shell | Display a value | Change a value |
|---|---|---|
| bash | echo $VARIABLE | export VARIABLE=value |
| tcsh | echo $VARIABLE | setenv VARIABLE value |
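For example, in bash (the ~/bin directory is a hypothetical addition, not a Mason default):

# Show the current search path.
echo $PATH

# Prepend ~/bin so executables there are found before system copies.
export PATH=$HOME/bin:$PATH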
Shells offer much flexibility in terms of startup configuration. On login, bash by default reads and executes commands from the following files (in this order; of the last three, only the first one found is read):

- /etc/profile
- ~/.bash_profile
- ~/.bash_login
- ~/.profile

Here, ~ (tilde) represents your home directory (e.g., ~/.bash_profile is the .bash_profile file in your home directory). On logout, the shell reads and executes ~/.bash_logout. For more on these files, see the "Bash Startup Files" section of the Bash Reference Manual.

On login, the tcsh shell reads and executes commands from the following files (also in this order):

- /etc/csh.cshrc
- /etc/csh.login
- ~/.tcshrc (or ~/.cshrc)
- ~/.login

In practice, on Mason, only the first two files exist. You may create the others, and add commands and variables to them as you see fit. A sketch of one such file follows this paragraph.
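For example, here is a minimal sketch of a ~/.bash_profile; the module name and the personal bin directory are illustrative placeholders, not Mason defaults:

# ~/.bash_profile -- read by bash at login.

# Pull in per-shell settings if present.
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# Automatically load frequently used software (module name illustrative).
module load intel/11.1

# Put a personal bin directory (hypothetical path) on the search path.
export PATH=$HOME/bin:$PATH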
Mason uses the Modules package to provide a convenient method for dynamically modifying your environment.
Some common Modules commands include:

| Command | Action |
|---|---|
| module avail | List all software packages available on the system. |
| module avail package | List all available versions of the specified package; for example: module avail openmpi |
| module list | List all packages currently loaded in your environment. |
| module load package | Add the specified package to your environment; for example: module load intel/11.1 (to load the default version of the package, omit the version number) |
| module unload package | Remove the specified package from your environment. |
| module swap package_A package_B | Swap the loaded package (package_A) with another package (package_B). This is synonymous with: module switch package_A package_B |
| module show package | Show what changes will be made to your environment (e.g., paths to libraries and executables) by loading the specified package. This is synonymous with: module display package |
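For example, a typical Modules session might look like this (the package names and versions shown are illustrative):

# See which versions of a package are available.
module avail openmpi

# Load a specific compiler version, then confirm what is loaded.
module load intel/11.1
module list

# Replace the Intel package with GCC.
module swap intel gcc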
For information about using Modules on IU research systems, see On Big Red II, Mason, Quarry, and Rockhopper at IU, how do I use Modules to manage my software environment? For information about using Modules on XSEDE digital services, see On XSEDE, how do I manage my software environment using Modules?
Transferring your files to Mason
Mason supports SCP and SFTP for transferring files:
- SCP: The SCP command-line utility is included with OpenSSH. Basic use is:
  scp username@host1:file1 username@host2:file2
  For example, to copy foo.txt from the current directory on your computer to your home directory on Mason, use (replacing username with your username):
  scp foo.txt username@mason.indiana.edu:foo.txt
  You may specify absolute paths or paths relative to your home directory:
  scp foo.txt username@mason.indiana.edu:some/path/for/data/foo.txt
You also may leave the destination filename unspecified, in which case it will become the same as the source filename. For more, see In Unix, how do I use SCP to securely transfer files between two computers?
- SFTP: SFTP clients provide file access, transfer, and management, and offer functionality similar to FTP clients. For example, using a command-line SFTP client (e.g., from a Linux or Mac OS X workstation), you could transfer files as follows:
  $ sftp username@mason.indiana.edu
  username@mason.indiana.edu's password:
  Connected to mason.indiana.edu.
  sftp> ls -l
  -rw------- 1 username group 113 May 19 2011 loadit.pbs.e897
  -rw------- 1 username group 695 May 19 2011 loadit.pbs.o897
  -rw-r--r-- 1 username group 693 May 19 2011 local_limits
  sftp> put foo.txt
  Uploading foo.txt to /N/hd00/username/Mason/foo.txt
  foo.txt 100% 43MB 76.9KB/s 09:39
  sftp> exit
  $
Additionally, XSEDE researchers can use GridFTP (via globus-url-copy or Globus Online) to securely move data to and from Mason's GridFTP endpoint.
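As an illustration only (the endpoint hostname below is a placeholder, not Mason's actual GridFTP endpoint), a globus-url-copy transfer of a local file to DC2 scratch space might look like:

# "gridftp.mason.example.org" is a placeholder hostname; substitute the
# actual endpoint, and adjust paths to match your account.
globus-url-copy -vb file:///home/username/data.tar gsiftp://gridftp.mason.example.org/N/dc2/scratch/username/data.tar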
For more on XSEDE data transfers, see What data transfer methods are supported on XSEDE, and where can I find more information about data transfers?
Application development

Mason is designed to support codes that have extremely large memory requirements. Because these codes typically do not implement a distributed-memory model, Mason is geared toward serial or shared-memory parallel programming paradigms; however, it can also support distributed-memory parallelism.
The GNU Compiler Collection (GCC) is added by default to your user environment on Mason. The Intel and Portland Group (PGI) compiler collections, and the Open MPI and MPICH wrapper compilers, are also available.
For the Intel compilers, the recommended optimization options are -O2 and -xHost (the -xHost option will optimize based on the processor of the current host). For the GCC compilers, the -mtune=native and -march=native options are recommended to generate instructions tuned to the machine and CPU type.
- To compile the C program simple.c:
  - With the GCC compiler:
    gcc -O2 -mtune=native -march=native -o simple simple.c
  - With the Intel compiler:
    icc -o simple simple.c
- To compile the Fortran program simple.f:
  - With the GCC compiler:
    gfortran -O2 -o simple simple.f
  - With the Intel compiler:
    ifort -O2 -o simple -lm simple.f
- To compile the C program simple.c with the MPI wrapper script:
  mpicc -o simple simple.c
- To compile the Fortran program simple.f with the MPI wrapper script:
  mpif90 -O2 -o simple simple.f
- To use the GCC C compiler to compile simple.c to run in parallel using OpenMP:
  gcc -O2 -fopenmp -o simple simple.c
- To use the Intel Fortran compiler to compile simple.f to run in parallel using OpenMP:
  ifort -openmp -o simple -lm simple.f
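To run one of the OpenMP executables built above, set the OMP_NUM_THREADS environment variable before launching it; for example (eight threads is an arbitrary choice; each Mason node has 32 cores):

# Run the OpenMP binary with eight threads.
export OMP_NUM_THREADS=8
./simple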
Both the Intel Math Kernel Library (MKL) and the AMD Core Math Library (ACML) are available on Mason.
Both the Intel Debugger (IDB) and the GNU Project Debugger (GDB) are available on Mason.
For information about using the IDB, see the Intel IDB page.
For information about using the GDB, see the GNU GDB page. For an example, see Step-by-step example for using GDB within Emacs to debug a C or C++ program.
Running your applications
CPU/Memory limits and batch jobs
User processes on the login nodes are limited to 20 minutes of CPU time. Processes exceeding this limit are automatically terminated without warning. If you require more than 20 minutes of CPU time, use the qsub command to submit a batch job (see Submitting a job below).
Implications of these limits on the login nodes are as follows:
- The Java Virtual Machine must be invoked with a maximum heap size. Because of the way Java allocates memory, under ulimit conditions an error will occur if Java is invoked without a maximum heap size specified (e.g., via the -Xmx option).
- Memory-intensive jobs started on the login nodes will be killed almost immediately. Debugging and testing on Mason should instead be done by submitting a request for an interactive job via the batch system; for example:
  qsub -I -q shared -l nodes=1:ppn=4,vmem=10gb,walltime=4:00:00
The interactive session will start as soon as the requested resources are available.
- A job is an instance of an application you wish to run.
- A queue is a pool of compute resources that accepts jobs to run, and executes them according to a fairshare policy.
- A job scheduler is the application responsible for scheduling jobs.
The BATCH queue is the default, general-purpose queue on Mason. The default walltime is one hour; the maximum limit is two weeks. If your job requires more than two weeks of walltime, email the High Performance Systems group for assistance.
The Moab job scheduler uses fairshare scheduling to track usage and prioritize jobs. For information on fairshare scheduling and using Moab to check the status of your jobs on Mason, see What is Moab? For a summary of commands, see Common Moab scheduler commands.
Submitting a job
To submit a job to run on Mason, use the TORQUE qsub command. If the command exits successfully, it will return a job ID.

If you need attribute values different from the defaults, but less than the maximum allowed, specify them either in the job script using TORQUE directives, or on the command line with the -l switch. For example, to submit a job that needs more than the default 60 minutes of walltime, use:

qsub -l walltime=10:00:00 job.script

Jobs on Mason default to a per-job virtual memory resource of 8 MB. So, for example, to submit a job that needs 100 GB of virtual memory, use:

qsub -l nodes=1:ppn=4,vmem=100gb job.script
Note: Command-line arguments override directives in the job script, and you may specify many attributes on the command line, either as comma-separated options following a single -l switch, or each with its own -l switch. The following two commands are equivalent:

qsub -l nodes=1:ppn=4,vmem=100gb job.script
qsub -l nodes=1:ppn=4 -l vmem=100gb job.script
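Alternatively, the same resource requests can live in the job script itself as TORQUE directives. Below is a minimal sketch of such a script; the job name, module name, and executable are illustrative placeholders, not Mason-specific values:

#!/bin/bash
#PBS -l nodes=1:ppn=4,vmem=100gb,walltime=4:00:00
#PBS -N myjob
#PBS -m e

# Load the software the job needs (module name is illustrative).
module load gcc

# TORQUE starts jobs in your home directory; change to the
# directory from which the job was submitted.
cd $PBS_O_WORKDIR

# Run the application (placeholder executable).
./simple

Submit the script with qsub job.script; any switches you add on the command line will override the directives in the script.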
qsub options include:

| Option | Action |
|---|---|
| -a date_time | Execute the job only after the specified date and time. |
| -I | Run the job interactively. (Interactive jobs are forced to be not re-runnable.) |
| -m e | Mail a job summary report when the job terminates. |
| -q queue | Specify the destination queue for the job. (Not applicable on Mason.) |
| -r | Declare whether the job is re-runnable; use the argument y if it is, or n if it is not. |
| -V | Export all environment variables in your current environment to the job. |

For more about the qsub command in TORQUE, see the Adaptive Computing qsub page and the qsub manual page.
Monitoring a job
To monitor the status of a queued or running job, use the TORQUE qstat command. Useful options include:

| Option | Action |
|---|---|
| -a | Display all jobs. |
| -f | Write a full status display to standard output. |
| -n | List the nodes allocated to a job. |
| -r | Display jobs that are running. |
| -u user_list | Display jobs owned by the specified users. |

For more about the qstat command in TORQUE, see the Adaptive Computing qstat page and the qstat manual page.
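For example, to list your own jobs along with the nodes they occupy (replace username with your username):

qstat -n -u username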
Deleting a job
To delete a queued or running job, use the TORQUE qdel command; for example (replace job_id with the ID returned when you submitted the job):

qdel job_id

Occasionally, a node will become unresponsive and unable to respond to the TORQUE server's requests to kill a job. In such cases, try qdel -W <delay> to override the delay between the SIGTERM and SIGKILL signals (for <delay>, specify a value in seconds).

For more about the qdel command in TORQUE, see the Adaptive Computing qdel page and the qdel manual page.
For IU and NCGAS users
Support for IU and NCGAS users is provided by the UITS High Performance Systems (HPS) and Scientific Applications and Performance Tuning (SciAPT) groups, and by the National Center for Genome Analysis Support (NCGAS):
- If you have system-specific questions about Mason, email the HPS group.
- If you have questions about compilers, programming, scientific/numerical libraries, or debuggers on Mason, email the SciAPT group.
- If you need help installing software packages in your home directory on Mason, email NCGAS.
For XSEDE users
For more about XSEDE compute, advanced visualization, storage, and special purpose systems, see the Resources Overview, Systems Monitor, and User Guides. For scheduled maintenance windows, outages, and other announcements related to XSEDE digital services, see User News.
This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
Last modified on September 12, 2013.