Big Red at Indiana University
On this page:
- System information
- Available software
- Logging in
- Using SoftEnv to set up your software environment
- File storage options
- Compiling and running programs
- Parallel applications and message-passing libraries
- Submitting batch jobs to LoadLeveler
- Running interactive jobs
Note: Big Red is scheduled to be retired from service September 30, 2013. Indiana University is replacing it with Big Red II, the fastest university-owned supercomputer in the nation, capable of performing one quadrillion floating-point operations per second (1 petaflop). For more, see Big Red II at Indiana University. No new Big Red accounts will be created after May 3, 2013. If you have a Big Red account as of May 3, you will still be able to access and use Big Red until your account is migrated to Big Red II. If you have questions or concerns, contact the High Performance Systems group.

When commissioned in 2006, Big Red (bigred.teragrid.iu.edu) was one of the most powerful university-owned computers in the US, and one of the 50 fastest supercomputers in the world. Part of a comprehensive strategy to build an advanced cyberinfrastructure to support research at Indiana University, Big Red has a theoretical peak performance of more than 40 teraflops and has achieved more than 28 teraflops on numerical computations.
For more, see the Big Red home page.
System information

- Machine type: High-performance computing (massively parallel computing)
- Operating system: SuSE Linux Enterprise Server 9
- Memory model: Distributed and shared
- Processor cores: 4,096 compute cores (4 per node)
- CPUs: 2,048 compute CPUs; each node has 2 x 2.5 GHz dual-core PowerPC 970MP processors
- Nodes: 1,024 compute nodes (JS21 Bladeserver), 4 user nodes (JS21 Bladeserver), and 16 storage nodes (pSeries 505)
- Rpeak: 40 teraflops theoretical (1,024 nodes)
- Rmax: 21 teraflops benchmarked on 768 nodes with numerical computations
- RAM: 8,192 GB aggregate; 8 GB per node
- Storage: Connected via DataDirect Networks S2A9550 storage controllers, each dual-pathed to 5 SAF 4248 chassis
- Home directories: NFS-exported from a Network-Attached Storage (NAS) device; the 10 GB quota is shared by your accounts on Mason, Quarry, and the Research Database Complex (if you have them)
- Local scratch space: 73 GB Serial Attached SCSI (SAS) per node
- Shared scratch space: Hosted on the Data Capacitor
- Backup and purge policies: Files older than 60 days are periodically purged, following user notification.

Note: Indiana University will soon replace its current Data Capacitor with Data Capacitor II, a high-speed, high-capacity storage facility for very large data sets. With 5 PB of storage, Data Capacitor II will support big data applications used in computational research. IU partnered with DataDirect Networks, Inc. (DDN) to develop Data Capacitor II, which is scheduled to be installed in the IU Data Center in spring 2013. For more about Data Capacitor II, see the November 8, 2012, press release. If you have questions about how the change to Data Capacitor II will affect your research, email the High Performance File Systems group.
Available software

For a list of software installed on Big Red, see Big Red Software.
To request installation of a software package on Big Red, use the Research Systems Software Request form.
Logging in

Use SSH2 and your Network ID to access Big Red (bigred.teragrid.iu.edu). The default shell is bash; you can change your shell permanently if you prefer a different one.
Windows users: If you use an SSH client in
Windows, you cannot open tools that need a graphical user
interface (GUI), such as the TotalView debugger and the
Vampir-NG profiler. You'll need X Window emulation
software, such as Cygwin. UITS recommends using XLiveCD, created by the
Research Technologies division of UITS.
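For example, once an X server is available on your workstation, a minimal login sketch with X11 forwarding enabled (GUI tools such as TotalView need a working X display; replace username with your Network ID) might look like this:

my-host$ ssh -X username@bigred.teragrid.iu.edu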
Intra-cluster logins: When you log into your Big
Red account for the first time, passphrase-less SSH keys will be
automatically created in your home directory. Those keys should enable
you to log into compute nodes that you have gained access to through
LoadLeveler without entering a password or a passphrase. In
other words, parallel jobs should run seamlessly on multiple compute
nodes without any manual intervention.
However, you may see the following error message when you try to access LoadLeveler-assigned compute nodes:

Permission denied (publickey,password,keyboard-interactive)

This indicates that the intra-cluster RSA key pair in your home directory is either not present or corrupted. If this happens, enter gensshkeys; it will generate a passphrase-less key pair for you, allowing you seamless intra-cluster logins between any nodes in the cluster assigned for your use by LoadLeveler.
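As a quick, minimal check that the keys work (b509 is one of the interactive nodes described below, and hostname is simply a harmless test command), you should be able to reach another node without being prompted for a password:

username@BigRed:~> ssh b509 hostname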
Forwarding email address for job-related messages: Big Red will send email about your jobs to the address specified in the ~/.forward file in your home directory. (Note the period [.] preceding the filename.) By default, this is the email address you provided when you requested your account.

If you'd like to change this email address, enter a command similar to the following, replacing email@example.com with your email address:

hpctrn01@BigRed:~> echo "email@example.com" > ~/.forward
Be sure to use a valid email address; if you do not, you will not be notified about the status of your jobs.
Using SoftEnv to set up your software environment
SoftEnv, an environment management system, lets you customize your environment (i.e., specify the software packages you plan to use) using symbolic keywords. For information about using SoftEnv on Big Red, see On Big Red at IU, how can I use SoftEnv to customize my software environment?
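As a minimal sketch of the typical SoftEnv workflow (the +namd key name here is hypothetical; run softenv to see the keys actually available on Big Red):

username@BigRed:~> softenv                  # list available SoftEnv keys
username@BigRed:~> echo "+namd" >> ~/.soft  # add a (hypothetical) package key to your environment
username@BigRed:~> resoft                   # reread ~/.soft in the current session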
File storage options
You can store files in your home directory or in scratch space. For more, see At IU, how much disk space is available to me on the research systems?
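For example, a minimal sketch of checking your usage and staging large files in shared scratch rather than your 10 GB home directory (the project directory name is hypothetical; /N/dc/scratch is the Data Capacitor scratch area used in the NAMD example later on this page):

username@BigRed:~> du -sh ~                             # check home directory usage
username@BigRed:~> mkdir -p /N/dc/scratch/my_project    # hypothetical directory in shared scratch
username@BigRed:~> cp large_input.dat /N/dc/scratch/my_project/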
Compiling and running programs
For information about available compilers and how to use them, see Compiling programs on Big Red at IU.
Parallel applications and message-passing libraries
Big Red is configured for massively parallel computing; it is not structured for serial codes. A serial job uses only one of the four cores on a node, wasting 75 percent of the node's processor cores, and makes no use of the interconnect switch (which accounts for one third of the cost of the machine).
Using message passing
- Select message-passing packages using SoftEnv. See On Big Red at IU, how can I use SoftEnv to customize my software environment?
- You must link the library that's consistent with the address precision (32-bit or 64-bit) you chose for the compile.
- Once an MPI library is added, compiles are made through a wrapper to the IBM/GNU compiler that built the library. For example, mpif90 is actually just a wrapper to xlf90_r, and the same switches used by xlf90 are available to mpif90 (see the sketch after this list).
- MPICH (the original Argonne implementation):
  - MPICH 1 is available; MPICH 2 could be installed on request
  - Limited runtime environment
- Open MPI:
  - Replaced LAM (which is no longer developed)
  - Looks like MPICH to the user
  - Improving with each new release
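As a minimal sketch of building through the wrappers (the source file names are placeholders, and -q64 assumes you chose a 64-bit build; use the corresponding 32-bit settings otherwise):

username@BigRed:~> mpif90 -q64 -O3 -o hello_f hello.f90   # wrapper around xlf90_r
username@BigRed:~> mpicc -q64 -O3 -o hello_c hello.c      # the corresponding C wrapper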
Submitting batch jobs to LoadLeveler
To manage the multiple users, processors, and jobs running on the system, Big Red uses LoadLeveler to submit and monitor jobs. LoadLeveler relies on the Moab scheduler software for job scheduling, incorporating a fair-share mechanism based on the research system time each user has already consumed.
Users are not permitted to run jobs on the login nodes or on compute nodes outside of the LoadLeveler job submission system. If you submit a job outside of LoadLeveler and it uses more than 20 minutes of CPU time, it will be terminated. This also applies to Globus jobs that use jobmanager-fork.
Writing a job script
- Job scripts are divided into a keyword stanza and an execution section. If the script contains any lines other than keywords and the interpreter line (e.g., #!/bin/bash), the script is executed; otherwise, it is sourced.
- It's best to write your job in its own script file, and tell LoadLeveler to execute that.
- Preface all keywords with "# @".
- Keywords used in scripts include output, environment, class, initialdir, account_no, node_usage, node, job_type, checkpoint, and queue.
Following is an example of a script that runs a NAMD job:

# @ output = test_namd.$(Cluster).out
# @ environment = COPY_ALL
# @ class = NORMAL
# @ initialdir = /N/dc/scratch/namd_example
# @ account_no = NONE
# @ node_usage = not_shared
# @ node = 4
# @ job_type = MPICH
# @ checkpoint = no
# @ queue

mpirun -np $LOADL_TOTAL_TASKS -machinefile $LOADL_HOSTFILE namd2 apoa1.namd
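Assuming the script above is saved as test_namd.job (the file name is arbitrary), a typical submit-and-monitor sequence looks something like this:

username@BigRed:~> llsubmit test_namd.job    # submit the job to LoadLeveler
username@BigRed:~> llq -u $USER              # check the status of your queued and running jobs
username@BigRed:~> llcancel <job_id>         # cancel a job, if necessary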
For more, see Batch jobs on Big Red.
Running interactive jobs
Four compute blades, b509 through b512, are reserved for interactive
use on Big Red. Access is available via the Big Red login nodes, and
all users may log into these nodes at any time. They are intended
specifically for long-running (more than 20 minutes of CPU time)
interactive jobs, particularly interactive debugging
of parallel jobs run over the Myrinet interconnect.
To use the interactive nodes on Big Red, log into the cluster as you normally would, and then connect (via SSH) to one of these nodes:

b509
b510
b511
b512
For example:

my-host$ ssh bigred.teragrid.iu.edu
Password:
username@BigRed:~> hostname | host2blade
b519
username@BigRed:~> ssh b509
username@BigRed:~> hostname | host2blade
b509
These nodes are not running LoadLeveler, and are open for any and all users on the cluster. To run MPI jobs on them, you'll need to create a machine file containing the names of the nodes on which you want your job to run. For example, if you want to run an 8-processor job across two of the nodes, you could use a file containing these lines:

b511
b511
b511
b511
b512
b512
b512
b512
Then, log into any of the nodes (b512 would probably make the most sense in this example) and run your job with mpirun, pointing it at your machine file and <your MPI-linked binary>.
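A minimal sketch of the launch command (the task count and machine file name are placeholders; adjust them to match the file you created above):

username@BigRed:~> mpirun -np 8 -machinefile machines <your MPI-linked binary>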
User activity is not restricted on these nodes, so you may run into trouble if you request four tasks on a single node (e.g., another user may have one or more processes running on one or more of the nodes you've requested, using MX ports on the Myrinet adapter). You can see the status of the MX ports on a node with the mx_endpoint_info command, described below.
In the example output of that command, one process (26506) is using one
endpoint on the MX adapter (fma process
2804 is a Myrinet
mapping daemon; it's often listed and can be safely
ignored). Big Red supports up to eight endpoints on the MX adapters,
but since there is usually one process associated with each endpoint,
you may see some blocking on the node once you have more than four
endpoints open (Big Red has four processors per node).
To determine the state of MX endpoints, run the following command from the Big Red login nodes or the interactive nodes:

$ psh interactive '/opt/mx/bin/mx_endpoint_info | grep "open"'
Big Red is supported by the High Performance Systems group, part of the Research Technologies division of UITS. If you have system-specific questions about Big Red, email High Performance Systems.
User support, including migrating code to Big Red and parallelizing it, is available from the Scientific Applications and Performance Tuning team, part of the Research Technologies division of UITS. If you have questions about compilers, programming, scientific/numerical libraries, or debuggers, email Scientific Applications and Performance Tuning.
Last modified on May 28, 2013.