Rockhopper at Indiana University
On this page:
- System overview
- System information
- System access
- Available software
- Computing environment
- Transferring your files to Rockhopper
- Application development
- Running your applications
Rockhopper (rockhopper.uits.iu.edu) is Penguin Computing's Penguin-On-Demand (POD) supercomputing cloud appliance hosted by Indiana University. The Rockhopper POD is a collaborative effort between Penguin Computing, IU, the University of Virginia, the University of California Berkeley, and the University of Michigan to provide supercomputing cloud services in a secure US facility. Researchers at US institutions of higher education and Federally Funded Research and Development Centers (FFRDCs) can purchase computing time from Penguin Computing, and receive access via high-speed national research networks operated by IU.
Rockhopper consists of 11 Penguin Computing Altus 1804 servers, each containing four AMD Opteron 6172 12-core processors and 128 GB of RAM, for a total of 1.4 TB of RAM across the compute nodes. Each server chassis has a QDR (40 Gbps) InfiniBand interconnect to the cluster's switch fabric, which is then connected via four trunked 10 Gbps Ethernet links to IU's network infrastructure. For hardware configuration details, see System information.
The Rockhopper nodes run CentOS 5. Job management and scheduling are provided by the Sun Grid Engine (SGE) resource manager. The Modules system is used to simplify application and environment configuration. Users may log into the cluster via SSH, using their Penguin POD user IDs.
Rockhopper is a pay-for-usage system. For information about requesting an account, as well as the fee structure, see System access. For more about On Demand services, see Penguin Computing On Demand. For information about using Penguin On Demand clusters, see Penguin Computing's POD wiki.
|System configuration||Aggregate information||Per node information|
|Machine type||High-performance computing - usage on demand; Penguin Computing Altus 1804 MPP cluster||4 x 2.1 GHz 12-core AMD Opteron 6172 processors|
|Operating system||CentOS 5|
|CPUs||2.1 GHz 12-core AMD Opteron 6172 processors|
|Nodes||11 compute nodes; 2 login nodes|
|RAM||1.4 TB 1,333 MHz DDR3 ECC memory||128 GB 1,333 MHz DDR3 ECC memory|
|Availability scope||Researchers at US institutions of higher education and FFRDCs|
|Computational systems details||Total||Per node|
|Processing capability||Rmax = 5242 gigaflops||Rmax = 403 gigaflops|
|Benchmark data||HPL 2216.53 gigaflops (8 nodes)||HPL 288.67 gigaflops|
|Power usage||To be determined||0.000230 teraflops per watt|
|Disk space||67 TB (local)||6 TB (local)|
|Login nodes||Rmax = 806 gigaflops||Rmax = 403 gigaflops|
|Homogeneous compute nodes||Rmax = 4435 gigaflops||Rmax = 403 gigaflops|
|Heterogeneous compute nodes||Rmax = 4435 gigaflops|
|File systems||Home directories are on a local Lustre file system with no quotas.|
|Total scratch space||Rockhopper does not include a separate scratch file system. Disk usage is a billable item. Data Capacitor scratch space is available to IU students, faculty, and staff.|
|Data Capacitor||Available to IU students, faculty, and staff|
|Scholarly Data Archive (SDA)||Available to IU students, faculty, and staff|
|Backup and purge policies||Home directories are not backed up.|
|Network||QDR (40 Gbps) InfiniBand (switch: Mellanox IS5030; HCA: Mellanox ConnectX-2, 1-port QSFP, QDR); 1 Gbps Ethernet|
Note: Indiana University will soon replace its current Data Capacitor with Data Capacitor II, a high-speed, high-capacity storage facility for very large data sets. With 5 PB of storage, Data Capacitor II will support big data applications used in computational research. IU partnered with DataDirect Networks, Inc. (DDN) to develop Data Capacitor II, which is scheduled to be installed in the IU Data Center in spring 2013. For more about Data Capacitor II, see the November 8, 2012, press release. If you have questions about how the change to Data Capacitor II will affect your research, email the High Performance File Systems group.
Requesting an account
Before requesting an account, review the account policies below.
To request an account on Rockhopper, submit the account request form.
Note: Rockhopper is a fee-for-service system, and you will need a credit card to complete the account request form. To request an alternate financial arrangement, email Penguin Computing directly.
Methods of access
Interactive access to the Rockhopper cluster is provided via key-based SSH login to the head nodes. Any SSH2 client may be used to connect to rockhopper.uits.iu.edu, which resolves to one of Rockhopper's two login nodes (e.g., login1.rockhopper.uits.iu.edu). Users create SSH keys, and obtain instructions on how to use them, as part of the account creation process. For more, see Accessing POD on the POD wiki.
Public key authentication is the only authentication mechanism permitted on the Rockhopper cluster.
Note: Rockhopper is not an IU resource; you cannot use your IU Network ID to access the POD system.
Logging into Rockhopper
Access Policy: Rockhopper is a pay-as-you-use Penguin On Demand system. Access requires prior financial arrangements (e.g., payment with a credit card) with Penguin Computing.
Passphrases: Rockhopper does not use IU ADS or local passwords/passphrases for user authentication. Authentication is only via SSH2 public key.
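Because authentication is key-based, it can be convenient to add a host entry to your SSH client configuration so you do not have to specify the key on every connection. A minimal sketch (the username and key filename are placeholders; use the user ID and key from your account creation):

```shell
# Add a host alias so "ssh rockhopper" connects with the right key.
# The User and IdentityFile values below are placeholders -- substitute
# your own POD user ID and the key you downloaded from POD.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host rockhopper
    HostName rockhopper.uits.iu.edu
    User your_pod_username
    IdentityFile ~/.ssh/key-you-downloaded-from-pod
EOF
chmod 600 ~/.ssh/config
```

After this, `ssh rockhopper` connects without further options, and `scp`/`sftp` pick up the same settings.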
For a list of software packages available on Rockhopper, see the Scientific Applications and Performance Tuning group's Rockhopper Applications page.
The shell is the primary method of interacting with the Rockhopper cluster. The command line interface provided by the shell lets users run built-in commands, utilities installed on the system, and even short ad hoc programs.
Rockhopper supports the Bourne-again shell (bash) and the TC shell (tcsh). New user accounts are assigned the bash shell by default. For more on bash, see the Bash Reference Manual and the Bash (Unix shell) Wikipedia page.
To change your shell to tcsh on Rockhopper, email Penguin Computing directly.
Environment variables: The shell uses environment variables primarily to modify shell behavior and the operation of certain commands. A good example is the PATH variable.
When the shell parses a command you have entered (i.e., after you press Enter or Return), it interprets certain words you've typed as program files that should be executed. The shell then searches various directories on the system to locate these files. The PATH variable determines which directories are searched, and the order in which they are searched. In the bash shell, the PATH variable is a string of directories separated by colons (e.g., /bin:/usr/bin:/usr/local/bin). The shell searches for an executable file first in the /bin directory, then in the /usr/bin directory, and finally in the /usr/local/bin directory. If files of the same name (e.g., foo) exist in all three directories, /bin/foo will be run, because the shell will find it first.
To display and change the values of environment variables:
|Shell||Display value||Change value|
|bash||echo $VARIABLE||export VARIABLE=value|
|tcsh||echo $VARIABLE||setenv VARIABLE value|
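For example, in bash (the $HOME/bin directory is illustrative):

```shell
# Show the current search path
echo "$PATH"
# Prepend a personal bin directory for this session only (illustrative path);
# directories earlier in PATH are searched first
export PATH="$HOME/bin:$PATH"
# Ask the shell which file it would now execute for a given command name
command -v ls
```

To make such a change permanent, add the `export` line to a startup script, as described below.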
Startup scripts: Shells offer much flexibility in terms of startup configuration. On login, bash by default reads and executes commands from the following files, in this order (here ~ (tilde) represents your home directory; e.g., ~/.bash_profile is the .bash_profile file in your home directory):
- /etc/profile
- The first of ~/.bash_profile, ~/.bash_login, or ~/.profile that it finds
On logout, the shell reads and executes ~/.bash_logout. For more on these files, see the "Bash Startup Files" section of the Bash Reference Manual.
On login, the tcsh shell reads and executes commands from the following files (also in this order):
- /etc/csh.cshrc
- /etc/csh.login
- ~/.tcshrc (or ~/.cshrc)
- ~/.login
In practice, on Rockhopper, only the first two files exist. You may create the others, and add commands and variables to them as you see fit.
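For example, to make a setting take effect at every login under bash, append it to ~/.bash_profile (the EDITOR setting here is just an illustration):

```shell
# Append an environment setting to the bash login startup file;
# it will be applied at every subsequent login
echo 'export EDITOR=vim' >> ~/.bash_profile
# Confirm the line was added
tail -n 1 ~/.bash_profile
```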
Modules: On Rockhopper, Modules provide a convenient method for dynamically modifying your environment. A few simple commands provide easy access to various applications on the cluster:
|module avail||List available modules on the system|
|module load||Load (modifies your environment)|
|module load module_name||Load a specific module (replace module_name with the name of the module you want to load)|
|module list||Display your currently loaded modules|
|module unload module_name||Unload (modifies your environment)|
Transferring your files to Rockhopper
Rockhopper supports SCP and SFTP for transferring files. SCP is a command line utility included with OpenSSH. Basic use is:
scp -i ~/.ssh/key-you-downloaded-from-pod [[user@]host1:]file1 [[user@]host2:]file2
For example, to copy foo.txt from the current directory on your computer to your home directory on Rockhopper, use (replacing username with your Rockhopper username):
scp -i ~/.ssh/key-you-downloaded-from-pod foo.txt username@rockhopper.uits.iu.edu:foo.txt
You may specify absolute paths, or paths relative to your home directory:
scp -i ~/.ssh/key-you-downloaded-from-pod foo.txt username@rockhopper.uits.iu.edu:/some/path/for/data/foo.txt
You also may leave the destination filename unspecified, in which case it will become the same as the source filename. For more, see In Unix, how do I use SCP to securely transfer files between two computers?
SFTP provides file access, transfer, and management, and offers client functionality similar to FTP. For example, from a computer with a command line SFTP client (e.g., a Linux or Mac OS X workstation), you could transfer files as follows:
$ sftp -i ~/.ssh/key-you-downloaded-from-pod username@rockhopper.uits.iu.edu
Connected to rockhopper.uits.iu.edu.
Changing to: /home/username/
sftp> ls -l
-rw------- 1 username group 113 May 19 2011 loadit.pbs.e897
-rw------- 1 username group 695 May 19 2011 loadit.pbs.o897
-rw-r--r-- 1 username group 693 May 19 2011 local_limits
sftp> put foo.txt
Uploading foo.txt to /home/username/foo.txt
foo.txt 100% 95MB 8.7MB/s 00:11
sftp> exit
You can also ship detachable hard drives to Penguin Computing. To make such an arrangement, email Penguin Computing directly.
Graphical SFTP clients are available for many systems. For more, see What is SFTP, and how do I use an SFTP client to transfer files?
Rockhopper is designed to support codes that have reasonably large shared memory and/or distributed memory parallelism.
GNU, Intel, and Portland Group compilers are installed on the POD. Open MPI compiled with these compilers is available for MPI programs. For POD-specific information about compiling programs, see Compiling Applications on the POD wiki.
- Compilers available on Rockhopper:
- Intel, Portland Group, GNU
- Fortran, C, C++
- Compiler options:
- Recommended options for serial codes
- Recommended options for parallel codes
- Example sessions include:
- Serial codes
- MPI codes
- OpenMP codes
- Hybrid jobs
- Libraries available on Rockhopper are MKL and ACML
- Debuggers available on Rockhopper are GDB and IDB
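As a sketch, a minimal MPI program and the Open MPI wrapper-compiler invocation might look like the following (the mpicc/mpirun lines are shown as comments, since they assume an Open MPI module is loaded on the cluster):

```shell
# Write a minimal MPI "hello world" source file
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
# On Rockhopper, after loading an Open MPI module:
#   mpicc -O2 -o hello_mpi hello_mpi.c
#   mpirun -np 4 ./hello_mpi
```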
Running your applications
A job is an instance of an application you wish to run.
A queue is a pool of compute resources that accepts jobs to run, and executes them according to a First In First Out (FIFO) policy.
Job schedulers are the applications responsible for scheduling jobs.
Users familiar with TORQUE/PBS implementations should find it easy to work in the Sun Grid Engine (SGE) environment. Often, qsub parameters are the same between TORQUE and SGE; the only difference is that SGE replaces the #PBS directive prefix with #$. For a complete list of qsub parameters, see the qsub man page.
- Rockhopper queues:
- all.q (general purpose)
- Queue policies:
- First In First Out (FIFO) scheduling
- No maximum walltime limit
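A minimal SGE job script might look like the following sketch (the job name is arbitrary; lines beginning with #$ are SGE directives, read by qsub but ignored by the shell):

```shell
# Write a minimal SGE job script
cat > hello.sh <<'EOF'
#!/bin/bash
#$ -N hello_job      # job name (arbitrary)
#$ -q all.q          # Rockhopper's general-purpose queue
#$ -cwd              # run from the directory the job was submitted in
echo "Hello from $(hostname)"
EOF
# Submit it on Rockhopper with:
#   qsub hello.sh
```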
To delete queued or running jobs, use
qdel . Occasionally, a node will become unresponsive
and unable to respond to the SGE server's requests to kill a job. In
such cases, try using
qdel -f .
Use qsub to submit jobs to run on Rockhopper. If the command exits successfully, it will return a job ID.
If you need attribute values different from the defaults, but less than the maximum allowed, specify these either in the job script using SGE directives, or on the command line with the -l switch. For example, to submit a job requiring 10 hours of walltime, add -l h_rt=10:00:00 to your qsub command.
Note: Command-line arguments override directives in the job script, and you may specify many attributes on the command line, either as comma-separated options following a single -l switch, or each with its own -l switch; the two forms are equivalent.
qsub switches include:
|-q queue_name||Specify a user-selectable queue.|
|-r y||Make job re-runnable.|
|-a date_time||Execute the job only after a specified date and time.|
|-V||Export environment variables in your current environment to the job.|
For more, see the
qsub man page.
Penguin Computing also makes available the PODShell remote job submission and data staging tool, which runs on remote Linux servers and personal computers, allowing for remote control of POD jobs.
Use qstat to monitor the status of a queued or running job. Switches include:
|-u user_list||Display jobs for users in the user list.|
|-u '*'||Display all jobs.|
|-s r||Display running jobs.|
|-f||Display the full listing of jobs (excessive detail).|
|-g t||Display the queue instances (nodes) allocated to jobs.|
User and support information is available in the POD wiki.
Researchers at US institutions of higher education (with
.edu domain names) or Federally Funded Research and
Development Centers (FFRDCs) can
purchase computing time from Penguin Computing, and then receive
access to Rockhopper at IU.
Prospective users request Rockhopper accounts by filling out and submitting Penguin Computing's account request form.
To pay for an account on Rockhopper, you need to enter your credit card information when completing the account request form. To request an alternate financial arrangement, email Penguin Computing directly.
The Rockhopper cluster is a Penguin Computing resource. For information about your responsibilities as a user of this resource, see:
Home directories reside on a Lustre file system, with no quotas or backups.
Note: Rockhopper does not provide a separate scratch file system. Disk usage is a billable item; to make financial arrangements, email Penguin Computing directly.
Computational resources (queues)
Rockhopper has only one queue, and all jobs submitted will execute in the default queue. The only restriction is that individual jobs are limited to 128 cores.
Rockhopper does not provide a production mail service; however, SGE communicates via email. Mail forwarding is not configured during account creation; you should consider establishing mail forwarding with a ~/.forward file (see How do I forward my mail from a Unix account?).
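Setting this up is a one-liner (the address below is a placeholder; substitute the address where you read mail):

```shell
# Create a .forward file so system mail (e.g., SGE job notifications)
# is forwarded; the address is a placeholder -- use your own.
echo "your_name@example.edu" > ~/.forward
# Verify the file contents
cat ~/.forward
```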
Rockhopper does not have a regularly scheduled maintenance window. Information about pending outages is sent via email to account holders.
This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
Last modified on May 16, 2013.