Use Singularity on IU's research computing systems

Singularity is a container solution that promotes reproducible science by giving researchers the ability to run their scientific applications and workflows in a variety of different Linux-based operating system (OS) environments.

A Singularity container is an encapsulation of an application and its dependencies (the libraries, packages, and data files it needs for execution), saved as a single, distributable image file. The image file can be copied, shared, and launched without modification in most Linux-based OS environments, as long as the destination host has Singularity installed. When the image launches, the Singularity runtime executes on the host, virtualizes the contained application's OS environment, and runs the application inside that environment. This allows you, as a non-root user on a shared HPC resource, to run your application even when it cannot be natively installed or supported on the host due to OS or library incompatibilities or conflicts.

Because the Singularity container runs on the host's kernel, it's able to leverage the host's physical hardware (for example, GPUs and accelerators), interconnects, and file systems, giving the contained application the same performance characteristics as native applications. Singularity also natively supports HPC resource managers and job schedulers, and features built-in Open MPI support.

Although some aspects of container execution require escalated privileges, those privileges are dropped once the container environment is instantiated, and Singularity prevents user context escalation within the container. Root privileges are not needed to run a Singularity image; however, images can be built, configured, or modified only on hosts where you have root privileges.

Singularity was developed under the leadership of Gregory Kurtzer, Linux Cluster Technical Architect for the Berkeley Research Computing High Performance Computing service. For more, see the Singularity website.

Singularity at IU

At Indiana University, Singularity is available on Big Red II, Karst, and Carbonate. To get started using Singularity, add it to your user environment with the following command:

  module load singularity

To make permanent changes to your environment, edit your ~/.modules file. For more, see Use a .modules file in your home directory to save your user environment on an IU research supercomputer.
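For example, appending the load command to your ~/.modules file makes Singularity available automatically in future sessions (a minimal sketch; your existing ~/.modules file may contain other entries you should preserve):

```shell
# Append the load command to ~/.modules so Singularity is loaded
# automatically at login (assumes the standard IU .modules setup)
echo "module load singularity" >> ~/.modules
```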

A sample Singularity container image is available on Big Red II for running TensorFlow with GPU support in a virtualized CentOS environment; for more, see the Using a Singularity container section of Run TensorFlow on Big Red II at IU.

Although Singularity containers will run on the aforementioned systems, you first should contact the UITS Research Applications and Deep Learning team to check whether they can install your application natively.

If you have a Singularity container image that you want to use (after ruling out native installation), contact the UITS Research Applications and Deep Learning team for help migrating it to your space on an IU research system. Most likely, you will need to modify your image to add appropriate mount points; you can modify a Singularity container only on systems for which you have root privileges (not on IU's research systems).


Big Red II will be retired from service on December 15, 2019. After that date, you will no longer be able to log into Big Red II; however, the data in your Big Red II home directory will remain accessible from your home directory on any of the other IU research supercomputers. New software requests for Big Red II will no longer be accepted after the October 13, 2019, maintenance window.

Following a hardware upgrade and expansion, Big Red II+ will be renamed Big Red III; IU graduate students, faculty, and staff will be able to create Big Red III accounts beginning October 14, 2019. Big Red 200 will be available for use by IU graduate students, faculty, and staff in January of 2020. Undergraduate students and affiliates will be able to get Big Red III and Big Red 200 accounts if they are sponsored by full-time IU faculty or staff members. For more, see Upcoming changes to research supercomputers at IU.

Singularity commands

Singularity uses a primary command wrapper called singularity, several sub-commands, and global and command-level options. To see a list of Singularity sub-commands and options, on the command line, enter singularity (without any options).

The general syntax for the singularity command wrapper is:

  singularity [global options] <sub-command> [sub-command options] <container_path>
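For example (my_image.img is a placeholder for your own container image), a sub-command can be combined with built-in help or a global option such as --debug, which increases output verbosity:

```shell
# Show built-in help for a specific sub-command
singularity help exec

# Run a sub-command with a global option; --debug increases verbosity
# (my_image.img is a placeholder for your own container image)
singularity --debug exec my_image.img cat /etc/os-release
```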

Following are some sample Singularity commands (replace /N/dc2/scratch/<username>/my_image.img with the path to your Singularity container):

  • To spawn a shell within your container, on the command line, enter:
      singularity shell /N/dc2/scratch/<username>/my_image.img

    To see options available for the shell sub-command, on the command line, enter singularity shell -h. For more, see Singularity Shell in the Singularity User Guide.

  • To execute a program inside your Singularity container (for example, /path/inside/container/go_fish), on the command line, enter:
      singularity exec /N/dc2/scratch/<username>/my_image.img /path/inside/container/go_fish

    To see options available for the exec sub-command, on the command line, enter singularity exec -h. For more, see Singularity Exec in the Singularity User Guide.
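Because Singularity natively supports HPC resource managers (as noted above), you can invoke these same sub-commands from a batch job script. The following is a minimal TORQUE/PBS sketch; the resource requests, image path, and program path are placeholders you would adapt to your own workload:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=00:30:00
#PBS -N singularity_job

# Load Singularity into the job's environment
module load singularity

# Run a program that lives inside the container image
# (replace the image path and program path with your own)
singularity exec /N/dc2/scratch/<username>/my_image.img /path/inside/container/go_fish
```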

As mentioned previously, you can modify Singularity containers only on systems for which you have root privileges (not Big Red II, Karst, Carbonate, or DC2). Furthermore, by default, all Singularity containers are read-only. To make your container's file system accessible in writable mode, add the -w option to your Singularity sub-commands. For example, to spawn a writable shell in your container (for example, my_image.img), on the command line of a system on which you have root privileges, enter:

  singularity shell -w my_image.img

Using Singularity with Docker containers

You can use Singularity's pull sub-command to import a container image directly from Docker Hub without having root or superuser privileges (or Docker) on your host system. For example, to use Singularity to import the image of the latest long-term support (LTS) version of Ubuntu into the present working directory on your host system, use the following command:

  singularity pull docker://ubuntu:latest
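By default, pull names the resulting image file after the repository and tag; in Singularity 2.x you can override this with the --name option (the tag and file name below are illustrative assumptions):

```shell
# Pull a specific tagged image and save it under a chosen file name
singularity pull --name ubuntu-16.04.img docker://ubuntu:16.04
```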

Alternatively, you can use Singularity's shell sub-command to spawn an interactive shell within a Docker container "on the fly". In the following example, user bkyloren on Carbonate shells into the latest LTS Ubuntu image available on Docker Hub:

  [bkyloren@h1 ~]$ singularity shell docker://ubuntu:latest
  Docker image path:
  Cache folder set to /gpfs/home/b/k/bkyloren/Carbonate/.singularity/docker
  [5/5] |===================================| 100.0%
  Creating container runtime...
  WARNING: Could not create bind point file
  in container /etc/localtime: No such file or directory
  Singularity: Invoking an interactive shell within container...
  Singularity ubuntu:latest:~>

On Carbonate (only), using the above method to shell directly into a Docker image provides access to the local home directory and DC2 filesystems; for example (for user bkyloren):

  Singularity ubuntu:latest:~> pwd
  Singularity ubuntu:latest:~> cd /N/dc2/scratch/bkyloren
  Singularity ubuntu:latest:/N/dc2/scratch/bkyloren>

When you use the same method on Big Red II or Karst, the local home directory and DC2 filesystems are not mounted (cannot be accessed from the container shell).
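If your container image contains the appropriate mount points, you can request such bindings explicitly with the -B (--bind) option. A sketch, assuming the host path exists and a matching mount point is present (or can be created) in the image:

```shell
# Bind a host directory into the container at the same path
singularity shell -B /N/dc2/scratch/<username> docker://ubuntu:latest
```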

For more about using Singularity to work with Docker images, see the Singularity and Docker page and the Docker section of the Singularity Pull page.

Getting help

For help using Singularity, see the Singularity User Guide. If you have questions or need help using Singularity at IU, contact the UITS Research Applications and Deep Learning team.

This is document aofz in the Knowledge Base.
Last modified on 2019-08-20 16:22:46.

Contact us

For help or to comment, email the UITS Support Center.