Lustre file systems at IU

Lustre overview and key components

Lustre is a high-performance storage architecture and scalable parallel file system for use with computing clusters, supercomputers, visualization systems, and desktop workstations. Lustre can scale to provide petabytes of storage capacity, with hundreds of gigabytes per second of I/O bandwidth, to thousands of clients. Lustre also features integrated network diagnostics, and mechanisms for performance monitoring and tuning.

Lustre started as a research project at Carnegie Mellon University, and is now developed and distributed as open source software under the GNU General Public License version 2 (GPLv2). Development of Lustre is supported by the non-profit Open Scalable File Systems (OpenSFS) organization. For more, see the Lustre website.

Key components of a Lustre file system include:

  • Lustre clients: Lustre clients run on computational, visualization, or desktop nodes that communicate with the file system's servers via the Lustre Network (LNET) layer, which supports a variety of network technologies, including InfiniBand, Ethernet, Seastar, and Myrinet. When Lustre is mounted on a client, its users can transfer and manage file system data as if it were stored locally (however, clients never have direct access to the underlying file storage).
  • Management Target (MGT): The MGT stores file system configuration information for use by clients and other Lustre components. Although MGT storage requirements are relatively small even in the largest file systems, the information stored there is vital to system access.
  • Management Server (MGS): The MGS manages the configuration data stored on the MGT. Lustre clients contact the MGS to retrieve information from the MGT.
  • Metadata Target (MDT): The MDT stores filenames, directories, permissions, and other namespace metadata.
  • Metadata Server (MDS): The MDS manages the namespace metadata stored on the MDT. Lustre clients contact the MDS to retrieve this information from the MDT. The MDS is not involved in file read/write operations.
  • Object Storage Targets (OSTs): The OSTs store user file data in one or more logical objects that can be striped across multiple OSTs.
  • Object Storage Server (OSS): The OSS manages read/write operations for (typically) multiple OSTs.
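To make the striping described above concrete, the arithmetic that maps a byte offset in a file to a stripe (and therefore to an OST) can be sketched in shell. The stripe size and stripe count used here are illustrative assumptions, not values from any IU system:

```shell
#!/bin/sh
# Illustrative only: compute which stripe (and which OST, by index) holds
# a given byte offset, for a file striped round-robin across several OSTs.
stripe_size=$((1024 * 1024))   # assumed 1 MiB stripe size
stripe_count=4                 # assumed file striped across 4 OSTs

offset=$((5 * 1024 * 1024 + 42))             # byte 42 of the sixth megabyte
stripe_index=$(( offset / stripe_size ))     # which stripe-sized chunk overall
ost_index=$(( stripe_index % stripe_count )) # round-robin assignment to OSTs

echo "offset $offset falls in stripe $stripe_index on OST $ost_index"
```

Because stripes are assigned round-robin, consecutive 1 MiB chunks of the file land on consecutive OSTs, which is what lets large sequential reads and writes draw bandwidth from several servers at once.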

Implementation at IU

At Indiana University, the UITS High Performance File Systems (HPFS) team operates the Lustre-based Data Capacitor II (DC2) and Data Capacitor Wide Area Network 2 (DC-WAN2) file systems, delivering high-speed, large-capacity shared scratch space for data-intensive applications running on IU's research compute systems.

The DC2 and DC-WAN2 file systems are mounted on IU's research compute systems as follows:

  • DC2: /N/dc2/scratch
  • DC-WAN2: /N/dcwan/projects

If you have an account on one of IU's research compute systems, you have a scratch directory on the DC2 file system, located at (replace username with your IU username):

    /N/dc2/scratch/username

For more, see The Data Capacitor II and DC-WAN2 high-speed file systems at Indiana University.
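In a job script, the scratch path above can be built from the standard Linux $USER environment variable rather than hard-coding a username; the path layout is taken from the convention described above:

```shell
#!/bin/sh
# Build the DC2 scratch path from the current login name.
# $USER is the standard Linux username variable.
SCRATCH_DIR="/N/dc2/scratch/$USER"
echo "$SCRATCH_DIR"
```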

Some helpful commands

Following are some helpful commands for working with files on Lustre file systems:

  • Get the total sum of data stored, for example, in your DC2 scratch directory (replace username with your IU username):
    du -hc /N/dc2/scratch/username
  • List your files in reverse order by date modified:
    find . -type f -exec ls -hltr "{}" +
    On Lustre file systems, using the ls command with the -l option to list the contents of a directory in long format can cause performance issues for you and other users, especially if the directory contains a large number of files. Because Lustre performs file read/write and metadata operations separately, executing ls -l involves contacting both the Lustre MDS (to get path, ownership, and permissions metadata) and one or more OSSs (which in turn must contact one or more OSTs to get information about the data objects that make up your files). Use ls -l only on individual files (for example, to get a file's actual, uncompressed size) or directories that contain a small number of files. For more, see Listing files.
  • Set Lustre striping:
    lfs setstripe -c X <file|directory>

    In the example above, replace X with the number of stripes to set for a file or directory (the default is one stripe).

    Too many stripes may negatively impact performance (16 should be the maximum). Also, lfs setstripe affects only newly created files; it does not restripe existing data.
  • Show the number of stripes for a file and the OSTs on which the stripes are located:
    lfs getstripe <file|directory>
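The find-based listing above can be tried safely before running it on a large Lustre directory. This sketch creates three files with distinct modification times in a throwaway temporary directory and lists them oldest-first; nothing here is Lustre-specific, and the file names are purely illustrative:

```shell
#!/bin/sh
# Demonstrate listing files in reverse order by modification time.
# Runs entirely in a throwaway directory.
tmpdir=$(mktemp -d)
cd "$tmpdir" || exit 1

# Create three files with increasing modification times (touch -t CCYYMMDDhhmm).
touch -t 202001010000 old.txt
touch -t 202006010000 middle.txt
touch -t 202012010000 new.txt

# Same pattern as the command above: a single ls invocation over all the
# found files, sorted by mtime (-t) with the order reversed (-r), so the
# oldest file prints first and the newest prints last.
find . -type f -exec ls -1tr "{}" +

cd / && rm -rf "$tmpdir"
```

Using the `+` terminator (rather than `\;`) passes all the found file names to one ls invocation, which is what allows ls to sort them relative to each other; with `\;`, ls would run once per file and no cross-file ordering would occur.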

Get help with Lustre file systems at IU

For technical support or general information about the DC2 and DC-WAN2 file systems, contact the UITS High Performance File Systems group.

For after-hours support, call Data Center Operations (812-855-9910), and ask to have High Performance File Systems contacted.

To receive maintenance and downtime information, subscribe to the mailing list; see Subscribe to an IU List mailing list.

This is document ayfh in the Knowledge Base.
Last modified on 2019-02-13 17:17:56.
