Understanding measures of supercomputer performance and storage system capacity
- Measuring computer performance in FLOPS
- Measuring storage capacity in bytes
- Prefixes for representing orders of magnitude
- Understanding orders of magnitude in computer performance
- Understanding orders of magnitude in storage capacity
- IU examples
Measuring computer performance in FLOPS
The performance capabilities of supercomputers, such as Indiana University's research computing systems and the high-performance compute systems of the Extreme Science and Engineering Discovery Environment (XSEDE), are expressed using a standard rate indicating the number of floating-point arithmetic calculations a system can perform per second. This rate, floating-point operations per second, is abbreviated as FLOPS.
Note: The "S" in the acronym "FLOPS" stands for "second" and is used in combination with "P" (for "per") to indicate a rate, such as "miles per hour" (MPH) or gigabits per second (Gbps). The per-second rate "FLOPS" is commonly misinterpreted as the plural form of "FLOP" (short for "floating-point operation").
Computer vendors and service providers typically list the theoretical peak performance (Rpeak) capabilities of their systems expressed in FLOPS. A system's Rpeak is calculated by multiplying the number of processors by the clock speed of the processors, and then multiplying that product by the number of floating-point operations the processors can perform per clock cycle. Sustained, real-world performance is instead measured by running standard benchmark programs, such as the LINPACK DP TPP and HPC Challenge (HPCC) benchmarks, and the SPEC integer and floating-point benchmarks.
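The Rpeak calculation above can be sketched in a few lines of Python. The system parameters below (core count, clock speed, operations per cycle) are purely illustrative values, not the specifications of any real machine:

```python
# Hypothetical system: the values below are illustrative, not real specs.
cores = 21_824            # total processor cores
clock_hz = 2.5e9          # clock speed: 2.5 GHz
flops_per_cycle = 16      # floating-point operations per core per clock cycle

# Theoretical peak performance, in FLOPS
rpeak = cores * clock_hz * flops_per_cycle
print(f"Rpeak = {rpeak / 1e15:.2f} PFLOPS")
```

Note that a real system's sustained benchmark performance is always some fraction of this theoretical peak.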
Measuring storage capacity in bytes
Computer storage and memory capacities are expressed in units called bits and bytes. A bit is the smallest unit of measurement for digital information in computing. A byte is the number of bits a particular computing architecture needs to store a single text character. Consequently, the number of bits in a byte can differ between computing platforms. However, due to the overwhelming popularity of certain major computing platforms, the 8-bit byte has become the international standard, as defined by the International Electrotechnical Commission (IEC).
An uppercase "B" is used for abbreviating "byte(s)"; a lowercase "b" is used for abbreviating "bit(s)". This difference can cause confusion. For example, file sizes are commonly represented in bytes, but download speeds for electronic data are commonly represented in bits per second. With a download speed of 10 megabits per second (Mbps), you might mistakenly assume a 100 MB file will download in only 10 seconds. However, 10 Mbps is equivalent to only 1.25 MB per second, meaning a 100 MB file would take at least 80 seconds to download.
Note: Storage vendors and service providers typically list the storage capacities of their systems in terms of "disk space", even when referring to tape storage systems, such as IU's Scholarly Data Archive (SDA).
Prefixes for representing orders of magnitude
Orders of magnitude (in base 10) are expressed using standard metric prefixes, which are abbreviated to single characters when prepended to other abbreviations, such as FLOPS and B (for byte):
| Order of magnitude (as a factor of 10) | Prefix | Computer performance | Storage capacity |
| 10^3 | kilo (K) | KFLOPS | KB |
| 10^6 | mega (M) | MFLOPS | MB |
| 10^9 | giga (G) | GFLOPS | GB |
| 10^12 | tera (T) | TFLOPS | TB |
| 10^15 | peta (P) | PFLOPS | PB |
| 10^18 | exa (E) | EFLOPS | EB |
Note: These prefixes also are used to convey the scale and complexity of the computational and analytical methods employed when working with supercomputers; for example:
- Terascale: Refers to methods and processes for using supercomputers capable of performing at least 1 TFLOPS or storage systems capable of storing at least 1 TB
- Petascale: Refers to methods and processes for using supercomputers capable of performing at least 1 PFLOPS or storage systems capable of storing at least 1 PB
- Exascale: Refers to methods and processes for using supercomputers capable of performing at least 1 EFLOPS or storage systems capable of storing at least 1 EB
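Because these prefixes all represent decimal (base-10) orders of magnitude, converting between them is simple multiplication. The following sketch (the `to_base_units` helper is a hypothetical name, not a standard library function) shows the idea:

```python
# Decimal (SI) prefixes used for both FLOPS and bytes in this document.
PREFIX_FACTOR = {"K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12, "P": 1e15, "E": 1e18}

def to_base_units(value, prefix):
    """Convert a prefixed value to base units, e.g. (42, 'P') -> 4.2e16."""
    return value * PREFIX_FACTOR[prefix]

# Each prefix is 1,000 times the one below it: 1 TFLOPS = 1,000 GFLOPS
print(to_base_units(1, "T") / to_base_units(1, "G"))  # 1000.0
```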
Understanding orders of magnitude in computer performance
A 1 gigaFLOPS (GFLOPS) computer system is capable of performing one billion (10^9) floating-point operations per second. To match what a 1 GFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31.69 years.
A 1 teraFLOPS (TFLOPS) computer system is capable of performing one trillion (10^12) floating-point operations per second. The rate 1 TFLOPS is equivalent to 1,000 GFLOPS. To match what a 1 TFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688.77 years.
A 1 petaFLOPS (PFLOPS) computer system is capable of performing one quadrillion (10^15) floating-point operations per second. The rate 1 PFLOPS is equivalent to 1,000 TFLOPS. To match what a 1 PFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688,765 years.
A 1 exaFLOPS (EFLOPS) computer system is capable of performing one quintillion (10^18) floating-point operations per second. The rate 1 EFLOPS is equivalent to 1,000 PFLOPS. To match what a 1 EFLOPS computer system can do in just one second, you'd have to perform one calculation every second for 31,688,765,000 years.
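The "years of one calculation per second" figures above are simply the operation count divided by the number of seconds in a year. This sketch reproduces them using the average Gregorian year (365.2425 days):

```python
SECONDS_PER_YEAR = 31_556_952   # average Gregorian year: 365.2425 days

rates = {"GFLOPS": 1e9, "TFLOPS": 1e12, "PFLOPS": 1e15, "EFLOPS": 1e18}
years_to_match = {name: rate / SECONDS_PER_YEAR for name, rate in rates.items()}

for name, years in years_to_match.items():
    # e.g., 1 GFLOPS for one second = ~31.69 years at one calculation per second
    print(f"1 {name} for 1 second = {years:,.2f} years of one calculation per second")
```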
Understanding orders of magnitude in storage capacity
A gigabyte is equal to one billion bytes. You can fit 4.37 GB of data on one single-sided DVD (each DVD is about 1.2 mm, or 0.047 inches, thick).
A terabyte is equal to one trillion (one thousand billion) bytes, or 1,000 gigabytes. To hold 1 TB of data, you would need a stack of single-sided DVDs that's 282 mm (11.1 inches) tall.
A petabyte is equal to one quadrillion (one thousand trillion) bytes, or 1,000 terabytes. To hold 1 PB of data, you would need a stack of single-sided DVDs that's 290 meters (951.4 feet) tall.
An exabyte is equal to one quintillion (one thousand quadrillion) bytes, or 1,000 petabytes. To hold 1 EB of data, you would need a stack of single-sided DVDs that's 294 km (183 miles) tall.
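The DVD-stack comparisons can be computed directly from the per-disc capacity and thickness quoted above. This sketch divides straight through with no extra padding, so it yields slightly lower heights than the rounded figures in the text:

```python
import math

DVD_CAPACITY_GB = 4.37    # single-sided DVD capacity, decimal gigabytes
DVD_THICKNESS_MM = 1.2    # per-disc thickness quoted above

def stack_height_mm(data_gb):
    """Height in millimeters of the stack of DVDs needed to hold data_gb gigabytes."""
    discs = math.ceil(data_gb / DVD_CAPACITY_GB)
    return discs * DVD_THICKNESS_MM

print(f"1 TB: {stack_height_mm(1e3):.0f} mm")        # roughly a foot-high stack
print(f"1 PB: {stack_height_mm(1e6) / 1e3:.0f} m")   # hundreds of meters
print(f"1 EB: {stack_height_mm(1e9) / 1e6:.0f} km")  # hundreds of kilometers
```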
Following are some examples of tera-, peta-, and exascale computing at IU:
- IU's Big Red II system has a theoretical peak performance of 1 PFLOPS. Big Red II was the first petascale system to be owned and operated solely by (and for) a US university.
- The SDA provides 42 PB of long-term storage capacity for research data.
- The Data Capacitor II high-speed, high-throughput file system provides 3.5 PB of temporary storage for applications running on IU supercomputers; the Data Capacitor Wide Area Network (DC-WAN) provides 339 TB of temporary storage for applications running on remote supercomputers.
- The Research File System (RFS) provides 50 TB of HIPAA-aligned, long-term storage for research and collaborative projects at IU.
- IU's Center for Research in Extreme Scale Technologies (CREST) develops methods, technologies, and training resources to enable exascale data analysis and computation.
This document was developed with support from National Science Foundation (NSF) grant OCI-1053575. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
This is document apeq in the Knowledge Base.
Last modified on 2014-12-18.