ARCHIVED: NCGAS: Use Bridges (PSC) to run large-memory genome assembly applications
On this page:
- Overview
- Get an allocation on Bridges
- Log into Bridges
- Transfer files to Bridges
- Home directory and scratch space
- Run jobs on Bridges
Overview
NCGAS partners with the ACCESS project to provide access to large-memory nodes on Bridges. A data-intensive high performance computing system equipped with regular, large, and extreme shared-memory compute nodes, GPU nodes, database nodes, web server nodes, and data transfer nodes, Bridges is designed for extreme flexibility, functionality, and usability, and is well suited for running genome assembly applications that require large amounts of memory.
If your genome assembly application runs out of memory even on the large-memory nodes on ARCHIVED: Carbonate, consider requesting an allocation on Bridges. With a Bridges large-memory allocation, you can submit your job to the LM partition, which provides access to compute nodes with up to 12 TB of RAM.
Get an allocation on Bridges
To get an allocation on Bridges, NCGAS researchers have the following options:
- Submit a request for information on accessing the existing NCGAS ACCESS allocation. This is a good option if you are still trying to determine whether Bridges is optimal for your work. To submit a request, first go to the ACCESS website and create an ACCESS account. Once you have an ACCESS username, fill out and submit the NCGAS Allocations Request form.
- Request your own ACCESS allocation. If you are certain your project will require large memory nodes, you should request your own ACCESS allocation on Bridges; for instructions, see the ACCESS website.
You will be notified via email when your account on Bridges is created.
Log into Bridges
Follow the instructions in the email notification to set a password using the PSC Password Change Utility.
For command-line access to your home directory on Bridges, use your preferred SSH client to connect to bridges.psc.edu. Authenticate using your PSC username and password, or set up public-key authentication; for instructions, see Using SSH to Access PSC Resources.
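As a concrete sketch (the username jdoe is a placeholder; substitute your own PSC username), the login command looks like this. The echo prints the command rather than running it, so the snippet is safe to try anywhere; drop the echo to actually connect:

```shell
# Build the login target; "jdoe" is a placeholder PSC username.
login="jdoe@bridges.psc.edu"
echo "ssh $login"   # remove the echo to open the connection
```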
Transfer files to Bridges
Once you have an account on Bridges, you can transfer your data from Carbonate to Bridges using the IU Globus Web App.
- For instructions on accessing your home directory space on Carbonate with the IU Globus Web App, see ARCHIVED: Types of sensitive institutional data appropriate for the research supercomputers at IU.
- In the IU Globus Web App, activate the Bridges endpoint to set up a data transfer to your account on Bridges.
- If you have a symlinked scratch folder in your home directory when you activate the endpoint, you'll be able to transfer data directly from Carbonate to persistent scratch storage on PSC's Lustre-based pylon5 file system. To create a scratch directory on Bridges that's symlinked to your scratch space on pylon5, on the Bridges command line, enter:

ln -s $SCRATCH scratch
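To see what the symlink does before running it on Bridges, here is a self-contained sketch; the temporary directory stands in for your pylon5 allocation, and on Bridges itself $SCRATCH is already set for you:

```shell
# Stand-in for the Bridges environment: on Bridges, $SCRATCH already
# points at your pylon5 space, e.g. /pylon5/<chargeid>/<username>.
SCRATCH="$(mktemp -d)"

# Create the "scratch" symlink in the current directory, as on Bridges.
ln -s "$SCRATCH" scratch

# Anything written through the symlink lands on the scratch file system.
touch scratch/contigs.fasta
ls "$SCRATCH"
```

Globus then sees scratch as an ordinary folder inside your home directory, so transfers dropped there land on pylon5 rather than in your 10 GB home space.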
Alternatively, you can use SFTP or SCP to transfer data directly from Carbonate to your account on Bridges.
Home directory and scratch space
Following is information about the file systems mounted on Carbonate and Bridges. In the examples, replace username with your IU or PSC username, whichever is applicable.
Carbonate
File system | Path to your files | Allotment | Backup/purge policy |
---|---|---|---|
Home directory | $HOME or /N/u/username/Carbonate | 100 GB (800,000 files maximum) | Data are backed up once a month; snapshots are taken daily and stored within each source home directory. No purge policy. |
Slate | /N/slate/username | Up to 1.6 TB | No backups; no purge policy. |
Bridges
File system | Path to your files | Allotment | Backup/purge policy |
---|---|---|---|
Home directory | $HOME or /home/username | 10 GB | Backed up daily; no purge policy. |
pylon5 scratch space | $SCRATCH or /pylon5/chargeid/username | Based on the proposal | No backups; no purge policy. |
Use your home directory on Bridges to store batch scripts, source code, and parameter files. Your Bridges home directory is backed up daily, but you should still keep copies of important files in another location.
PSC's Lustre-based pylon5 file system provides persistent storage and fast I/O access for jobs running on Bridges. Files on pylon5 are not backed up, so you should save copies of important files to another location.
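As a quick way to check usage against the allotments above, here is a generic sketch; du behaves the same on any Linux system, and on Bridges you would check $SCRATCH (pylon5) the same way:

```shell
# Summarize home-directory usage to compare against the 10 GB allotment.
usage=$(du -sh "$HOME" 2>/dev/null | cut -f1)
echo "home usage: $usage"
```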
For more, see Bridges-2 User Guide.
Run jobs on Bridges
Bridges uses the Slurm Workload Manager to coordinate resource management and job scheduling. For help with Slurm, see Use Slurm to submit and manage jobs on IU's research computing systems.
To submit a job to the LM partition on Bridges, you must include the following sbatch options, either in your job script or on the command line:
Option | Description |
---|---|
-p LM | Request that your job be allocated resources in the LM partition. |
--mem=<n>GB | Request the amount of memory your application needs; replace <n> with a value up to 12000. Slurm will place your job on either a 3 TB or 12 TB node based on your memory request. |
-t <HH:MM:SS> | Set a limit, in HH:MM:SS format, on the total wall time for your job. |
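Putting those options together, a minimal LM job script might look like the sketch below. The memory value, wall time, job name, and the assembler command are all placeholders to adjust for your application; Slurm reads the #SBATCH lines as directives, but to the shell they are ordinary comments, so the script is also valid bash:

```shell
#!/bin/bash
# Hypothetical LM-partition job script; all values are placeholders.
#SBATCH -p LM            # run in the large-memory partition
#SBATCH --mem=3000GB     # 3 TB request; Slurm picks a 3 TB node
#SBATCH -t 48:00:00      # wall-time limit in HH:MM:SS
#SBATCH -J assembly      # job name

msg="Job running on $(hostname)"
echo "$msg"
# Replace the echo above with your assembly command, for example:
# ./my_assembler --reads scratch/reads.fastq.gz --out scratch/asm
```

Submit the script with sbatch (for example, sbatch myjob.sh), or pass the same options on the command line: sbatch -p LM --mem=3000GB -t 48:00:00 myjob.sh.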
For more about the LM partition and about running jobs on Bridges, see the Bridges-2 User Guide.
This is document azbz in the Knowledge Base.
Last modified on 2023-02-17 13:25:14.