
Storage Overview

After logging in to Ginsburg you will be in your home directory. This home directory storage space (50 GB) is appropriate for smaller files, such as documents, source code, and scripts, but it will fill up quickly if used for data sets or other large files.


Ginsburg's shared storage server is named "burg", and consequently the path to all home and scratch directories begins with "/burg". Your home directory is located at /burg/home/<UNI>. This is also the value of the environment variable $HOME.
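You can confirm where your home directory is by printing the variable:

```shell
# On Ginsburg this prints /burg/home/<UNI>;
# on any other system it prints that system's home path instead.
echo "$HOME"
```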


Each group account on Ginsburg has an associated scratch storage space that is at least 1 terabyte (TB) in size.

Note the important "No backups" warning regarding this storage at the bottom of this page.

Your group account's scratch storage is located under /burg/<ACCOUNT>. The storage area for each account is as follows:


| Location | Size | Default User Quota |
| --- | --- | --- |
| $HOME | 50 GB | |
| /burg/abernathey | 20 TB | 50 GB |
| /burg/anastassiou | 5 TB | 50 GB |
| /burg/apam | 7 TB | 50 GB |
| /burg/asenjo | 1 TB | 50 GB |
| /burg/astro | 65 TB | 50 GB |
| /burg/berkelbach | 16 TB | 50 GB |
| /burg/biostats | 10 TB | 50 GB |
| /burg/camargo | 6 TB | 50 GB |
| /burg/ccce | 22 TB | 50 GB |
| /burg/cgl | 30 TB | 50 GB |
| /burg/crew | 11 TB | 50 GB |
| /burg/dsi | 52 TB | 50 GB |
| /burg/dslab | 7 TB | 50 GB |
| /burg/e3b | 2 TB | 50 GB |
| /burg/edru | 2 TB | 50 GB |
| /burg/emlab | 1 TB | 50 GB |
| /burg/fiore | 21 TB | 50 GB |
| /burg/glab | 30 TB | 50 GB |
| /burg/gsb | 2 TB | 50 GB |
| /burg/hblab | 20 TB | 50 GB |
| /burg/iicd | 21 TB | 50 GB |
| /burg/jalab | 8 TB | 50 GB |
| /burg/katt | 32 TB | 50 GB |
| /burg/kellylab | 1 TB | 50 GB |
| /burg/mckinley | 21 TB | 50 GB |
| /burg/millis | 10 TB | 50 GB |
| /burg/mjlab | 8 TB | 50 GB |
| /burg/morphogenomics-lab | 50 TB | 50 GB |
| /burg/myers | 2 TB | 50 GB |
| /burg/ntar_lab | 2 TB | 50 GB |
| /burg/ocp | 100 TB | OCP shared volume with a per-user 10 TB quota |
| /burg/oshaughnessy | 2 TB | 50 GB |
| /burg/palab | 120 TB | 50 GB |
| /burg/psych | 15 TB | 50 GB |
| /burg/qmech | 2 TB | 50 GB |
| /burg/rqlab | 10 TB | 50 GB |
| /burg/sail | 3 TB | 50 GB |
| /burg/seager | 10 TB | 50 GB |
| /burg/seasdean | 16 TB | 50 GB |
| /burg/sobel | 10 TB | 50 GB |
| /burg/sscc | 90 TB | 50 GB |
| /burg/stats | 20 TB | 50 GB |
| /burg/stock | 11 TB | 50 GB |
| /burg/subram | 4 TB | 50 GB |
| /burg/thea | 88 TB | 50 GB |
| /burg/theory | 11 TB | 50 GB |
| /burg/ting | 5 TB | 50 GB |
| /burg/tosches | 5 TB | 50 GB |
| /burg/urban | 5 TB | 50 GB |
| /burg/vedula | 21 TB | 50 GB |
| /burg/wu | 6 TB | 50 GB |
| /burg/zi | 10 TB | 50 GB |

Your quota and how much of it is currently in use can be checked with df:


cd ~
df -h .
Filesystem      Size  Used Avail Use% Mounted on
xxx:xxx:/burg   50G   20K   50G   1% /burg 

Size shows your quota, Used shows how much is in use, and Avail shows the remaining space.
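To measure how much data a specific directory and its subdirectories hold (rather than filesystem-level usage), `du` can be used. A minimal sketch, using a throwaway directory as a stand-in for one of your own:

```shell
# Create a small example tree, then summarize its total size.
mkdir -p /tmp/demo_tree/sub
echo "data" > /tmp/demo_tree/sub/file.txt

# -s prints one total for the whole tree; -h uses human-readable units.
du -sh /tmp/demo_tree
```

Point `du -sh` at any directory (for example, `du -sh ~/my_data`) to see its total footprint.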

Inodes

Inodes store information about files and directories; one inode is consumed for every file or directory that is created. The inode quota for home directories is 150,000.

You can check inode usage and limits with:

$ df -hi /burg/<ACCOUNT>
  
If your group is running out of inodes and free inodes are available on the system, we may be able to increase your inode allocation; please contact us for details.
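Because each file and directory consumes one inode, counting the entries under a path approximates its inode usage; a sketch using a throwaway directory:

```shell
# Create a directory with three files; the directory itself plus
# the three files account for four inodes.
mkdir -p /tmp/inode_demo
touch /tmp/inode_demo/a /tmp/inode_demo/b /tmp/inode_demo/c

# find lists the directory and everything inside it.
find /tmp/inode_demo | wc -l    # prints 4
```

Running `find <dir> | wc -l` on your own directories can help locate the trees that consume the most inodes.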

Anaconda keeps a cache of the package files, tarballs, etc. of the packages you've installed. This is helpful when you need to reinstall the same packages, but over time the space can add up.

You can run the 'conda clean' command in dry-run mode first to see what would get cleaned up:

conda clean --all --dry-run

Once you're satisfied with what will be deleted, run the cleanup:

conda clean --all

This will clean the index cache, lock files, tarballs, unused cache packages, and the source cache.
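To see how much space the cache currently occupies before cleaning, you can check its size directly. A sketch, assuming the usual per-user install locations (yours may differ):

```shell
# Typical per-user conda package cache locations; adjust for your install.
found=0
for d in "$HOME/.conda/pkgs" "$HOME/miniconda3/pkgs" "$HOME/anaconda3/pkgs"; do
    if [ -d "$d" ]; then
        du -sh "$d"    # size of cached tarballs and extracted packages
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "no conda package cache found in the usual locations"
fi
```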

User and Project Scratch Directories


Ginsburg users can create directories in their account's scratch storage using their UNI or a project name.


$ cd /burg/<ACCOUNT>/users/
$ mkdir <UNI>


For example, an astro member may create the following directory:


$ cd /burg/astro/users/
$ mkdir <UNI>


Alternatively, for a project shared with other users:


$ cd /burg/astro/projects/
$ mkdir <PROJECT_NAME>


Naming conventions (such as using your UNI for your users directory) are not enforced, but following them is highly recommended, as they have worked well as organizational mechanisms on previous clusters.
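For a project directory shared with other users, you may also want group members to be able to write to it. A minimal sketch using standard POSIX permissions (the path here is a throwaway stand-in for your project directory, and whether group write access is appropriate depends on your group):

```shell
# Stand-in for a project directory such as /burg/<ACCOUNT>/projects/<PROJECT_NAME>
mkdir -p /tmp/demo_project

# 2770: owner and group get full access, others get none; the leading 2
# (setgid bit) makes new files inside inherit the directory's group.
chmod 2770 /tmp/demo_project
stat -c '%a' /tmp/demo_project    # prints 2770
```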

No Backups

Storage is not backed up. User files may be lost due to hardware failure, user error, or other unanticipated events.


It is the responsibility of users to ensure that important files are copied from the system to other more robust storage locations.



