
Getting Access

Access to the cluster is subject to formal approval by selected members of the participating research groups. See the HPC service catalog for more information on access options.


Introductory Linux Training and Resources

If you're not familiar with basic Linux commands and usage, or if you need a refresher on these topics, see our workshop series. For a list of recorded trainings and upcoming research computing workshops and events, please see:

https://www.cuit.columbia.edu/rcs/training




Logging In


You will need to use SSH (Secure Shell) in order to access the cluster. Windows users can use PuTTY or Cygwin. macOS users can use the built-in Terminal application.


Users log in to the cluster's submit node at terremoto.rcs.columbia.edu (or the shorter alias moto.rcs.columbia.edu). If logging in from a command line, type:


$ ssh <UNI>@terremoto.rcs.columbia.edu



OR


$ ssh <UNI>@moto.rcs.columbia.edu


where <UNI> is your Columbia UNI. Do not include the angle brackets ('<' and '>') in your command; they only mark UNI as a placeholder.


When prompted, enter your usual Columbia password.
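If you log in frequently, you can define a host alias in your SSH client configuration so that a short command is enough. This is a sketch; the alias name "moto" is arbitrary, and you must replace <UNI> with your own UNI:

```
# ~/.ssh/config — hypothetical alias for the Terremoto submit node
Host moto
    HostName moto.rcs.columbia.edu
    User <UNI>
```

With this entry in place, `ssh moto` is equivalent to the full command above.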


Submit Account


You must specify your account whenever you submit a job to the cluster. You can use the following table to identify the account name to use.


Note that at this time not all group names have been finalized.


Account      Full Name

apam         Applied Physics and Applied Mathematics
asenjo       Asenjo Lab
astro        Astronomy and Astrophysics
atmchm       Atmospheric Chemistry
axs          Axel Lab
berkelbach   Berkelbach Group
cboyce       Boyce
cheme        Chemical Engineering
cs           Computer Science (Yang, Jana, Wing)
eaton        Eaton
edu          Education Users
fortin       Fortin Lab
febio        Ateshian / Morrison
gsb          Graduate School of Business
hblab        Harmen Bussemaker Lab
hill         Hill
iicd         Irving Institute for Cancer Dynamics
katt2        Hirschberg
mauel        Michael Mauel
nklab        Kriegeskorte Lab
palab        Przeworski / Andolfatto Lab
pdlab        Dutrieux
qmech        Quantum Mechanics
roam         Ciocarlie
slab         Sharma Lab
sscc         Social Science Computing Committee
stats        Statistics
trl          Turbulence Research Lab
urban        Urban Lab
yoon         Yoon
zi           Zuckerman Institute
rent<UNI>    Renters



Your First Cluster Job

When you first log in to Terremoto, you are on a login node. Login nodes are not places where users should do actual work, aside from simple tasks like editing a file or creating new folders.

Instead, it is important to move from the login node to a compute node before doing most work. For example, to start a two-hour interactive session:

$ srun --pty -t 0-2:00 -A <ACCOUNT> /bin/bash

Now you have moved from the login node to one of the compute nodes on the cluster. The simple tasks mentioned above can also be done here, but this is where you should run real workloads and submit scripts for processing.

If the HPC group notices jobs being run on a login node, such jobs will be terminated and the user notified.
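A quick way to check whether you are currently on a login node or a compute node is to print the hostname; the two kinds of nodes have different names (the exact naming scheme on Terremoto is an assumption here):

```shell
# Print the name of the node you are currently on; compare it to the
# name you saw before running srun to confirm you moved to a compute node
hostname

# (Type "exit" to leave an interactive srun session and return to the login node.)
```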

Submit Script


This script will print "Hello World", sleep for 10 seconds, and then print the time and date. The output will be written to a file in your current directory.


In order for this example to work you need to replace <ACCOUNT> with your account name. If you don't know your account name the table in the previous section might help.


#!/bin/sh
#
# Simple "Hello World" submit script for Slurm.
#
# Replace <ACCOUNT> with your account name before submitting.
#
#SBATCH --account=<ACCOUNT>      # The account name for the job.
#SBATCH --job-name=HelloWorld    # The job name.
#SBATCH -c 1                     # The number of cpu cores to use.
#SBATCH --time=1:00              # The max run time for the job (here, 1 minute)
#SBATCH --mem-per-cpu=1gb        # The memory the job will use per cpu core.

echo "Hello World"
sleep 10
date

# End of script


Job Submission


If this script is saved as helloworld.sh you can submit it to the cluster with:


$ sbatch helloworld.sh


This job will create one output file named slurm-####.out, where #### is the job ID assigned by Slurm. If all goes well the file will contain the words "Hello World" and the current date and time.
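When sbatch accepts a job it prints a line of the form "Submitted batch job <ID>". You can capture that ID in a shell variable to locate the matching output file. This is a sketch using an example output string; on the cluster you would capture the real sbatch output instead:

```shell
# sbatch prints a line like "Submitted batch job 12345"; capture the ID.
# (On the cluster: out=$(sbatch helloworld.sh))
out="Submitted batch job 12345"          # example sbatch output
jobid=$(echo "$out" | awk '{print $4}')  # extract the numeric job ID
echo "slurm-${jobid}.out"                # name of the output file Slurm writes

# On the cluster, you could then check status and read the output:
#   squeue -j "$jobid"
#   cat "slurm-${jobid}.out"
```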


See the Slurm Quick Start Guide for a more in-depth introduction on using the Slurm scheduler.
