CMS: Personal Programming Project (PPP)

HPC Cluster

  1. Overview
  2. Connection
  3. File transfer
  4. Jobs
  5. Abaqus with User Routines

Overview

The HPC Cluster of the Computing Centre of the University runs in batch mode and may be used for PPP or master theses under certain circumstances. In particular, approval by the supervisor is required. If access to the HPC Cluster is approved by the supervisor, students need to apply here for a login under the point "HPC: Beantragung". Name the supervisor as project leader (Projektleiter).

Connection

The HPC Cluster can be accessed only remotely via SSH. SSH clients are shipped with Linux and macOS by default. Under Windows, the free client MobaXterm is recommended. First, connect to the university network via VPN. Then start the SSH client and connect to the login node via

 ssh USERNAME@mlogin01.hrz.tu-freiberg.de

from the prompt. Therein, USERNAME is your central university login name. Alternatively, MobaXterm offers a session manager for SSH connections. Experienced users can use an SSH tunnel through sshproxy.tu-freiberg.de instead of VPN.
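For the tunnel variant, a sketch of a local OpenSSH client configuration could look as follows. The host names are taken from above; the alias "hpc" is arbitrary, and the ProxyJump setup is an assumption about your local client (it requires OpenSSH 7.3 or newer), not prescribed by this page:

```shell
# Sketch of an ~/.ssh/config entry for connecting through the SSH
# proxy instead of VPN. Host names are from this page; the alias
# "hpc" is arbitrary. ProxyJump requires OpenSSH 7.3 or newer.
Host hpc
    HostName mlogin01.hrz.tu-freiberg.de
    User USERNAME
    ProxyJump USERNAME@sshproxy.tu-freiberg.de
```

With this entry in place, "ssh hpc" suffices to open a connection to the login node.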

The HPC cluster can be controlled via the command line only. The Midnight Commander (command "mc") offers some convenience for file management.

File transfer

Each user has a separate home directory on the HPC cluster. Files can be transferred from and to the HPC Cluster by means of an SFTP connection to the aforementioned login node. Under Linux, most file managers like Nautilus or Midnight Commander can access SFTP repositories directly, or SFTP repositories can be mounted into the system.

Under Windows, a separate SFTP client is required. MobaXterm contains such a client and establishes an SFTP connection alongside any SSH connection by default. Files can be downloaded and uploaded to the HPC home directory in the tree view at the "Sftp" tab on the left-hand side of the main window. Numerous other SFTP clients are freely available, such as FileZilla or WinSCP.
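From a command line (Linux, macOS, or a MobaXterm terminal), files can also be transferred with scp, which runs over the same SSH connection. The file names below are placeholders for illustration:

```shell
# Upload a local input file to the HPC home directory
# (run on the local machine; file names are placeholders)
scp uelnonli.inp USERNAME@mlogin01.hrz.tu-freiberg.de:~/

# Download a result file from the cluster to the current local folder
scp USERNAME@mlogin01.hrz.tu-freiberg.de:~/result.odb .
```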

Jobs

Computations must not be started on the login node! Rather, you need to submit jobs via the queuing system PBS Professional. Each job requests certain resources such as the number of CPU cores ("ncpus"), memory ("mem") and maximum runtime ("walltime"). For first steps and testing purposes, an interactive job can be started via

 qsub -I -X -l select=1:ncpus=4:mem=40GB -l walltime=01:00:00

In this case, the commands typed into the current terminal are forwarded directly to the assigned compute node. First, the required software modules have to be loaded by the command

 module add MODULENAME

Typical modules for PPP could be

 module add python

for Python scripts or

 module add abaqus

for Abaqus. All available modules can be listed by

 module av

Most computing jobs are submitted as shell scripts via

 qsub -l select=1:ncpus=1:mem=5GB -l walltime=02:00:00 script.sh

where script.sh in the current folder contains the commands to be executed, including loading the modules. The state of submitted jobs can be traced by the command
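A minimal job script could look as follows. The #PBS directives are an assumed alternative to passing the resource requests on the qsub command line, and myscript.py is a placeholder for your own program:

```shell
#!/bin/bash
# Sketch of a job script (script.sh). The #PBS directives below are
# an assumed alternative to the qsub command-line resource requests;
# myscript.py is a placeholder for your own program.
#PBS -l select=1:ncpus=1:mem=5GB
#PBS -l walltime=02:00:00

# PBS starts the job in the home directory; change to the
# directory from which the job was submitted
cd "$PBS_O_WORKDIR"

# load the required software modules
module add python

# run the actual computation
python myscript.py
```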

 qstat

Further information on the queuing system can be found here.

Abaqus with User Routines

(The following example requires that the files "uelnonli.f" and "uelnonli.inp" from Section 4.1.14 of the Abaqus Verification Guide are located in the current folder.)

Commands (either in an interactive job or in a job script):

 module add gcc
 module add intel
 module add abaqus/6.14-2
 abq6142 interactive job=uelnonli user=uelnonli.f
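The same commands can be wrapped into a job script for batch submission. The #PBS resource values below are assumptions for this small test case, not prescribed by this page:

```shell
#!/bin/bash
# Sketch of a batch job script for the user-routine example.
# The resource values are assumptions for this small test case.
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=00:30:00

# change to the directory from which the job was submitted
cd "$PBS_O_WORKDIR"

module add gcc
module add intel
module add abaqus/6.14-2

# "interactive" here is an Abaqus option that keeps the solver in the
# foreground until the analysis finishes; it is unrelated to qsub -I
abq6142 interactive job=uelnonli user=uelnonli.f
```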
Abaqus/Viewer can be started for visualization in an interactive job as follows:
 qsub -I -X
 module add abaqus/6.14-2
 abq6142 viewer -mesa
Alternatively, the resulting .odb file can be copied to the local machine, where Abaqus/Viewer is run.
