 

SCART High Performance Cluster

An Overview

SCART operates a High Performance Computing (HPC) Linux cluster which provides significant computational resources for SCART in Göttingen.

The cluster is maintained by T-Systems Solutions for Research (SfR). The SCART system configuration was designed by IBM, with the design goal of achieving the best possible computational performance with optimal memory and I/O bandwidth. It uses an Intel platform with Intel Ivy Bridge processors operating under Novell Linux Enterprise Server to provide high performance over a wide range of applications. An Infiniband interconnect ensures excellent cluster performance with acceptable bandwidth losses. The SCART cluster has been constructed as a modular structure in order to minimize potential difficulties in keeping the system current with envisaged updates in hardware and software technology.

Hardware Configuration


The HPC Linux cluster contains 256 compute nodes, 4 GPU nodes and 4 MIC nodes, with 550 TB of raw storage in a storage architecture consisting of 2 General Parallel File System (GPFS) nodes and 2 storage boxes, 4 frontend nodes, a management server, and 2 separate network systems (standard Gigabit Ethernet and Infiniband). Compute nodes are dual-socket systems equipped with 10-core Intel processors (2.8 GHz) and a local solid-state disk (64 GB) per socket. Configured memory operates at the maximum possible performance across all nodes, and node I/O traffic is prioritized so that the interplay between I/O traffic and interprocess communication can be controlled and MPI latencies remain small even under heavy I/O traffic. Frontend nodes are interconnected within the cluster via Gigabit Ethernet switching, while all compute nodes are connected via a fully non-blocking Infiniband architecture. All central nodes (e.g. GPFS, management and frontend nodes) are equipped with RAID systems to improve system reliability. Finally, the GPFS I/O nodes are connected directly via optical fibre cable.
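As a minimal sketch of the inter-node communication this architecture supports, the following Python example passes messages between MPI ranks placed on different compute nodes; traffic between nodes travels over the Infiniband fabric. It assumes the mpi4py bindings are available on top of one of the installed MPI libraries; mpi4py itself is not part of the software list below and is used here purely for illustration.

#!/usr/bin/env python3
"""Minimal sketch of inter-node MPI message passing on the cluster.

Assumes the mpi4py bindings are available on top of the installed Intel MPI
or MVAPICH2 libraries; mpi4py itself is not listed in this overview and is
an assumption made purely for illustration.
"""
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Name of the host this rank runs on; with ranks spread over several compute
# nodes, the messages below cross the Infiniband interconnect.
node = MPI.Get_processor_name()

if rank == 0:
    # Rank 0 collects a short greeting from every other rank.
    for source in range(1, size):
        message = comm.recv(source=source, tag=0)
        print("rank 0 on {} received: {}".format(node, message))
else:
    comm.send("hello from rank {} on {}".format(rank, node), dest=0, tag=0)

Launched with, for example, mpirun -np 40 python3 mpi_hello.py spread over two compute nodes, the 40 ranks occupy the 20 cores of each dual-socket node and each rank reports the node on which it runs.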
 
Usage Policy / Installed Applications

The SCART cluster resources are shared according to a formal resource allocation policy. User access and limits are controlled via a queueing management system; a minimal submission sketch is given after the software list below. At present, users are permitted up to 10 job entries in the queueing system, and limits are also placed on the total number of CPUs that a user may use simultaneously. Open-source software, as requested by users, is available over the entire cluster. Licensed software is also installed but can only be accessed subject to the relevant licensing terms. The following software is currently installed on the system:

Compilers:

Intel Compiler, Version 2013 SP1 Update 1, including support for Xeon Phi; GNU Compiler Suite

Python:

V2.7, V3.0

Parallel libraries:

Intel MPI libraries, MVAPICH2

Development:

Eclipse Development environment

Debugging:

Allinea DDT for parallel code debugging

Postprocessing software:

Tecplot, IDL, ParaView

CFD:

TAU, AMROC, OpenFOAM
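
As a hedged sketch of how a batch job might be submitted to the queueing management system described in the usage policy above, the following Python example writes a small job script and hands it to the scheduler. The scheduler on SCART is not named in this overview; the PBS-style qsub command, the resource directives and the executable name are assumptions made purely for illustration and must be adapted to the actual system.

#!/usr/bin/env python3
"""Sketch of submitting a batch job to the cluster queueing system.

The actual scheduler and its directives are not specified in this overview;
a PBS-style qsub interface is assumed here purely for illustration.
"""
import subprocess
import tempfile

# Hypothetical job script requesting one full compute node: a dual-socket
# system with two 10-core processors, i.e. 20 cores, as described above.
job_script = """#!/bin/bash
#PBS -N mpi_example
#PBS -l nodes=1:ppn=20
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
mpirun -np 20 ./my_mpi_program
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as handle:
    handle.write(job_script)
    script_path = handle.name

# Submit the script; qsub prints the job identifier on success.
job_id = subprocess.check_output(["qsub", script_path]).decode().strip()
print("Submitted job:", job_id)

Whether such a job is accepted also depends on the per-user limits mentioned above (at most 10 queued jobs and a cap on the number of CPUs used simultaneously).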


RealVNC is used to view the remote desktop on the client side. RealVNC allows the establishment of encrypted end-to-end connections, which ensures system security and allows convenient system access from remote workstations.

 

Please fill out the following "Request for use of the high-performance computers at SCART". Leave blank any fields that you cannot answer. Fields printed in bold must be filled out. If you have any questions, you may contact the computing coordinator, Dr. Keith Weinman (phone: +49(0)551 709-2339).

 
German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, SCART
Bunsenstraße 10, 37075 Göttingen, Germany