DRAGNET GPU Cluster Specifications
Raw specifications of the cluster. More details can be found in the tender offer.
Nodes
- 1 head node
- 1 processing node
- 23 worker nodes
Head Node (1)
dragnet.control.lofar: login, home dirs + file server (installed programs, via NFS), job scheduling
- Intel Xeon Haswell-EP E5-1620 v3, 4 cores (8 hw threads), 3.50 GHz
- 32 GiB RAM
- 2x 1 Gb eth network
- 2x 4 TB storage (RAID 1, i.e. 4 TB usable)
Processing Node (1)
dragproc.control.lofar: central post-processing, database, reliable storage; secondary/backup node for login/file server/job scheduling
- dual Intel Xeon Haswell-EP E5-2630 v3, 16 cores (32 hw threads), 2.40 GHz
- 128 GiB RAM
- 2x 1 Gb eth network
- 10 Gb eth network
- 8x 4 TB storage (RAID 6, i.e. 24 TB usable) on /data
Worker Nodes (23)
drg[01-23].control.lofar: bulk data processing, receive data from COBALT, scratch storage
- dual Intel Xeon Haswell-EP E5-2630 v3, 16 cores (32 hw threads), 2.40 GHz
- 4x NVIDIA GeForce GTX Titan X, 12 GiB RAM each (FP32 total over all GPUs: 565.25 TFlop/s; see the sketch after this list)
- 128 GiB RAM
- 2x 1 Gb eth network
- 10 Gb eth network
- 56 Gb FDR InfiniBand network
- 4x 4 TB storage (dual RAID 0, i.e. 16 TB usable; all nodes total: 368 TB) on /data1 and /data2 (note: local per node and not redundant!)
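As a sanity check on the quoted GPU total: 23 nodes x 4 cards = 92 GPUs, and at the Maxwell Titan X reference base clock of 1.0 GHz each card peaks at 3072 CUDA cores x 2 FLOP/cycle x 1.0 GHz = 6.144 TFlop/s, so 92 x 6.144 TFlop/s = 565.25 TFlop/s, matching the figure above. The minimal CUDA sketch below (illustrative only, not part of the cluster software; the file name is made up) enumerates the devices visible on a worker node and recomputes this per-card peak:

  // gpucheck.cu -- hypothetical sanity-check tool; compile: nvcc -o gpucheck gpucheck.cu
  #include <cstdio>
  #include <cuda_runtime.h>

  int main(void) {
      int n = 0;
      if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
          fprintf(stderr, "no CUDA devices found\n");
          return 1;
      }
      printf("%d CUDA device(s)\n", n);  // expect 4 per drg node
      for (int i = 0; i < n; ++i) {
          cudaDeviceProp p;
          cudaGetDeviceProperties(&p, i);
          // FP32 peak = cores/SM * SMs * 2 FLOP/cycle * clock (clockRate is in kHz).
          // 128 cores per SM holds for Maxwell (e.g. GTX Titan X: 24 SMs = 3072 cores);
          // other GPU architectures have different core counts per SM.
          double tflops = 128.0 * p.multiProcessorCount * 2.0 * (p.clockRate * 1e3) / 1e12;
          printf("GPU %d: %s, %.0f GiB, ~%.2f TFlop/s FP32\n", i, p.name,
                 p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0), tflops);
      }
      return 0;
  }

On a drg node this should report four devices with roughly 12 GiB each; note that the device-reported clock is typically the boost clock, so the per-card figure can come out somewhat above the base-clock 6.144 TFlop/s used in the total above.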
Networking
- InfiniBand
- 10 Gb Ethernet
- dual 1 Gb Ethernet
InfiniBand
- Mellanox MSX6025F-1SFS: 36-port unmanaged FDR InfiniBand switch
- 5x 56 Gb LAG to the COBALT InfiniBand switch (~280 Gb aggregate)
10 Gb Ethernet
- Supermicro SSE-X3348S Ethernet Switch; Ports: 48x 10 Gb, 4x 40 Gb, 2x 1 Gb
- 6x 10 Gb LAG to the LOFAR Core 10 Gb network (~60 Gb aggregate)
1 Gb Ethernet
- 2x Supermicro SSE-G2252 Ethernet Switch
Support
- 4-year support contract on the complete system (delivered Thu 9 and Fri 10 July 2015)
- delivery + installation