//public:lofar_cluster — last modified 2020-10-20 14:16 by Bernard Asabere; moved from engineering:lofar_cluster 2009-10-13 by Arno Schoenmakers.//
====== The LOFAR Clusters ======
This page describes the LOFAR clusters.
We welcome authorized users on the LOFAR clusters.
===== CEP3 =====
CEP3 is the processing cluster available to LOFAR users.

===== CEP4 =====
CEP4 is the main storage and processing cluster for online applications. It is not accessible to users other than LOFAR staff. Information on CEP4 can be found [[:cep4:|here]].
===== User Access =====

==== Access through portal to Cluster frontend ====
- | | + | |
- | sub1 lce001-lce009 | + | |
- | sub2 lce010-lce018 | + | |
- | sub3 lce019-lce027 | + | |
- | sub4 lce028-lce036 | + | |
- | sub5 lce037-lce045 | + | |
- | sub6 lce046-lce054 | + | |
- | sub7 lce055-lce063 | + | |
- | sub8 lce064-lce072 | + | |
You can access the LOFAR cluster through the portal. On your own system, try:
<code>
> ssh -X portal.lofar.eu
</code>
to connect. We maintain an ssh whitelist, so only registered institutes are able to log in. Please send an email to //grit at astron.nl// or //rbokhorst at astron.nl// to add your institute or personal IP number to this white list. When you are logged in for the first time, you'll have an empty home directory.
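If you log in regularly, an entry in ''~/.ssh/config'' on your own machine can save some typing. This is a sketch only: the ''Host'' alias below is made up for illustration, and your username may need to be set if it differs from your local one:

<code>
# Sketch of a client-side ssh config; "lofar-portal" is an illustrative alias.
Host lofar-portal
    HostName portal.lofar.eu
    ForwardX11 yes
    # User your_portal_username   # uncomment and set if it differs locally
</code>

With this in place, ''ssh lofar-portal'' is equivalent to ''ssh -X portal.lofar.eu''.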
To get onto the CEP3 cluster, you first have to log in to the CEP3 frontend node.
For more sophisticated usage of ''ssh'' (for example, password-less login with keys), see the ssh documentation.
==== LOGIN environment ====
See the [[:public:lle|Lofar Login Environment]] page for information on your default login environment. Only the default environment is supported, so if you deviate from it, you're on your own!
==== Do and Don'ts ====
**DON'T:**
  * :!: Store data in your home directory.
  * Pump around large amounts of data between nodes.
  * Leave unused data on the systems. Clean it up, so others can make use of the available disk space.
**DO:**
  * Help improve the system by sending suggestions.
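Acting on the cleanup point above: a quick way to spot stale data is to list your largest files that have not been modified for a while. A minimal sketch, assuming a GNU/Linux node and using ''$DATA_DIR'' as a placeholder for wherever your data actually lives:

<code bash>
# List the ten largest files under $DATA_DIR not modified in the last 90 days.
# Review the list before deleting anything; DATA_DIR is a placeholder path.
DATA_DIR=${DATA_DIR:-"$HOME"}
find "$DATA_DIR" -type f -mtime +90 -printf '%s\t%p\n' 2>/dev/null \
  | sort -rn | head -n 10
</code>

The ''%s\t%p\n'' format (GNU find) prints size and path, so ''sort -rn'' orders the candidates by size.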
===== Contributed Software =====
Some groups or people would like their favourite tools or pipelines added to the cluster software.