====== The LOFAR Clusters ======
  
This page describes the LOFAR clusters for (potential) users. The CEP4 cluster is used to store the data from Cobalt and to run the pipelines that do the "standard" reduction and calibration of this data. After this pipeline processing, the results are stored in the Lofar Export (staging) Archive. For further processing, the CEP3 cluster can be used.
  
We welcome authorized users on the LOFAR clusters.
  
===== CEP3 =====
  
CEP3 is the processing cluster available to LOFAR users who need additional processing and storage of their data. Details on requesting access and its usage are [[:cep3:start|on this page]].
  
===== CEP4 =====
  
CEP4 is the main storage and processing cluster for online applications. It is not accessible to users other than LOFAR staff. Information on CEP4 can be found [[:cep4:start|on this page]].
  
===== User Access =====
  
==== Access through portal to Cluster frontend ====
  
You can access the Lofar cluster through the portal. On your own system, try:
<code>
> ssh -X portal.lofar.eu
</code>
  
to connect. We maintain an ssh whitelist, so only registered institutes are able to log in. Please send an email to //grit at astron.nl// or //rbokhorst at astron.nl// to have your institute or personal IP number added to this whitelist. When you log in for the first time, you'll have an empty home directory.
  
To get onto the CEP3 cluster, you first have to log in to the frontend node ''lhdhead.offline.lofar'' using ''ssh -X'' (as above).
  
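In practice this is a two-hop login; a minimal sketch, assuming your account is enabled on both the portal and the CEP3 frontend:
<code>
# hop 1: from your own system to the LOFAR portal
> ssh -X portal.lofar.eu
# hop 2: from the portal to the CEP3 frontend
> ssh -X lhdhead.offline.lofar
</code>
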
For more sophisticated usage of ''ssh'', read [[:public:ssh-usage|this page]].
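
One example of such usage (an illustrative sketch, not site documentation; the host aliases below are made up): an OpenSSH ''ProxyJump'' entry in ''~/.ssh/config'' lets a single command reach the CEP3 frontend through the portal.
<code>
# ~/.ssh/config  (illustrative; adjust aliases and user names to your own setup)
Host lofar-portal
    HostName portal.lofar.eu
    ForwardX11 yes

Host cep3-head
    HostName lhdhead.offline.lofar
    ProxyJump lofar-portal
    ForwardX11 yes
</code>
With this in place, ''ssh cep3-head'' performs both hops in one go (requires OpenSSH 7.3 or later).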
  
==== LOGIN environment ====
  
See the [[:public:lle|Lofar Login Environment]] page for information on your default login environment. Only the default environment is supported, so if you deviate from it, you're on your own!
  
==== Do's and Don'ts ====
  
**DON'T:**
  
  * :!: Store data in your ''$HOME'' on ''portal.lofar.eu''; this system has a ridiculously small disk, and filling it will prevent other users from accessing the system properly. :!:
  * Pump around large amounts of data between nodes.
  * Leave unused data on the systems. Clean it up so that others can make use of the available disk space (a quick check is sketched below).
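
A quick way to keep an eye on your own usage (plain shell commands, nothing site-specific):
<code>
# how much data do I have, and how full is the filesystem it lives on?
> du -sh $HOME
> df -h $HOME
</code>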
  
**DO:**
  
  * Help improve the system by sending suggestions for improvements or new packages to the administrators.
  
===== Contributed Software =====
  
Some groups or people would like their favourite tools or pipelines added to the cluster software. That can usually be accommodated, but there are some guidelines. Click [[:public:user_software_guidelines|here]] to learn more!
  
  