====== The LOFAR Clusters ======
  
This page describes the LOFAR clusters for (potential) users. The CEP4 cluster is used to store the data from Cobalt and to run the pipelines that perform the "standard" reduction and calibration of these data. After this pipeline processing, the results are stored in the LOFAR Export (staging) Archive. For further processing, the CEP3 cluster can be used.
  
We welcome authorized users on the LOFAR clusters.

===== CEP3 =====

CEP3 is the processing cluster that is available to LOFAR users who need additional processing and storage of their data. Details on requesting access and on its usage are [[cep3:start|on this page]].

===== CEP4 =====

CEP4 is the main storage and processing cluster for online applications. It is not accessible to users other than LOFAR staff. Information on CEP4 can be found [[cep4:start|on this page]].
  
===== User Access =====
Use
<code>
> ssh -X portal.lofar.eu
</code>
to connect. We maintain an ssh whitelist, so only registered institutes are able to log in.
Please send an email to //grit at astron.nl// or //rbokhorst at astron.nl// to have your institute's or your personal IP address added to this whitelist.
When you are logged in for the first time, you'll have an empty home directory.
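
If you need to bring scripts or small data files with you, you can, for example, copy them to your portal home directory first (''yourname'' below is a placeholder for your own LOFAR user name):
<code>
> scp mytools.tar.gz yourname@portal.lofar.eu:
</code>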
  
To get onto the CEP3 cluster, you first have to log in to the frontend node ''lhdhead.offline.lofar'' using ''ssh -X'' (as above).
  
For more sophisticated usage of ''ssh'', read [[public:ssh-usage|this page]].
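
As an illustration, here is a minimal ''~/.ssh/config'' sketch for reaching the CEP3 frontend through the portal in one step. It assumes a recent OpenSSH (7.3 or later, for ''ProxyJump''); ''yourname'' is again a placeholder for your LOFAR user name:
<code>
# ~/.ssh/config -- reach the CEP3 frontend via the portal
# Replace "yourname" with your own LOFAR account name.
Host lofar-portal
    HostName portal.lofar.eu
    User yourname
    ForwardX11 yes

# ProxyJump requires OpenSSH 7.3 or newer.
Host lhdhead
    HostName lhdhead.offline.lofar
    User yourname
    ProxyJump lofar-portal
    ForwardX11 yes
</code>
With a configuration like this, ''ssh lhdhead'' (or, for instance, ''scp myfile.txt lhdhead:'') goes through the portal automatically.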
  
Some groups or people would like their favourite tools or pipelines added to the cluster software. That can usually be accommodated, but there are some guidelines. Click [[user_software_guidelines|here]] to learn more!
  
  
  