Revisions: ''public:lofar_cluster'' [2009-10-15 07:47] by Arno Schoenmakers; ''public:lofar_cluster'' [2020-10-20 14:16] (current) by Bernard Asabere.
====== The LOFAR Clusters ======

This page describes the LOFAR clusters.

We welcome authorized users on the LOFAR clusters.
===== CEP3 =====

CEP3 is the processing cluster for LOFAR users.

===== CEP4 =====

CEP4 is the main storage and processing cluster for online applications. It is not accessible by users other than LOFAR staff. Information on CEP4 can be found on a separate page.
===== User Access =====

==== Access through portal ====

You can access the LOFAR cluster through the portal. On your own system, try:
<code>
> ssh -X portal.lofar.eu
</code>
to connect. We maintain an ssh whitelist, so only registered institutes are able to log in. Please send an email to //grit at astron.nl// or //rbokhorst at astron.nl// to add your institute or personal IP number to this whitelist. When you are logged in for the first time, you'll have an empty home directory.

To get onto the CEP3 cluster, you first have to log in to the frontend node. For more sophisticated usage, see the LOGIN environment section below.
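The two-hop login (portal, then frontend) can be put in your ''~/.ssh/config'' so a single command reaches the cluster. This is only a sketch: the ''cep3-frontend''/''FRONTEND-NODE'' names below are placeholders, not the real CEP3 frontend host name, and ''ProxyJump'' needs OpenSSH 7.3 or newer.
<code>
# Hypothetical ~/.ssh/config sketch; replace FRONTEND-NODE with the
# actual CEP3 frontend node name given to you by the administrators.
Host lofar-portal
    HostName portal.lofar.eu
    ForwardX11 yes

Host cep3-frontend
    HostName FRONTEND-NODE
    ProxyJump lofar-portal
    ForwardX11 yes
</code>
With this in place, ''ssh cep3-frontend'' tunnels through the portal in one step.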
==== LOGIN environment ====
- | See [[public: | + | See [[:public: |
==== Dos and Don'ts ====
**DON'T:**
  * :!: Store data in your home directory.
  * Pump around large amounts of data between nodes.
  * Leave unused data on the systems. Clean it up, so others can make use of the available disk space.
**DO:**
  * Help improve the system by sending suggestions for improvements or new packages to the administrators.
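The clean-up point above is easy to act on with standard tools. A minimal sketch, assuming ''$DATA'' points at your work area on the cluster (the default of ''$HOME'' here is only so the commands run anywhere):
<code bash>
# Point DATA at your data directory on the cluster; $HOME is just a fallback.
DATA="${DATA:-$HOME}"

# Ten largest items in the work area, biggest first.
usage=$(du -sh "$DATA"/* 2>/dev/null | sort -rh | head -n 10)
echo "$usage"

# Files untouched for more than 90 days: candidates for removal.
stale=$(find "$DATA" -maxdepth 2 -type f -mtime +90 2>/dev/null | head -n 20)
echo "$stale"
</code>
Review the ''find'' output before deleting anything; adding ''-delete'' to a ''find'' command is irreversible.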
===== Contributed Software =====
Some groups or people would like their favourite tools or pipelines added to the cluster software.