====== The LOFAR Clusters ======

This page describes the LOFAR clusters and how to access them.

We welcome authorized users on the LOFAR clusters.

===== CEP3 =====

CEP3 is the processing cluster available to authorized users for offline processing of LOFAR data.

===== CEP4 =====

CEP4 is the main storage and processing cluster for online applications. It is not accessible to users other than LOFAR staff. Information on CEP4 can be found [[cep4:start|here]].

===== User Access =====

==== Access through portal to Cluster frontend ====

You can access the LOFAR clusters through the portal. On your own system, try:
<code>
> ssh -X portal.lofar.eu
</code>
We maintain an ssh whitelist, so only known institutes are able to login. Please send an email to //grit at astron.nl// or //rbokhorst at astron.nl// to add your institute or personal IP number to this white list.
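
If you are not sure which IP number to send, one quick way to look it up is the third-party ifconfig.me service (this assumes ''curl'' is available on your machine):
<code>
# Prints the public IP number that remote hosts see for your machine
> curl https://ifconfig.me
</code>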

When you are logged in for the first time, you'll have an empty home directory.

To get onto the CEP3 cluster, you first have to login to the CEP3 frontend node.
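
From the portal, that hop is another ''ssh'' call. A minimal sketch, where ''<cep3-frontend>'' stands in for the frontend node name assigned to you:
<code>
# On portal.lofar.eu: hop on to the CEP3 frontend
# (<cep3-frontend> is a placeholder for the actual node name)
> ssh -X <cep3-frontend>
</code>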

For more sophisticated usage of ''ssh'', see the configuration sketch below.
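
For example, a few lines in ''~/.ssh/config'' on your own machine turn the two-hop login into a single command. This is a sketch, assuming OpenSSH 7.3 or later (for ''ProxyJump''); the user name and frontend host name are placeholders:
<code>
# ~/.ssh/config -- reach the CEP3 frontend through the portal in one step
# <yourname> is a placeholder for your LOFAR account name
Host lofar-portal
    HostName portal.lofar.eu
    User <yourname>

# <cep3-frontend> is a placeholder for the actual frontend node name
# ProxyJump requires OpenSSH 7.3 or later
Host cep3-frontend
    HostName <cep3-frontend>
    User <yourname>
    ProxyJump lofar-portal
</code>
After this, ''ssh -X cep3-frontend'' brings you to the frontend through the portal in one go.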

==== LOGIN environment ====

The standard login environment is described on a separate page in the ''public'' namespace of this wiki.

==== Do's and Don'ts ====

**DON'T:**
  * :!: Store data in your home directory.
  * Pump around large amounts of data.
  * Leave unused data on the systems (see the commands below for checking your usage).

**DO:**
  * Help improve the system by sending suggestions for improvements or new packages to the administrators.

===== Contributed Software =====

Some groups or people would like their favourite tools or pipelines added to the cluster software. Suggestions for new packages can be sent to the administrators (see the DO list above).