Here is the most recent information on how to use JURECA for LOFAR processing. Last edit: August 2015.

Account

First of all you need an account on the system. The Project leader is Matthias Hoeft and the Project ID is HTB00 (needed for registration). The following website contains all necessary links for allocating computing time in the Jülich Supercomputing Centre (JSC). Click on the link “User Accounts for projects on JUQUEEN, JURECA,…” and follow the instructions.
http://www.fz-juelich.de/ias/jsc/EN/Expertise/Services/JSConline/ComputingTime/_node.html
German version:
http://www.fz-juelich.de/ias/jsc/DE/Leistungen/Dienstleistungen/JSCOnline/Rechenzeitvergabe/_node.html
Get in contact with Matthias so he can sign your account application and initiate the next steps.

Acquiring Data

Take a look at this page on how to retrieve data from the LTA:
http://www.lofar.org/operations/doku.php?id=public:lta_howto
To download data from the web you need the full filename. You can look these up in the catalog:
http://lofar.target.rug.nl/Lofar
The Jülich HTTP download server is at:
https://lofar-download.fz-juelich.de/
For SARA:
https://lofar-download.grid.sara.nl/
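With the full filename from the catalog, a plain HTTP download can be done with wget. The filename below is a placeholder, not a real file; substitute the name you looked up, and use the SARA URL for data stored there:

```shell
# Placeholder filename: replace with the full name from the catalog.
wget https://lofar-download.fz-juelich.de/<full_filename_from_catalog>
```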

The recommended way to copy data is via SRM copy. For this you need a Grid Certificate and to register in the Virtual Organization (VO) as a LOFAR user.

Register with the Virtual Organization

You can register with the LOFAR VO here: https://voms.grid.sara.nl:8443/voms/lofar

Grid Certificate

To get direct SRM copy access to the LTA storage you need a Grid Certificate.
It is best to ask around in your institute where to get such a certificate and how to install it. General information about German grid certificates can be found here:
http://dgi-2.d-grid.de/zertifikate.php

SRM Copy

For data acquisition with grid tools please use the JUDAC system. Your JURECA account also works on JUDAC.
http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/JUDAC/JUDAC_node.html

voms-proxy-init

To use the grid proxies, simply follow these steps:
Store your private key in $HOME/.globus/userkey.pem
Store your signed certificate in $HOME/.globus/usercert.pem
Execute:

chmod 400 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem

Then generate a proxy. The following creates a proxy in your home directory, valid for 48 hours (increase if needed):

voms-proxy-init -valid 48:00 -voms lofar:/lofar/user
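After initialization, voms-proxy-info can confirm that the proxy carries the lofar VO attribute and how much lifetime remains:

```shell
# Show proxy details; check the "timeleft" field and the lofar VO entry.
voms-proxy-info --all
```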

Test data retrieval:

srmcp -server_mode=passive srm://srm.grid.sara.nl/pnfs/grid.sara.nl/data/lofar/ops/fifotest/file1M file:///file1M
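Before copying, srmls from the same client suite can be used to browse the remote directory (same endpoint as in the test above):

```shell
# List the remote test directory before copying anything.
srmls srm://srm.grid.sara.nl/pnfs/grid.sara.nl/data/lofar/ops/fifotest/
```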

LOFAR Software

The LOFAR Software Framework is installed in the home directory of user htb003. You load the environment with

. /homea/htb00/htb003/env_lofar_2.11.sh


This loads release version 2.11.
There is more software available:

  • Casapy 4.2 → env_casapy.sh
  • Karma → env_karma.sh
  • losoto → env_losoto.sh

Losoto is also part of the Python installation.
In addition you might need a copy of the measures data:
/homea/htb00/htb003/dataCEP
Put it in your home directory and point to it in a file .casarc (containing just the line “measures.directory: [yourhome]/dataCEP”).
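The copy-and-configure step above can be sketched as follows (the source path is the one given above and only exists on JURECA; elsewhere the copy is skipped silently):

```shell
# Copy the measures data into your home directory (JURECA only;
# skipped silently elsewhere), then write the .casarc pointing at it.
cp -r /homea/htb00/htb003/dataCEP "$HOME/dataCEP" 2>/dev/null || true
echo "measures.directory: $HOME/dataCEP" > "$HOME/.casarc"
cat "$HOME/.casarc"
```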

If you require access to the GlobalSkyModel database, a copy of the database from the CEP cluster runs on the JURECA login node jrl09. Access the database “gsm” on port 51000 with user “gsm” and password “msss”.

To run your jobs on the compute nodes you first have to setup and submit a job via the batch system. A detailed description can be found on the Jureca homepage
http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JURECA/UserInfo/QuickIntroduction.html?nn=1803700
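A minimal job script could look like the sketch below. The job name, node count, and walltime are assumptions to adjust for your run, and the parset and config filenames are placeholders; only the node count is set explicitly, per the note on resource options below:

```shell
#!/bin/bash
#SBATCH --job-name=lofar-pipeline   # assumption: pick your own name
#SBATCH --nodes=4                   # only the node count is configured
#SBATCH --time=02:00:00             # assumption: adjust to your run

# Load the LOFAR environment on the allocation, then start the pipeline.
. /homea/htb00/htb003/env_lofar_2.11.sh
genericpipeline.py my_pipeline.parset -c pipeline.cfg   # placeholder names
```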

When configuring the resources on the system, please use only the '--nodes' option. Ignore '--ntasks', '--ntasks-per-node' and '--cpus-per-task'. A (for the moment) working configuration is implemented in the framework itself.
To run a generic pipeline on multiple nodes, configure the following in your 'pipeline.cfg':

[remote]
method = slurm_srun
max_per_node = 1

Set 'max_per_node' individually for every step in your parset to the number of tasks you want to (or can) run per node.
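In the parset, 'max_per_node' goes into the control section of a step. A hypothetical step (the step name here is an example, not from this page) allowing four concurrent tasks per node would read:

```
# Hypothetical step name; up to four tasks of this step run on one node.
ndppp_prep.control.max_per_node = 4
```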

  • Last modified: 2017-03-08 15:27