public:lofar_processing_juelich
General information about German grid certificates can be found here:\\
[[http://dgi-2.d-grid.de/zertifikate.php]]

==== SRM Copy ====
For data acquisition with grid-tools please use the Judac system. Your account for Jureca will also work for Judac.\\
[[http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/JUDAC/JUDAC_node.html]]
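
A transfer session would typically start by logging in there; the hostname below is an assumption, check the JUDAC page linked above for the current one:
<code>
# log in to JUDAC with your Jureca account (hostname assumed)
ssh <your_account>@judac.fz-juelich.de
</code>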

== voms-proxy-init ==
To use the grid proxies simply follow these steps:\\
Store your private key in ''$HOME/.globus/userkey.pem''\\
Store your signed certificate in ''$HOME/.globus/usercert.pem''\\
Execute:\\
<code>
chmod 400 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem
</code>
Then generate a proxy; the following command creates one (valid for 48 hours, increase if needed) in your home directory:
<code>
voms-proxy-init -valid 48:00 -voms lofar:/lofar/user
</code>
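To confirm that the proxy was created and to see how long it remains valid, you can inspect it:
<code>
voms-proxy-info -all
</code>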
Test data retrieval:\\
<code>
srmcp -server_mode=passive srm://srm.grid.sara.nl/pnfs/grid.sara.nl/data/lofar/ops/fifotest/file1M file:///file1M
</code>

==== LOFAR Software ====
The LOFAR software framework is installed in the home directory of user htb003. You load the environment with
<code>. /homea/htb00/htb003/env_lofar_2.11.sh</code>
This loads release version 2.11.\\
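
A quick way to check that the environment is active is to look for one of the LOFAR executables (assuming ''NDPPP'' is part of this installation):
<code>
. /homea/htb00/htb003/env_lofar_2.11.sh
which NDPPP    # should print a path inside the LOFAR installation
</code>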
There is more software available:

  * Casapy 4.2 -> env_casapy.sh
  * Karma ->     env_karma.sh
  * losoto ->    env_losoto.sh

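These environments are presumably loaded in the same way as the LOFAR one, for example (the exact path is an assumption based on the LOFAR script above):
<code>
. /homea/htb00/htb003/env_casapy.sh
</code>
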
Losoto is also part of the Python installation.\\
In addition you might need a copy of the measures data:\\
/homea/htb00/htb003/dataCEP\\
Put it in your home directory and point to it in a file ''.casarc'' (which just contains: "measures.directory: [yourhome]/dataCEP").

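A minimal sketch of these two steps (assuming your home directory is ''$HOME''):
<code>
# copy the measures data into your home directory
cp -r /homea/htb00/htb003/dataCEP $HOME/

# point casacore to it
echo "measures.directory: $HOME/dataCEP" > $HOME/.casarc
</code>
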
If you require access to the GlobalSkyModel database, there is a copy of the database from the CEP cluster running on the Jureca login node jrl09. Access the database "gsm" on port 51000 with user "gsm" and password "msss".\\
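
On CEP the GlobalSkyModel database is a MonetDB instance; assuming this copy is too, it can be reached with the MonetDB command line client, for example:
<code>
mclient -h jrl09 -p 51000 -u gsm -d gsm    # password: msss
</code>
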
To run your jobs on the compute nodes you first have to set up and submit a job via the batch system. A detailed description can be found on the Jureca homepage:\\
[[http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JURECA/UserInfo/QuickIntroduction.html?nn=1803700]]
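
As a starting point, a minimal job script could look like the following (job name, node count, walltime and partition are placeholders to adapt; the pipeline call assumes the generic pipeline setup described below):
<code>
#!/bin/bash
#SBATCH --job-name=lofar-pipeline
#SBATCH --nodes=4                # only configure the node count, see below
#SBATCH --time=02:00:00
#SBATCH --partition=batch        # placeholder, check the Jureca documentation

. /homea/htb00/htb003/env_lofar_2.11.sh
genericpipeline.py my_pipeline.parset -c pipeline.cfg
</code>
Submit it with ''sbatch jobscript.sh'' and check its state with ''squeue''.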

When configuring the resources on the system please use only the ''--nodes'' option. Ignore ''--ntasks'', ''--ntasks-per-node'' and ''--cpus-per-task''; a (for the moment) working configuration is implemented in the framework itself.\\
To run a generic pipeline on multiple nodes, configure the following in your ''pipeline.cfg'':
<code>
[remote]
method = slurm_srun
max_per_node = 1
</code>
Set ''max_per_node'' individually for every step in your parset to the number of tasks you want to (or can) run per node.
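
In a generic-pipeline parset this is done per step through the step's ''control'' section; a sketch with a hypothetical step name:
<code>
# hypothetical DPPP step in a generic pipeline parset
ndppp_prep.control.type         = dppp
ndppp_prep.control.max_per_node = 4    # run at most 4 of these tasks per node
</code>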