General information about German grid certificates can be found here:\\
[[http://dgi-2.d-grid.de/zertifikate.php]]
==== SRM Copy ====
For data acquisition with grid tools, please use the Judac system. Your account for Jureca will also work on Judac.\\
== voms-proxy-init ==
To use the grid proxies, simply follow these steps:\\
Store your private key in ''$HOME/.globus/userkey.pem''\\
Store your signed certificate in ''$HOME/.globus/usercert.pem''\\
Then restrict the permissions of both files:
<code>chmod 400 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem</code>
Then you have to generate a proxy. This creates a proxy (valid for 48 hours; increase if needed) in your home directory:
<code>voms-proxy-init -valid 48:00 -voms lofar:/lofar/user</code>
Test data retrieval:\\ <code>srmcp -server_mode=passive srm://srm.grid.sara.nl/pnfs/grid.sara.nl/data/lofar/ops/fifotest/file1M file:///file1M</code>
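The file permissions above matter: grid tools refuse to use a key that is readable by others. A minimal sketch that checks the expected modes before you generate a proxy (''check_globus_perms'' is a hypothetical helper, and GNU ''stat'' is assumed):

<code>
# Sketch: verify that the certificate files have the permissions
# the grid tools expect (userkey.pem: 400, usercert.pem: 600).
# check_globus_perms is a hypothetical helper; GNU stat is assumed.
check_globus_perms() {
  local dir="${1:-$HOME/.globus}"
  [ "$(stat -c %a "$dir/userkey.pem")" = "400" ] \
    || { echo "userkey.pem must be mode 400"; return 1; }
  [ "$(stat -c %a "$dir/usercert.pem")" = "600" ] \
    || { echo "usercert.pem must be mode 600"; return 1; }
  echo "permissions ok"
}
</code>

Run it once after copying the files; it prints "permissions ok" only when both modes match.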
==== LOFAR Software ====
The LOFAR Software Framework is installed in the home directory of user htb003. You load the environment with
<code>. /homea/htb00/htb003/env_lofar_2.11.sh</code>
This loads release version 2.11.\\
There is more software available:
  * Casapy 4.2 -> env_casapy.sh
  * Karma -> env_karma.sh
  * LoSoTo -> env_losoto.sh
LoSoTo is also part of the Python installation.\\
In addition you might need a copy of the measures data.\\
Put it in your home directory and point to it in a file ''.casarc'', which just contains: "measures.directory: <path to your measures data>"
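As a sketch, a complete ''.casarc'' is a single line; the path below is a placeholder for wherever you unpacked your copy of the measures data:

<code>
measures.directory: <path-to-your-measures-data>
</code>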
If you require access to the GlobalSkyModel database, there is a copy of the database from the CEP cluster running on the Jureca login node jrl09. Access the database "gsm" on port 51000 with user "gsm" and password "msss".\\
To run your jobs on the compute nodes you first have to set up and submit a job via the batch system. A detailed description can be found on the Jureca homepage.\\
When configuring the resources on the system, please use only the '--nnodes' option. Ignore '--ntasks', '--ntasks-per-node' and '--cpus-per-task'. A (for the moment) working configuration is implemented in the framework itself.\\
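As a sketch, a minimal batch job could look like the following. The time limit, job name and the pipeline invocation are placeholders to adapt to your run; note that standard Slurm spells the node-count option '--nodes':

<code>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --time=02:00:00
#SBATCH --job-name=lofar_pipeline

# Load the LOFAR environment as described above.
. /homea/htb00/htb003/env_lofar_2.11.sh

# Placeholder: start your pipeline run here.
genericpipeline.py my_pipeline.parset -c pipeline.cfg
</code>

Submit it with ''sbatch'' and monitor it with ''squeue''.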
To run a generic pipeline on multiple nodes, configure the pipeline as follows. In your ''pipeline.cfg'' set
<code>method = slurm_srun
max_per_node = 1</code>
Set ''max_per_node'' individually for every step in your parset to the number of tasks you want to (or can) run per node.
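A sketch of how the two settings above sit in ''pipeline.cfg''; the section name ''[remote]'' is an assumption based on the usual layout of the pipeline framework configuration:

<code>
[remote]
method = slurm_srun
max_per_node = 1
</code>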
  • Last modified: 2015-08-10 12:24
  • by Stefan Froehlich