====== LOFAR Processing Juelich ======

  
==== SRM Copy ====
For data acquisition with grid-tools please use the Judac system. Your account for Jureca will also work for Judac.\\
[[http://www.fz-juelich.de/ias/jsc/EN/Expertise/Datamanagement/JUDAC/JUDAC_node.html]]
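As an untested sketch of such a transfer with the grid-tools (the SURL and the local target path are placeholders; take the real SURLs from your staging notification, and make sure you have a valid grid proxy first, see below):
<code>
srmcp -server_mode=passive \
  "srm://srm.grid.sara.nl:8443/pnfs/grid.sara.nl/data/lofar/ops/projects/LC0_001/L123456/L123456_SB000_uv.MS.tar" \
  "file:///scratch/mydata/L123456_SB000_uv.MS.tar"
</code>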
  
To use the grid proxies, simply follow these steps:\\
Store your private key in ''$HOME/.globus/userkey.pem''\\
Store your signed certificate in ''$HOME/.globus/usercert.pem''\\
Execute:\\
<code>
chmod 400 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem
</code>
Then you have to generate a proxy.
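A minimal sketch, assuming the standard Globus/VOMS clients are available on Judac (the VO name ''lofar'' is an assumption; use the VO you are actually registered with):
<code>
# plain Globus proxy (valid for 12 hours by default)
grid-proxy-init

# or a proxy with VOMS attributes (VO name is an assumption)
voms-proxy-init -voms lofar

# check the remaining lifetime of the proxy
grid-proxy-info
</code>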
"gsm" and pass "msss"\\
\\
To run your jobs on the compute nodes you first have to set up and submit a job via the batch system. A detailed description can be found on the Jureca homepage:\\
[[http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JURECA/UserInfo/QuickIntroduction.html?nn=1803700]]

When configuring the resources on the system, please use only the ''--nodes'' option. Ignore ''--ntasks'', ''--ntasks-per-node'' and ''--cpus-per-task''. A (for the moment) working configuration is implemented in the framework itself.\\
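As a sketch of a matching batch script (job name, walltime and the pipeline invocation are placeholders; only the node count is set explicitly, as advised above):
<code>
#!/bin/bash
#SBATCH --nodes=4            # the only resource option to set explicitly
#SBATCH --time=02:00:00      # placeholder walltime
#SBATCH --job-name=lofar-pipeline

genericpipeline.py my_pipeline.parset -c pipeline.cfg
</code>
Submit the script with ''sbatch'' and monitor it with ''squeue''.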
To run a generic pipeline on multiple nodes, configure the pipeline as follows.
In your ''pipeline.cfg'' set:
<code>
[remote]
method = slurm_srun
max_per_node = 1
</code>
Set ''max_per_node'' individually for every step in your parset to the number of tasks you want to (or can) run per node.
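For illustration, such a per-step setting in the parset could look like this (the step name and type are placeholders following the generic pipeline parset convention):
<code>
pipeline.steps = [ndppp_prep]

ndppp_prep.control.type         = dppp
ndppp_prep.control.max_per_node = 8   # run up to 8 tasks of this step per node
</code>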