The Jobs class is the interface to the writer; it stores all parameters required to successfully populate the template. It also wraps additional functionality, such as directory and script IO, that tries to prevent overwriting previously created scripts.

class lfd.createjobs.createjobs.Jobs(n, runs=None, template_path=None, save_path=None, queue='standard', wallclock='24:00:00', ppn='3', cputime='48:00:00', pernode=False, command='python3 -c "import detecttrails as dt; dt.DetectTrails($).process()"', res_path='run_results', **kwargs)[source]

Class that holds all the important functions for making Qsub jobs.

The template is located inside this package, in the “createjobs” folder, under the name “generic”. The default location where final results are saved on the Fermi cluster is:


This can be changed by editing the template or providing a new one. A template can also be provided in string format as a kwarg named “template”.

  • n (int) – number of jobs you want to start.
  • save_path (str) – path to directory where jobs will be stored. By default set to ~/Desktop/createjobs
  • res_path (str) – path to subdirectory on cluster master where results will be copied once the job is finished.
  • template_path (str) – path to the desired template
  • template (str) – a full template text as a string
  • queue (str) – sets the QSUB queue type: serial, standard or parallel. Defaults to standard. Your local QSUB setup will limit wallclock, cputime and queue name differently than assumed here.
  • wallclock (str) – set maximum wallclock time allowed for a job in hours. Default: 24:00:00
  • cputime (str) – set maximum CPU time allowed for a job in hours. Default: 48:00:00
  • ppn (str) – maximum allowed processors per node. Default: 3
  • command (str) – command that will be invoked by the job. Default: python3 -c “import detecttrails as dt; dt.DetectTrails($).process()” where “$” gets expanded depending on kwargs.
  • **kwargs (dict) – named parameters that will be forwarded to command. These allow targeting different subsets of data. See documentation for examples.
  • runs – if runs are not specified, all SDSS runs found in the runlist.par file will be used. If runs is a list of runs, only those runs will be sorted into jobs. If runs is a list of Event or Frame instances, only those frames will be sorted into jobs. See docs for detailed usage.
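The “$” placeholder in command is expanded from the forwarded **kwargs. A minimal sketch of that expansion mechanism (the helper name expand_command and the keyword formatting are illustrative assumptions, not the package’s actual implementation):

```python
def expand_command(command, **kwargs):
    """Replace the '$' placeholder with the forwarded keyword
    arguments, formatted as a Python call signature (sketch only)."""
    args = ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
    return command.replace("$", args)

cmd = 'python3 -c "import detecttrails as dt; dt.DetectTrails($).process()"'
print(expand_command(cmd, run=2888, camcol=1))
# python3 -c "import detecttrails as dt; dt.DetectTrails(run=2888, camcol=1).process()"
```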

Creates job#.dqs files from runlst. runlst is a list of lists in which each inner list contains all runs for one job. The length of the outer list is the number of jobs started. See class help.


Returns a list of all runs found in the runlist.par file.


Create a runlst from a list of runs or a Results instance. Receives a list of runs: [N1, N2, N3, N4, N5, …] and returns a runlst:

  (N1, N2 ... N(n_runs / n_jobs))  # idx = 0
  ...
  (N1, N2 ... N(n_runs / n_jobs))  # idx = n_jobs - 1

A runlst is a list of lists. The inner lists contain runs that will be executed in a single job. The length of the outer list matches the number of jobs that will be started, e.g.:

runlst = [
              (2888, 2889, 2890),
              (3001, 3002, 3003)
         ]
will start 2 jobs (job0.dqs, job1.dqs), where job0.dqs will call DetectTrails.process on 3 runs: 2888, 2889, 2890.

If a list of runs is (optionally) supplied, a run list will be produced from that list instead of from the runs attribute.
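The splitting described above can be sketched as follows (self-contained; chunk_runs is a hypothetical helper, not the package’s actual method):

```python
def chunk_runs(runs, n_jobs):
    """Split a flat list of runs into n_jobs roughly equal inner
    lists, one inner list per job (sketch of the runlst layout)."""
    size, rem = divmod(len(runs), n_jobs)
    runlst, start = [], 0
    for i in range(n_jobs):
        # spread any remainder over the first `rem` jobs
        stop = start + size + (1 if i < rem else 0)
        runlst.append(runs[start:stop])
        start = stop
    return runlst

print(chunk_runs([2888, 2889, 2890, 3001, 3002, 3003], 2))
# [[2888, 2889, 2890], [3001, 3002, 3003]]
```

With this layout, writing job0.dqs and job1.dqs over the two inner lists reproduces the two-job example shown above.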

lfd.createjobs.writer.get_node_with_files(job, run)[source]

Deprecated since version 1.0.

Reads the lst-lnk file to retrieve the nodes on which the FITS files of a run are stored and returns the node number. If an error occurs while reading the node number, the first node, “01”, is returned.

lfd.createjobs.writer.writeJob(job, verbose=True)[source]

Writes the job#.dqs files. Takes a Jobs instance and processes the “generic” template, replacing any/all keywords with values from the Jobs instance. For each entry in runlst it creates a new job#.dqs file, which contains the commands to execute detecttrails processing for every run in that entry.

  • job (lfd.createjobs.Job) – Job object from which job scripts are to be created.
  • verbose (bool) – deprecated to alleviate clutter
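The keyword replacement performed on the template can be sketched as plain string substitution (the placeholder names, the toy template, and the fill_template helper are assumptions for illustration; the real keywords live in the “generic” template file):

```python
def fill_template(template, **values):
    """Substitute each placeholder keyword in the template text with
    its value, as taken from a Jobs instance (illustrative sketch)."""
    for key, val in values.items():
        template = template.replace(key, str(val))
    return template

# toy stand-in for the "generic" template
generic = "#PBS -q QUEUE\n#PBS -l walltime=WALLCLOCK\nCOMMAND\n"
script = fill_template(generic,
                       QUEUE="standard",
                       WALLCLOCK="24:00:00",
                       COMMAND='python3 -c "..."')
print(script)
```

One such substituted script would be written out per inner list of runlst, yielding job0.dqs, job1.dqs, and so on.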