StarCluster - Mailing List Archive

Re: SGE have a job consume all slots?

From: Rayson Ho <no email>
Date: Thu, 23 Feb 2012 18:43:48 -0800 (PST)

If you want every job (an OpenMP job, say) to consume all the CPUs, then forcing every node to advertise a single slot will work. But if you have a mix of serial and threaded workloads, the correct way to ensure exclusive execution is still SGE's built-in mechanism.

Rayson

=================================
Open Grid Scheduler / Grid Engine
Scalable Grid Engine Support Program

________________________________
From: Don MacMillen <>
To: David Erickson <>
Cc:
Sent: Thursday, February 23, 2012 8:43 PM
Subject: Re: [StarCluster] SGE have a job consume all slots?

Hmm, the docs don't seem to be all that enlightening. Here is another way: init_node is called for every node in the run method of a StarCluster plugin, as well as in the plugin's on_add_node method.

HTH.

Regards,

Don

    def init_node(self, node, master):
        ...
        # Set the number of slots to 1.  We do this so that only one job
        # is submitted per machine, since we will use all of its available
        # threads in multi-threaded mode.
        cmd_strg = 'source /opt/sge6/default/common/;' \
                   'qconf -mattr exechost complex_values slots=1 %s' % node.alias
        self.logger.debug("Executing: |%s|" % cmd_strg)
        output = master.ssh.execute(cmd_strg)
        ...

On Thu, Feb 23, 2012 at 3:56 PM, David Erickson <> wrote:
> Hi-
> I've been digging through the SGE/OGS docs for the last hour or so trying to
> sort out the easiest way to enforce a one-job-per-host restriction. Does
> anyone have a suggestion on how to do this? My hosts come up with 8 slots,
> so I tried launching with -l slots=8, but it complained, wanting me to use
> parallel environments, which looks even more complicated.
>
> Thanks,
> David
> _______________________________________________
> StarCluster mailing list
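The "built-in mechanism" Rayson refers to is most likely Grid Engine's exclusive host scheduling, added around Grid Engine 6.2u3: a consumable boolean complex of type EXCL that, when requested by a job, prevents any other job from being scheduled on the same host. A rough sketch, assuming a host named node001 (hypothetical) and that your installation supports the EXCL complex type; verify against your own setup before running:

```shell
# 1. Define an "exclusive" consumable in the complex configuration.
#    `qconf -mc` opens the complex list in an editor; add a line like:
#      exclusive   excl   BOOL   EXCL   YES   YES   0   1000

# 2. Enable the consumable on each execution host (node001 is a placeholder):
qconf -mattr exechost complex_values exclusive=true node001

# 3. Request exclusive access at submit time; only jobs that ask for it
#    are scheduled alone, so serial jobs can still share hosts:
qsub -l exclusive=true job.sh
```

Unlike the slots=1 workaround, this leaves the full slot count intact, so a mixed workload of serial and whole-node jobs schedules correctly on the same cluster.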
Received on Thu Feb 23 2012 - 21:43:50 EST
This archive was generated by hypermail 2.3.0.
