Re: SGE have a job consume all slots?
On 02/23/2012 05:43 PM, Don MacMillen wrote:
> Hmm, the docs don't seem to be all that enlightening. Here is another way.
> init_node is called for every node in the run method of a starcluster plugin
> as well as in the on_add_node method of the plugin. HTH.
> def init_node(self, node, master):
>     # Set the number of slots to 1 so that only one job is submitted
>     # per machine, since each job will use all of the machine's
>     # available threads in multi-threaded mode.
>     cmd_strg = 'source /opt/sge6/default/common/settings.sh;' \
>         'qconf -mattr exechost complex_values slots=1 %s' % node.alias
>     self.logger.debug("Executing: |%s|" % cmd_strg)
>     output = master.ssh.execute(cmd_strg)
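For context, the snippet above can be fleshed out into a standalone, StarCluster-style plugin. This is a hedged sketch, not Don's actual plugin: the class name `SingleJobPerHost`, the `slots_cmd` helper, and the stubbed base class are assumptions added for illustration; the `qconf` invocation itself is taken verbatim from the snippet.

```python
# A minimal sketch of the approach above as a StarCluster-style plugin.
# In a real plugin this class would subclass
# starcluster.clustersetup.ClusterSetup; here it is a plain class so the
# command-building logic stands on its own.

SGE_SETTINGS = '/opt/sge6/default/common/settings.sh'

def slots_cmd(alias, slots=1):
    """Build the qconf command that caps an exec host at `slots` slots."""
    return ('source %s;' % SGE_SETTINGS +
            'qconf -mattr exechost complex_values slots=%d %s'
            % (slots, alias))

class SingleJobPerHost(object):
    """Force one job per node by advertising a single SGE slot per host."""

    def run(self, nodes, master, user, user_shell, volumes):
        # Called once at cluster start: cap every node.
        for node in nodes:
            self.init_node(node, master)

    def on_add_node(self, node, nodes, master, user, user_shell, volumes):
        # Called when a node is added later: cap it as well.
        self.init_node(node, master)

    def init_node(self, node, master):
        master.ssh.execute(slots_cmd(node.alias))
```

With `slots=1` advertised per exec host, the scheduler can place at most one job on each machine, regardless of how many cores it has.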
> On Thu, Feb 23, 2012 at 3:56 PM, David Erickson <derickso_at_stanford.edu> wrote:
> I've been digging through the SGE/OGS docs for the last hour or so trying
> to sort out the easiest way to enforce a one-job-per-host restriction.
> Does anyone have a suggestion on how to do this? My hosts are coming up
> with 8 slots, so I tried launching with -l slots=8, but it complained
> about wanting me to use parallel environments, which looks even more
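For reference, the parallel-environment route that SGE hints at when you request `-l slots=8` can be sketched roughly as follows. This is an assumption-laden sketch, not from the thread: the PE name `onehost`, the queue name `all.q`, the slot count, and the PE template fields are all illustrative.

```shell
# Hypothetical sketch of the parallel-environment alternative: define a
# PE whose allocation rule ($pe_slots) packs all of a job's slots onto a
# single host, attach it to the queue, then request a whole node's worth
# of slots per job. Names "onehost" and "all.q" are assumptions.

cat > onehost.pe <<'EOF'
pe_name            onehost
slots              9999
allocation_rule    $pe_slots
control_slaves     FALSE
job_is_first_task  TRUE
EOF

qconf -Ap onehost.pe                        # add the PE from the file
qconf -aattr queue pe_list onehost all.q    # attach the PE to the queue

# Request all 8 slots of a host, so one job fills an entire node:
qsub -pe onehost 8 job.sh
```

Either approach enforces one job per host; the slots=1 trick above is simpler, while the PE route keeps the true slot count visible to the scheduler.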
> StarCluster mailing list
> StarCluster_at_mit.edu
Received on Thu Feb 23 2012 - 21:05:40 EST