Re: SGE have a job consume all slots?
If you want every job (an OpenMP job, say) to consume all of a node's CPUs, then forcing each node to advertise a single slot would work.
But if you have a mix of serial and threaded workloads, then the correct way to ensure exclusive execution is still SGE's built-in exclusive-scheduling mechanism.
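For reference, the built-in mechanism referred to here is Grid Engine's exclusive host access consumable (available in SGE/OGS 6.2u3 and later). A minimal sketch of the setup, assuming a host alias of node001 and a job script my_threaded_job.sh (both placeholders):

```shell
# Add an EXCL-type boolean consumable named "exclusive" to the complex list.
# The column order is: name shortcut type relop requestable consumable default urgency.
qconf -sc > complexes.txt
echo 'exclusive excl BOOL EXCL YES YES 0 1000' >> complexes.txt
qconf -Mc complexes.txt

# Advertise the consumable on a host (node001 is a placeholder alias):
qconf -mattr exechost complex_values exclusive=true node001

# A job that requests it gets the whole host to itself; jobs that
# don't request it still share hosts normally:
qsub -l exclusive=true my_threaded_job.sh
```

The advantage over slots=1 is exactly what the paragraph above says: serial jobs that do not request exclusivity can still pack onto a host.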
From: Don MacMillen <macd_at_nimbic.com>
To: David Erickson <derickso_at_stanford.edu>
Sent: Thursday, February 23, 2012 8:43 PM
Subject: Re: [StarCluster] SGE have a job consume all slots?
Hmm, the docs don't seem to be all that enlightening. Here is another way.
init_node is called for every node in a StarCluster plugin's run method,
as well as in its on_add_node method. HTH.
    def init_node(self, node, master):
        # Set the number of slots to 1. We do this so that only one job
        # is submitted per machine, since each job will use all of the
        # machine's available threads in multi-threaded mode.
        cmd_strg = 'source /opt/sge6/default/common/settings.sh; ' \
                   'qconf -mattr exechost complex_values slots=1 %s' % node.alias
        self.logger.debug("Executing: |%s|" % cmd_strg)
        output = master.ssh.execute(cmd_strg)
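Once the plugin has run, the effect can be checked from the master node; a sketch, assuming a host alias of node001 (a placeholder):

```shell
# Each execution host should now advertise a single slot:
qconf -se node001 | grep complex_values
# expect a line of the form:  complex_values  slots=1

# Subsequent jobs then queue up one per host instead of one per core:
qsub -b y sleep 60
qstat -f
```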
On Thu, Feb 23, 2012 at 3:56 PM, David Erickson <firstname.lastname@example.org> wrote:
>I've been digging through SGE/OGS docs for the last hour or so trying to
>sort out the easiest way to enforce a one job per host restriction, does
>anyone have a suggestion on how to do this? My hosts are coming up with
>8 slots, so I tried launching with -l slots=8 but it complained about
>wanting me to use parallel environments which looks even more complicated..
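For completeness, the parallel-environment route that qsub's error message points at is less involved than it looks; a sketch, assuming a PE named smp (a placeholder) and StarCluster's default all.q queue:

```shell
# Create a PE whose slots are all allocated on one host
# (qconf -ap opens an editor; the key fields are):
#   pe_name            smp
#   slots              999
#   allocation_rule    $pe_slots
qconf -ap smp

# Attach the PE to the queue:
qconf -mattr queue pe_list smp all.q

# Request all 8 slots on a single host, so nothing else can land there:
qsub -pe smp 8 my_threaded_job.sh
```

With allocation_rule set to $pe_slots, all requested slots must come from the same host, which is what makes `-pe smp 8` on an 8-slot node equivalent to exclusive use.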
StarCluster mailing list
Received on Thu Feb 23 2012 - 21:43:50 EST