Did you specify the memory requirement in your job script or on the
command line, and what parameters did you use exactly?

From a quick search, I believe the following will solve the problem,
although I haven't tested it myself:

$ qsub -l mem_free=MEM_NEEDED,h_vmem=MEM_MAX yourjob.sh

Here, MEM_NEEDED and MEM_MAX are the lower and upper bounds for your
job's memory requirements: mem_free tells the scheduler to place the
job only on a node reporting at least that much free memory, while
h_vmem sets a hard limit above which SGE terminates the job.
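As a sketch (untested, and the 4G/6G values, job name, and program path
below are placeholders I made up, not values from this thread), the same
resource requests can be embedded in the job script itself so that every
submission carries them without extra command-line flags:

```shell
#!/bin/sh
# Hypothetical SGE job script -- resource values are example placeholders.
#$ -N example_job
#$ -cwd
#$ -l mem_free=4G   # schedule only on nodes reporting >= 4G free memory
#$ -l h_vmem=6G     # hard limit: SGE kills the job if it exceeds 6G vmem

./your_program
```

With these #$ directives in place, a plain "qsub yourjob.sh" should be
equivalent to passing the -l options on the command line.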
On 7/22/64 2:59 PM, Amirhossein Kiani wrote:
> Dear StarCluster users,
>
> I'm using StarCluster to set up an SGE cluster, and when I ran my job
> list, it submitted too many jobs to my instance even though I had
> specified the memory usage for each job, and my instance started
> running out of memory.
>
> I wonder if anyone knows how I could tell SGE the maximum memory to
> consider when submitting jobs to each node, so that it doesn't run
> jobs on a node that doesn't have enough memory available.
>
> I'm using the Cluster GPU Quadruple Extra Large instances.
>
> Many thanks,
> Amirhossein Kiani
Received on Tue Nov 08 2011 - 08:47:43 EST