StarCluster - Mailing List Archive

Re: SGE slow with 50k+ jobs

From: Jacob Barhak <no email>
Date: Sun, 30 Mar 2014 22:23:15 -0500

Thanks Chris,

This is helpful.

Your suggestion to switch to on-demand scheduling makes sense. If I set flush_finish_sec to 1 second I should get faster turnaround for my short jobs that take less than a second. I currently have 20 slots available, so this should allow processing about 1200 of these per minute. What I am afraid of is that the scheduler will get called 20 times - once for each such job. Since grid engine segfaulted in the last day under pressure, I am wary of stressing it to that level, so I am asking about the behavior before making this change on a running simulation. Does each completion event trigger another scheduling run? Or are all 20 events handled at once, a second after the first completion event is received?
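[For reference, a sketch of the on-demand settings under discussion. The parameter names come from SGE's sched_conf(5) man page; the values shown are illustrative, not a recommendation:]

```shell
# Show the current scheduler configuration
qconf -ssconf

# Edit it in $EDITOR; the lines relevant to on-demand scheduling
# look roughly like this:
#
#   schedule_interval    0:0:15   # fallback cyclic interval
#   flush_submit_sec     1        # dispatch cycle ~1s after a submit event
#   flush_finish_sec     1        # dispatch cycle ~1s after a job-end event
qconf -msconf
```

[As documented, the flush parameters set a minimum delay between event-driven scheduling runs, so completion events arriving within the same window are picked up by one dispatch cycle rather than one cycle per job.]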

Also, your suggestions do not address the issue I find most disturbing: deleting jobs takes forever when the machine is loaded with jobs. If there is trouble, it takes me about an hour just to clean the queue and restart everything from scratch. If there is an easier way to clear a queue of jobs it would help. I have a single queue; would deleting it and creating a new one be faster?
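[One possibly faster cleanup path, sketched from the qdel(1) and qmod(1) man pages; the queue name and username below are placeholders:]

```shell
# Keep new jobs from starting while cleaning up
# ("all.q" is a placeholder queue name)
qmod -d all.q

# Force-delete all of one user's jobs; -f skips waiting for each
# execution daemon to confirm, which is much faster on a loaded
# qmaster (jobs may be left unclean on the nodes)
qdel -f -u username

# Re-enable the queue when done
qmod -e all.q
```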

By the way, my queue type according to qmon is BIP. I am running 20 slots with 8 CPUs and arch is lx26amd64.

Can this queue type handle 50k+ jobs efficiently for submission and deletion? I think the information you provided covers dispatching queued jobs; the real issue is submission/deletion overhead. I would also like to hear from others who launch that many dependent jobs about their experiences.

Also, is there a way to tell grid engine to resubmit interrupted jobs under the same name and job number in case of failure? I have many dependencies, and if a machine breaks during a long run I want grid engine to relaunch the jobs it was working on in a way that maintains the dependencies defined at submission time. For now I have written a script to recover from interrupted runs - if grid engine has this capability built in, it would make things easier. And by the way, I need to submit many little jobs rather than job arrays partly because of these dependencies.
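[SGE does have a rerun facility: jobs submitted with `-r y` are requeued under the same job ID if their execution host fails, so holds declared with `-hold_jid` at submit time stay intact. A sketch, with placeholder script and job names:]

```shell
# Submit a long job as rerunnable: if the node dies mid-run,
# qmaster requeues it under the same job ID
qsub -r y -N stage1 stage1.sh

# Dependent job: the hold references the job name, so a rerun
# of stage1 does not break the dependency
qsub -r y -N stage2 -hold_jid stage1 stage2.sh
```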

I would appreciate learning about other people's experiences with 50k+ jobs.
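[Chris's suggestion below, batching very short jobs together, can be sketched independently of SGE. The loop just groups task IDs so each submitted job would run several of them back to back; the batch size and task list here are illustrative:]

```shell
# Group 10 short "tasks" into batches of 4; each "batch:" line
# would become the argument list of one submitted wrapper job,
# pushing per-job runtime well past the scheduler overhead.
BATCH=4
i=0
line=""
batches=""
for task in $(seq 1 10); do
  line="$line $task"
  i=$((i + 1))
  if [ "$i" -eq "$BATCH" ]; then
    # batch is full: record it and start a new one
    batches="${batches}
batch:$line"
    i=0
    line=""
  fi
done
# flush the final partial batch, if any
if [ -n "$line" ]; then
  batches="${batches}
batch:$line"
fi
echo "$batches"
```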


Sent from my iPhone

On Mar 30, 2014, at 6:35 AM, Chris Dagdigian <> wrote:

> The #1 suggestion is to change the default scheduling setup over to the
> settings for "on demand scheduling" - you can do that live on a running
> cluster I believe. Google can help you with the parameters but it
> basically involves switching from the looping/cyclical scheduler
> behavior to an event based method where SGE runs a scheduling/dispatch
> cycle each time a job enters or leaves the system. This is purpose built
> for the "many short jobs" use case.
> The #2 suggestion is to make sure you use BerkeleyDB-based spooling
> (this is one scenario where I'll switch from my preference for classic
> spooling) AND make sure that you are spooling to local disk on each node
> instead of spooling back to a shared filesystem. These settings cannot
> easily be changed on a live system so in starcluster land you might have
> to tweak the source code to come up with a different installation
> template that would put these settings in. (Note that I'm not sure what
> SC puts in by default so there is a chance that this is already set up
> the way you'd want ...)
> Other stuff:
> - SGE Array Jobs are way more efficient than single jobs. Instead of
> 70,000 jobs can you submit a single array job that has *70,000 tasks*
> instead? Or 7 jobs containing 10,000 tasks etc. ?
> - Is there any way to batch up or collect your short jobs into bigger
> collections so that the average job run time is longer? SGE is not the
> best at jobs that run for mere seconds or less - most people would batch
> those together to at least get a 60-90 second runtime ...
> -Chris
> Jacob Barhak wrote:
>> Hi to SGE experts,
>> This is less of a StarCluster specific issue. It is more of an SGE issue I encountered and was hoping someone here can help with.
>> My system runs very many small jobs - tens of thousands. When I need a rushed solution to meet a deadline I use StarCluster. However, if I have time, I run the simulations on a single 8-core machine with SGE installed on Ubuntu 12.04. This machine is new and fast, with an SSD drive and a fresh install, yet I am encountering issues.
>> 1. When I launch about 70k jobs submitting a single new job to the queue takes a while - about a second or so, compared to fractions of a second when the queue is empty.
>> 2. Deleting all the jobs from the queue using qdel -u username takes a long time. It reports about 24 deletions to the screen every few seconds - at this rate it will take hours to clear the entire queue. It is still deleting as I write this. Way too much time.
>> 3. The system was working fine for a few days, yet now has trouble with qmaster. It reports the following:
>> error: commlib error: got select error (connection refused)
>> unable to send message to qmaster using port 6444 on host 'localhost': got send error.
>> Also qmon reported cannot reach qmaster. I had to restart and suspend and disable the queue.
>> Note that qstat -j currently reports:
>> All queues dropped because of overload or full
>> Note that I configured the schedule interval to 2 seconds since many of my jobs are so fast that even 2 seconds is very inefficient for them yet some are longer and memory consuming so I cannot allow more slots to launch too many jobs.
>> Am I overloading the system with too many jobs? What is the limit on a single strong machine? How will this scale when I run this on StarCluster?
>> Any advice on how to efficiently handle many jobs, some of which are very short, will be appreciated. And I hope this interests the audience.
>> Jacob
>> Sent from my iPhone
>> _______________________________________________
>> StarCluster mailing list
Received on Sun Mar 30 2014 - 23:23:25 EDT
This archive was generated by hypermail 2.3.0.

