An "Alpha" candidate of the Load Balancer is checked into my branch.
You can see how my test is going:
That site displays continually refreshed, real-time graphs of the test
that is currently running.
If you would like to try it out, you can get the latest code from my repo:
** This is an ALPHA release. There are a few known issues, and your
testing will help me find more and fix them promptly. I haven't
written the docs yet, but I have commented the code heavily.
1. add_node and remove_node will infrequently encounter problems, such
as SSHExceptions. These are ignored, and the operation is retried
during the next loop.
2. When a node, say Node002, is deleted while Node003 and Node004 are
still working, the balancer should add Node005 but will erroneously add
a NEW Node003. This causes some problems. In the meantime, I'd advise
you to submit all of your jobs at the beginning, if possible, and let
the cluster scale up to handle the load, then scale down to 1 node
(the master) to save you money when the queue is empty. If the cluster
size goes from 1 -> 2 -> 3 -> 2 -> 3 -> 4 -> 5, I anticipate issues.
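To illustrate known issue 2, here is a guess at the kind of size-based
naming logic that produces the collision. The function names are
hypothetical, and the exact colliding index may differ from the Node003
case reported above; this is a sketch, not the balancer's actual code:

```python
# A guess at how size-based node naming collides after a gap appears.
# next_alias_buggy / next_alias_fixed are hypothetical helper names.

def next_alias_buggy(workers):
    """Name the next node from the current cluster size (breaks after gaps)."""
    return "node%03d" % (len(workers) + 1)

def next_alias_fixed(workers):
    """Name the next node from the highest existing index instead."""
    highest = max(int(alias[4:]) for alias in workers)
    return "node%03d" % (highest + 1)

workers = ["node001", "node003", "node004"]   # node002 was removed
print(next_alias_buggy(workers))  # node004 -- collides with a live node
print(next_alias_fixed(workers))  # node005 -- safe
```

Counting from the highest surviving index rather than the cluster size
avoids reusing a name that is still attached to a running node.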
HOW TO USE
1. Get my code from the github repository above. You won't need to
change your config file. All of the parameters (for the time being)
are in starcluster/balancer/sge/__init__.py : class SGELoadBalancer.
Comments and explanations are there. I hope the default parameters
will suffice. Set max_nodes before proceeding.
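For reference, the knobs in that class look roughly like this. The
attribute names and defaults below are illustrative guesses only; check
class SGELoadBalancer in starcluster/balancer/sge/__init__.py for the
real names and the comments explaining each one:

```python
# Illustrative guesses only -- the real attribute names and defaults live
# in starcluster/balancer/sge/__init__.py (class SGELoadBalancer).

class SGELoadBalancer:
    polling_interval = 60   # seconds between checks of the SGE queue
    max_nodes = 5           # hard cap on cluster size; set this first!
    wait_time = 900         # seconds the oldest job may wait before scaling up
    kill_after = 45         # idle minutes before a node is a kill candidate
```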
2. Start your cluster in the usual way with the number of nodes you
usually use; remember your cluster_tag.
3. Launch the Load Balancer:
"starcluster balancer cluster_tag" (no visualizer, recommended)
"starcluster balancer -p cluster_tag" (visualizer will plot graphs,
not recommended, needs matplotlib and numpy installed on the
( Look at starcluster/balancers/sge/visualizer.py and configure for
your environment. cute feature but not ready for prime time)
WHAT IT WILL DO:
1. Examine the SGE queue, host list, and qacct output. Poll every 60
seconds.
2. If there are lots of jobs, the balancer will check the following metrics:
- How long has the oldest job been waiting? If more than 15 minutes, then:
- Are there enough queued jobs to fill all the slots on a new machine?
- Has the cluster stabilized since the last change? If so, start a new instance.
- What has the throughput been on the cluster so far? This is just for
observational purposes; I need to see whether it is a useful metric.
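The scale-up checks can be sketched like this. The function and
parameter names are hypothetical; the 15-minute threshold comes from
the description above, while the 5-minute stabilization window is an
assumed placeholder, not the balancer's real default:

```python
# A sketch of the scale-up decision; names and the stabilization window
# are assumptions, not the balancer's actual implementation.

def should_add_node(oldest_wait_secs, queued_jobs, slots_per_host,
                    secs_since_last_change, stabilization_secs=300):
    """Return True when every scale-up condition holds at once."""
    waited_long_enough = oldest_wait_secs > 15 * 60   # oldest job > 15 min
    enough_work = queued_jobs >= slots_per_host       # can fill a new machine
    stabilized = secs_since_last_change > stabilization_secs
    return waited_long_enough and enough_work and stabilized

# Jobs waiting 20 min, 12 queued jobs, 8 slots per host, quiet for 10 min:
print(should_add_node(20 * 60, 12, 8, 10 * 60))  # True
```

Requiring all three conditions keeps the balancer from launching an
instance for a queue that is about to drain on its own.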
3. If there are no jobs in the queue:
- Are any of the machines idle?
- If so, have any of them been up for > 45 minutes? (You've already
paid for the hour, so you might as well use it.)
- If so, kill any nodes that are idle and have been up for > 45 minutes.
- This can scale the cluster down to one node. This alone should save you $$$$.
- The master node will NEVER be killed.
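A minimal sketch of that scale-down rule, assuming a hypothetical
nodes_to_kill helper and tuple layout (not the balancer's real API):

```python
# Sketch of the scale-down rule: kill idle, non-master nodes that have
# been up for more than 45 minutes. Hypothetical helper, not the real API.

def nodes_to_kill(nodes, idle_threshold_min=45):
    """nodes: list of (alias, uptime_minutes, is_idle) tuples.

    Return aliases of idle, non-master nodes whose uptime exceeds
    idle_threshold_min (you've paid for the hour anyway).
    """
    doomed = []
    for alias, uptime_min, is_idle in nodes:
        if alias == "master":
            continue  # the master node is NEVER killed
        if is_idle and uptime_min > idle_threshold_min:
            doomed.append(alias)
    return doomed

cluster = [("master", 120, True), ("node001", 50, True),
           ("node002", 30, True), ("node003", 50, False)]
print(nodes_to_kill(cluster))  # ['node001']
```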
WHAT IT WILL NOT DO
- Kill the master.
- Consider the load on the cluster. To mitigate the load on any one
machine, you'd have to migrate already-started tasks between machines.
Though SGE may support task migration, this is a complex problem and
we haven't looked at it.
So, give it a shot if you'd like. I will answer questions till 6pm
tonight, then again from Sunday afternoon. I'm sure there will be some
questions and perhaps some bug reports. Let me know what you think!
A big thanks to Justin for helping me get this project together.
Received on Fri Jul 23 2010 - 14:16:59 EDT