StarCluster - Mailing List Archive

Re: Issue creating a cluster of 30 nodes with starcluster

From: Justin Riley <no email>
Date: Wed, 09 Nov 2011 14:54:28 -0500

Hi Sumita,

> 2. I will try with spot instances to check the speed.
Spot instances will not be any faster; they're just cheaper, and you
can launch more of them by default without having to ask Amazon for
permission. In fact, waiting for spot requests to become 'active'
(i.e. for Amazon to decide that your request should be granted)
usually takes even longer than launching normal flat-rate instances.
Given that you've already requested an instance-limit increase, this
shouldn't be the bottleneck; however, you should still look into spot
instances to cut costs in general when using EC2.
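
If you do end up experimenting with spot instances, the 'start'
command accepts a bid price via the --bid (-b) option; something along
these lines (the cluster tag and bid price below are just examples):

    $ starcluster spothistory m1.large     # inspect recent spot prices
    $ starcluster start --bid 0.50 mycluster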

> 3. As per one of my query.
>
> Does starcluster wait for all the nodes to be up and then it
> starts configuring them all at one time. Is there any parameter in
> the config file or any options in the starcluster start command
> that says "configuration of the cluster and installing
> SGE/Configuring NFS to be a parallel operation. any node should
> not wait for the other nodes to be up for getting configured; that's
> if we post a job on that ready node it should start executing the
> job with the available no of nodes that are running and
> configured."
>
> If the above is not possible, is there any specific reason why,
> while starting a cluster, starcluster does the configuration of
> nodes only when all are running?

StarCluster waits for all nodes to be in a 'running' state and for SSH
to come up on all machines before configuring the cluster. This makes
it very straightforward to configure everything (/etc/hosts files,
NFS shares, hostnames, passwordless SSH, SGE, etc.) in one fell swoop.
Support for adding and configuring nodes as they come up has not been
implemented yet and would require refactoring the code and significant
testing. Of course, I'm happy to accept patches that introduce this
functionality, assuming they're well tested.
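
At a high level the current flow boils down to the following
(a simplified sketch only, not the actual StarCluster code; the Node
class and both helpers are stand-ins for the real routines):

    import time

    class Node(object):
        """Toy stand-in for a StarCluster node object."""
        def __init__(self, alias):
            self.alias = alias
            self.state = 'running'

        def ssh_is_up(self):
            return True

    def wait_for_cluster(nodes):
        # Block until *every* node is running and reachable over SSH
        while not all(n.state == 'running' and n.ssh_is_up()
                      for n in nodes):
            time.sleep(30)

    def configure_cluster(nodes):
        # One pass over the whole cluster: /etc/hosts, NFS shares,
        # hostnames, passwordless SSH, SGE, plugins, etc.
        pass

    def start(nodes):
        wait_for_cluster(nodes)    # the "wait" phase
        configure_cluster(nodes)   # the "configure" phase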

With that said, before embarking on this work the current code should
really be profiled over multiple cluster sizes in order to identify
the *real* bottlenecks. I agree that adding nodes as they come up
*seems* likely to be the optimal solution, but optimizing based on
speculation can be a huge waste of time. The current code already uses
workerpool[1] to perform the majority of the setup concurrently once
all nodes are up and running, which showed significant performance
improvements between the 0.91 and 0.92 releases. Although workerpool
uses Python threads, which are subject to the GIL, the Python
interpreter internally releases the GIL for blocking I/O
operations[2]. In StarCluster's case, the setup work is almost purely
blocking I/O over SSH, which means most operations *should* be
performed more or less in parallel.
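
For the curious, the pattern workerpool enables looks roughly like
this (a minimal sketch; the node aliases and the sleep are stand-ins
for the real SSH work):

    import time
    import workerpool

    nodes = ['node001', 'node002', 'node003']

    def configure_node(alias):
        # Stand-in for blocking SSH work; the interpreter releases
        # the GIL while each thread waits on I/O, so the nodes are
        # configured concurrently even though these are threads.
        time.sleep(1)
        print("configured %s" % alias)

    # One worker thread per node; map() blocks until all jobs finish
    pool = workerpool.WorkerPool(size=len(nodes))
    pool.map(configure_node, nodes)
    pool.shutdown()
    pool.wait()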

So how to further improve performance is a fair question, and I think
the best way to start is to systematically profile multiple cluster
sizes to see how the code scales with size and which operations take
up the majority of the setup time. An analysis of the average "wait
time" for a cluster versus the actual "configure time" once all of the
nodes are up would also be useful in determining whether there's a
worthwhile opportunity to mask instance start-up latency. I've split
these two metrics in the latest 0.92.1 release, so you'll see the
"wait" and "configure" times separately in the output of the 'start'
command, in addition to the total time (wait + configure).
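
If anyone wants to take a crack at the profiling, the standard
library's cProfile module is the easiest place to start; for example
(assuming 'mycluster' is a cluster template in your config):

    $ python -m cProfile -o start.prof `which starcluster` start mycluster
    $ python -c "import pstats; pstats.Stats('start.prof').sort_stats('cumulative').print_stats(20)"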

HTH,

~Justin

[1] http://pypi.python.org/pypi/workerpool
[2] http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock