StarCluster - Mailing List Archive

Re: Plugin for mounting disks properly and easily

From: Ed Gray <no email>
Date: Tue, 27 May 2014 11:15:59 -0400

Hi Dave,

We use crossmnt: we simply modified the code where the NFS mount options are hard-coded so that crossmnt is included. That part works well.
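
For reference, the change just adds crossmnt to the export options StarCluster hard-codes; the resulting /etc/exports entry looks roughly like this (option string from memory, so treat it as a sketch):

/home node001(async,no_root_squash,no_subtree_check,rw,crossmnt)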

We run our master/head node on a reserved instance, which stays up. You could avoid that part and start up/shut down the master node instead.

Regarding spot instances, we encounter the same issues as you, and our solution is to write data to the cross-mounted volumes. If a spot instance goes down due to price, SGE takes care of the rest and requeues the jobs; then we start more nodes. This is fine for an "embarrassingly parallel" workload with tens of thousands of short-running jobs, less good for environments with longer-running jobs.
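
One caveat from our setup: SGE only requeues jobs that were marked rerunnable, so we submit everything with the rerun flag:

qsub -r y job.sh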

Anyhow, our setup works very well for what we do (bioinformatics), although I can see that what you suggest would be even better.

Ed

Date: Tue, 27 May 2014 10:46:00 -0400
From: dave31415_at_gmail.com
To: StarCluster_at_mit.edu
Subject: [StarCluster] Plugin for mounting disks properly and easily


Hi,
I just started using StarCluster. It's fantastic.


One issue I am seeing, though, is that there is some difficulty getting disks up and mounted in a reasonable place. I was wondering if anyone has written a plugin for handling this. I have some workarounds in the form of a quick-and-dirty plugin, but I don't know how general-purpose it would be.


What I would like is the following:


I want to use EBS volumes for everything except scratch space, so that I can shut the cluster down (to save money) and bring it back up just as if it were non-virtual hardware, even with spot instances. I want it to survive having my spot instances seized due to a spot-price surge, with no loss of persisted data. I believe this is possible. Am I wrong?


I would like /home to be an EBS volume that is NFS cross-mounted to the nodes.


I'd like a "restart" command to find the right EBS volumes and mount them in the usual places. In the case of a fresh "start", I'd like it to create new EBS volumes. I believe the current EBS volume attachment functionality (in the config file) assumes that the volumes already exist and that you know their IDs.
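
For the find-or-create behavior, I imagine something along these lines with boto (a rough sketch; the tag name, zone, instance id, and device are placeholders, not anything StarCluster defines):

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# "restart" case: look for a volume tagged on a previous run
vols = conn.get_all_volumes(filters={'tag:Name': 'mycluster-home'})
if vols:
    vol = vols[0]
else:
    # fresh "start": create and tag a new 100 GB volume
    vol = conn.create_volume(100, 'us-east-1a')
    vol.add_tag('Name', 'mycluster-home')

# attach to the master node
conn.attach_volume(vol.id, 'i-12345678', '/dev/sdz')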


I believe that when you change instance types from, say, r3.2xlarge to m3.xlarge, the storage devices (/dev/xvd* etc.) change names, so this requires some knowledge of what the devices will be named on each instance type.
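
A small helper could paper over the naming difference; this is just a sketch of the idea:

import os

def resolve_device(dev):
    # A volume attached as e.g. /dev/sdf may show up as /dev/xvdf,
    # depending on the kernel and instance type; try both spellings.
    for candidate in (dev, dev.replace('/dev/sd', '/dev/xvd')):
        if os.path.exists(candidate):
            return candidate
    raise RuntimeError('no device node found for %s' % dev)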


Also, I want to avoid over-mounting a device on a directory that already has files and subdirectories inside.
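
That check is cheap to do before mounting; a hypothetical guard might look like:

import os

def safe_to_mount(path):
    # refuse to over-mount a directory that already has contents
    # or is already a mount point
    if os.path.ismount(path):
        return False
    return not os.path.exists(path) or not os.listdir(path)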


I can probably do all of this with a DIY solution, but I figured everyone must be dealing with similar issues. Is anyone working on a general plugin for this?


I can imagine a simple plugin (configured much like an fstab file) that looks like this in the config file:
[volumesetup]
/home 100GB cross
/data 100GB cross
/hdfs1 200GB separate
/hdfs2 200GB separate


with the remaining local storage being automatically assigned to /scratch1, /scratch2, etc., depending on the number of partitions.
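
As a starting point, I imagine the plugin using StarCluster's normal ClusterSetup hook; here is an untested skeleton (the option format and class name are invented for illustration):

from starcluster.clustersetup import ClusterSetup

class VolumeSetup(ClusterSetup):
    def __init__(self, layout=''):
        # layout e.g. "/home:100:cross /data:100:cross" (invented format)
        self.layout = layout

    def run(self, nodes, master, user, user_shell, volumes):
        for spec in self.layout.split():
            mountpoint, size_gb, mode = spec.split(':')
            # here: find-or-create the EBS volume (as in the boto sketch
            # above), attach it to the master, mkfs if new, then mount it
            # and NFS cross-mount it to the nodes if mode == 'cross'
            master.ssh.execute('mkdir -p %s' % mountpoint)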


Anyone have any ideas or suggestions?
Dave

