Re: Volumes not being destroyed on terminate
Here's a summary of what was going on, in case it helps anyone else:
I had dropped the -o (--create-only) option when starting my imagehost.
So StarCluster ran through the full cluster setup, mounting my custom EBS
volume and thereby occupying a /dev/sd[f-p] device ID that got baked into
the new EBS image I subsequently created from the updated imagehost
instance. (Even though I'd deleted the volumes associated with those block
devices, each launch from the image was creating an equal-sized empty
volume for every recorded device.) So over several AMI-building
iterations, I'd accumulated what were effectively several unneeded
"phantom" /dev/sd* devices.
So, if you are attaching custom EBS volumes during your normal cluster
operations, make sure you use the -o option on your imagehost startups.
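For reference, the startup I should have been using looks roughly like
this (the cluster name "imagehost" and AMI name are just examples; check
your StarCluster version's help for the exact ebsimage arguments):

    $ starcluster start -o imagehost
    $ starcluster ebsimage <instance-id> my-new-ami

With -o/--create-only, StarCluster launches the instances but skips the
full cluster configuration, so no custom EBS volume gets attached to (and
recorded in the block device mapping of) the imagehost before imaging.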
If you find you have this problem in your existing AMIs, be sure you
unmount the filesystem and then detach the volume (I did the latter from
the AWS console) before you generate a new EBS image from the imagehost.
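On an imagehost that already has the volume attached, the pre-imaging
cleanup would look something like this (device name and cluster name are
examples; the detach can equally be done from the AWS console, as I did):

    $ starcluster sshmaster imagehost 'umount /dev/sdf'
    (detach the volume via the AWS console, or ec2-detach-volume
     from the EC2 API tools)
    $ starcluster ebsimage <instance-id> my-new-ami

Unmounting first matters: detaching a volume that is still mounted risks
filesystem corruption, and imaging while it's attached records the device
in the AMI's block device mapping, which is what caused my phantom volumes.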
On Fri, Mar 1, 2013 at 12:49 PM, Lyn Gerner <schedulerqueen_at_gmail.com> wrote:
> Hi All,
> I am using StarCluster in the process of iteratively building and testing
> an AMI for a particular app. While I am attaching a single, customized EBS
> volume to each cluster I start, I am seeing many unrelated volumes being
> attached and detached as I start and terminate each cluster. Further, I'm
> seeing an increasingly large number of EBS volumes in my AWS acct.
> If I understand correctly, the command "starcluster terminate mycluster"
> should also be deleting any EBS volumes that were created as a result of
> mycluster's instantiation. Is that correct, and if so, any ideas why these
> "extra" volumes are not being deleted/destroyed during termination?
> Thanks for any insights or suggestions.
Received on Fri Mar 22 2013 - 17:23:54 EDT