StarCluster - Mailing List Archive

Re: Unable to mount EBS volume to instance when starting

From: Don Morton <no email>
Date: Tue, 20 Oct 2015 15:45:56 +0000

Hello, I believe I have found the error of my ways. It seems this initial
mounting needs to happen when a new cluster is created - before, I was stopping
the cluster, changing the config, and then starting it again, and the mount was
never being made.
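
For anyone else who runs into this, what worked for me was to terminate the
stopped cluster and start a fresh one after editing the config, rather than
stop/edit/start.  Roughly (my cluster tag is wrfcluster, matching the
sc-wrfcluster group below; adjust names as needed):

    starcluster terminate wrfcluster
    # edit ~/.starcluster/config: add [volume wrfprogs] and VOLUMES = wrfprogs
    starcluster start -c smallcluster wrfcluster
    starcluster sshmaster wrfcluster
    df -h /wrfprogs    # on the master, the volume should now appear here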

Best,

Don

---
Don Morton, Owner/Manager
Boreal Scientific Computing LLC
Fairbanks, Alaska USA
http://www.borealscicomp.com/
http://www.borealscicomp.com/Miscellaneous/MortonBio/
On Tue, Oct 20, 2015 at 3:10 PM, Don Morton <don.morton_at_borealscicomp.com>
wrote:
> Hello,
>
> I'm new to StarCluster, and it's not clear to me if this is the
> appropriate forum to post "new dummy" questions.  If not, please accept my
> apologies.
>
> First, let me thank you for the great effort that has gone into this
> project.  Wow.  I've built my own MPI clusters from scratch in EC2, and boy
> was that tedious.  This is amazing.  I also tried cfncluster, but so far
> that appears a bit too restrictive in terms of building a custom AMI that I
> can use over and over again (but I might be misreading the docs there).
>
> My problem at hand: try as I may, I keep failing to mount an EBS
> volume on my cluster.  I have a few theories I can explore via the
> inefficient "trial and error" approach, but I feel like my issue is
> probably so fundamental that someone will easily be able to point me in the
> right direction.
>
> The instance:
>
> id: i-b6fdea70
> dns_name: N/A
> private_dns_name: ip-172-31-25-191.us-west-2.compute.internal
> state: stopped (User initiated (2015-10-20 02:23:53 GMT))
> public_ip: N/A
> private_ip: 172.31.25.191
> vpc: vpc-7f99af17
> subnet: subnet-7199af19
> zone: us-west-2a
> ami: ami-80bedfb0
> virtualization: hvm
> type: m3.large
> groups: @sc-wrfcluster
> keypair: starclusterkey
> uptime: N/A
> tags: alias=node001, Name=node001
>
>
> The volume (yes, same availability zone):
>
> volume_id: vol-f1942530
> size: 20GB
> status: available
> availability_zone: us-west-2a
> create_time: 2015-10-20 02:07:20
> tags: Name=WRFClusterPrograms
>
>
> Relevant excerpts of ~/.starcluster/config:
>
> AWS_REGION_NAME = us-west-2
> AWS_REGION_HOST = ec2.us-west-2.amazonaws.com
>
> [cluster smallcluster]
> # ami-80bedfb0 us-west-2 starcluster-base-ubuntu-13.04-x86_64-hvm (HVM-EBS)
> NODE_IMAGE_ID = ami-80bedfb0
> NODE_INSTANCE_TYPE = m3.large
> VOLUMES = wrfprogs
>
> [volume wrfprogs]
> VOLUME_ID = vol-f1942530
> MOUNT_PATH = /wrfprogs
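>
> (For what it's worth, my check for whether the mount happened has simply been
> to log into the master and look for the mount point, roughly:
>
> starcluster sshmaster wrfcluster
> df -h /wrfprogs
>
> and so far /wrfprogs never shows up as a mounted filesystem.)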
>
>
> ---------------------------------------------------------------------
>
> Right now, some things I will try by trial and error, though I don't "think"
> they will resolve the issue:
>
> 1) My AVAILABILITY_ZONE is commented out in the config; maybe I should hard
> code it to us-west-2a, though both the instance and the volume are already in
> that zone (the exact lines I'd add are sketched after this list)
>
> 2) Maybe I should manually create the mount point /wrfprogs on my master
> node and try again.  Doubtful - the examples you provide don't suggest this
> is necessary.
>
> 3) In the config file, list the volume BEFORE listing the cluster specs.
> Again, doubtful it will make a difference.
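>
> For item 1, the change I have in mind is just adding the zone to the cluster
> template section, something along these lines (assuming that is the right
> place for it):
>
> [cluster smallcluster]
> AVAILABILITY_ZONE = us-west-2a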
>
> Thank you for any insight.
>
> Best,
>
> Don
>
>
> ---
> Don Morton, Owner/Manager
> Boreal Scientific Computing LLC
> Fairbanks, Alaska USA
> http://www.borealscicomp.com/
> http://www.borealscicomp.com/Miscellaneous/MortonBio/
>
Received on Tue Oct 20 2015 - 11:45:58 EDT
