I'm new to StarCluster, and it's not clear to me if this is the
appropriate forum to post "new dummy" questions. If not, please accept my
apologies.
First, let me thank you for the great effort that has gone into this
project. Wow. I've built my own MPI clusters from scratch in EC2, and boy
was that tedious. This is amazing. I also tried cfncluster, but so far
it appears a bit too restrictive in terms of building a custom AMI that I
can reuse over and over (though I might be misreading the docs there).
My problem at hand: try as I may, I keep failing to mount an EBS volume
on my cluster. I have a few theories I could explore via the inefficient
"trial and error" approach, but I suspect my issue is so fundamental that
someone will easily be able to point me in the right direction.
The instance (node001):
state: stopped (User initiated (2015-10-20 02:23:53 GMT))
tags: alias=node001, Name=node001
The volume (yes, same availability zone):
create_time: 2015-10-20 02:07:20
Relevant excerpts of ~/.starcluster/config:
AWS_REGION_NAME = us-west-2
AWS_REGION_HOST = ec2.us-west-2.amazonaws.com
# ami-80bedfb0 us-west-2 starcluster-base-ubuntu-13.04-x86_64-hvm (HVM-EBS)
NODE_IMAGE_ID = ami-80bedfb0
NODE_INSTANCE_TYPE = m3.large
VOLUMES = wrfprogs
VOLUME_ID = vol-f1942530
MOUNT_PATH = /wrfprogs
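In case the layout matters, here is how I believe the sections are
supposed to fit together, per my reading of the docs (the template name
"smallcluster" is just a stand-in for whatever the cluster template is
actually called; treat this sketch as my assumption, not gospel):

    [cluster smallcluster]
    NODE_IMAGE_ID = ami-80bedfb0
    NODE_INSTANCE_TYPE = m3.large
    # Refers to the [volume ...] section below by name
    VOLUMES = wrfprogs

    [volume wrfprogs]
    VOLUME_ID = vol-f1942530
    MOUNT_PATH = /wrfprogs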
Right now, here are some things I will try by trial and error, though I
don't think they will resolve the issue:
1) My AVAILABILITY_ZONE is commented out in the config; maybe I should
hard-code us-west-2a, but both the instance and the volume are already in
that zone.
2) Maybe I should manually create the mount point /wrfprogs on the master
and try again. Doubtful - the examples you provide don't suggest this is
necessary.
3) In the config file, list the volume BEFORE listing the cluster specs.
Again, doubtful it will make a difference.
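For point 1, my understanding from the docs (again, an assumption on my
part) is that pinning the zone would just mean uncommenting the option in
the cluster template, like so:

    [cluster smallcluster]
    # Force the cluster into the volume's zone
    AVAILABILITY_ZONE = us-west-2a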
Thank you for any insight.
Don Morton, Owner/Manager
Boreal Scientific Computing LLC
Fairbanks, Alaska USA
Received on Tue Oct 20 2015 - 11:10:46 EDT