StarCluster - Mailing List Archive

Re: issue with mounting an EBS volume

From: Justin Riley <no email>
Date: Wed, 20 Jul 2011 14:05:15 -0400

Hi Sebastian,

This is either an issue with HVM instances and EBS volumes or an issue
with the AMI itself. Have you tested whether this happens with the
CentOS-based StarCluster HVM AMI? If it's the same error, I'll have to
put a special case in StarCluster to look for alternate devices (i.e.
/dev/xvdz) when attaching EBS volumes. I've created a bug to keep
track of this:

https://github.com/jtriley/StarCluster/issues/34
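
On HVM kernels like Natty's, the Xen block driver exposes a volume
attached as /dev/sdz at /dev/xvdz instead, so the fix is essentially a
name translation plus a fallback check. Roughly something like this (a
minimal sketch only, not the actual patch; the helper name and the
retry loop are illustrative):

    import os
    import time

    def find_volume_device(device, max_wait=30):
        """Return the device node a volume actually appeared at, or None.

        Falls back to the Xen-translated name (/dev/sdz -> /dev/xvdz)
        if the requested device never shows up.
        """
        alternate = device.replace('/dev/sd', '/dev/xvd')
        for _ in range(max_wait):
            for candidate in (device, alternate):
                if os.path.exists(candidate):
                    return candidate
            time.sleep(1)  # give the attachment a moment to register
        return None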

I'll try to patch this before the next release.
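
In the meantime, a possible manual workaround (untested, just a
sketch) is to point the name StarCluster expects at the device node
the kernel actually created, as root on the master before the volume
is mounted:

    import os

    # Untested workaround sketch: make /dev/sdz point at the node the
    # Xen driver actually created so the mount code can find it.
    if not os.path.exists('/dev/sdz') and os.path.exists('/dev/xvdz'):
        os.symlink('/dev/xvdz', '/dev/sdz')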

~Justin

On Thu, Jul 14, 2011 at 12:28 PM, sebastian rooks <srooks10_at_gmail.com> wrote:
> Hi Justin,
>
> First of all, thank you for StarCluster.
>
> I am trying to set up a custom HVM AMI based on Ubuntu Natty. I
> managed to get to the point where I have a cluster running with SGE
> and OpenMPI, but I have run into an issue with mounting an EBS volume
> before sharing it over NFS.
> The cluster definition is:
>
>  [cluster computecluster]
>  # change this to the name of one of the keypair sections defined above
>  KEYNAME = starcluster_1
>  # number of ec2 instances to launch
>  CLUSTER_SIZE = 2
>  # create the following user on the cluster
>  CLUSTER_USER = sgeadmin
>  # optionally specify shell (defaults to bash)
>  CLUSTER_SHELL = bash
>  # AMI for cluster nodes.
>  NODE_IMAGE_ID = ami-af11d5c6
>  # instance type for all cluster nodes
>  NODE_INSTANCE_TYPE = cc1.4xlarge
>  # list of volumes to attach to the master node (OPTIONAL)
>  # these volumes, if any, will be NFS shared to the worker nodes
>  VOLUMES = computehome
>
>  [volume computehome]
>  VOLUME_ID = vol-e9a69582
>  MOUNT_PATH = /home
>
> When launching the cluster, an error message is printed:
>
>  ...
>  >>> Setting up the cluster...
>  >>> Attaching volume vol-e9a69582 to master node on /dev/sdz ...
>  >>> Configuring hostnames...
>  2/2 |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%
>  *** WARNING - Cannot find device /dev/sdz for volume {'device': '/dev/sdz', '__name__': 'volume computehome', 'partition': None, 'mount_path': '/home', 'volume_id': 'vol-e9a69582'}
>  *** WARNING - Not mounting vol-e9a69582 on /home
>  *** WARNING - This usually means there was a problem attaching the EBS volume to the master node
>  >>> Creating cluster user: sgeadmin (uid: 1001, gid: 1001)
>  2/2 |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 100%
>  ...
>
> And indeed, the volume appears not at /dev/sdz but at /dev/xvdz:
>  root@master:~# fdisk -l
>
>  Disk /dev/sda: 8589 MB, 8589934592 bytes
>  255 heads, 63 sectors/track, 1044 cylinders
>  Units = cylinders of 16065 * 512 = 8225280 bytes
>  Sector size (logical/physical): 512 bytes / 512 bytes
>  I/O size (minimum/optimal): 512 bytes / 512 bytes
>  Disk identifier: 0x00000000
>
>     Device Boot      Start         End      Blocks   Id  System
>  /dev/sda1   *           2        1044     8377897+  83  Linux
>
>  Disk /dev/xvdz: 21.5 GB, 21474836480 bytes
>  255 heads, 63 sectors/track, 2610 cylinders
>  Units = cylinders of 16065 * 512 = 8225280 bytes
>  Sector size (logical/physical): 512 bytes / 512 bytes
>  I/O size (minimum/optimal): 512 bytes / 512 bytes
>  Disk identifier: 0x00000000
>
>      Device Boot      Start         End      Blocks   Id  System
>  /dev/xvdz1               1        2610    20964793+  83  Linux
>
> But when I check in the AWS Management Console, it says the volume
> is attached as /dev/sdz.
>
> What am I doing wrong?
>
> Regards,
>
>  Seb
> _______________________________________________
> StarCluster mailing list
> StarCluster_at_mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster
>
Received on Wed Jul 20 2011 - 14:05:16 EDT