StarCluster - Mailing List Archive

createvolume works / mount fails

From: Lyn Gerner <no email>
Date: Tue, 12 Feb 2013 13:50:19 -0800

Hi All,

I've been receiving an error, consistently, from multiple attempts to boot
a cluster that references an EBS volume that I've created w/"starcluster
createvolume".

Here is the output from the most recent createvolume; looks like everything
goes fine:

.starcluster mary$ alias sc=starcluster
.starcluster mary$ sc createvolume --name=usrsharejobs-cv5g-use1c 5
StarCluster - (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to

>>> No keypair specified, picking one from config...
>>> Using keypair: lapuserkey
>>> Creating security group @sc-volumecreator...
>>> No instance in group @sc-volumecreator for zone us-east-1c, launching
one now.
>>> Waiting for volume host to come up... (updating every 30s)
>>> Waiting for all nodes to be in a 'running' state...
1/1 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
>>> Waiting for SSH to come up on all nodes...
1/1 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
>>> Waiting for cluster to come up took 1.447 mins
>>> Checking for required remote commands...
>>> Creating 5GB volume in zone us-east-1c
>>> New volume id: vol-53600b22
>>> Waiting for new volume to become 'available'...
>>> Attaching volume vol-53600b22 to instance i-6b714b1b...
>>> Formatting volume...
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
mke2fs 1.41.14 (22-Dec-2010)

>>> Leaving volume vol-53600b22 attached to instance i-6b714b1b
>>> Not terminating host instance i-6b714b1b
*** WARNING - There are still volume hosts running: i-6b714b1b
*** WARNING - Run 'starcluster terminate volumecreator' to terminate *all*
volume host instances once they're no longer needed
>>> Your new 5GB volume vol-53600b22 has been created successfully
>>> Creating volume took 1.871 mins

.starcluster mary$ sc terminate volumecreator
StarCluster - (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to

Terminate EBS cluster volumecreator (y/n)? y
>>> Detaching volume vol-53600b22 from volhost-us-east-1c
>>> Terminating node: volhost-us-east-1c (i-6b714b1b)
>>> Waiting for cluster to terminate...
>>> Removing @sc-volumecreator security group

.starcluster mary$ sc listvolumes

volume_id: vol-53600b22
size: 5GB
status: available
availability_zone: us-east-1c
create_time: 2013-02-12 13:12:16
tags: Name=usrsharejobs-cv5g-use1c
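
(For reference, the volume is wired into the cluster template roughly as
follows; section and option names are per the StarCluster config docs, and the
values are taken from the output above:)

```ini
# Volume section: maps the EBS volume to a mount point on the cluster.
[volume usrsharejobs]
VOLUME_ID = vol-53600b22
MOUNT_PATH = /usr/share/jobs

# Cluster template: references the volume and pins nodes to the volume's
# availability zone so the attach can succeed.
[cluster jobscluster]
VOLUMES = usrsharejobs
AVAILABILITY_ZONE = us-east-1c
```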


So here is the subsequent attempt to boot a cluster that tries to mount the
new EBS volume:

.starcluster mary$ sc start -b 0.25 -i m1.small -I m1.small -c jobscluster
StarCluster - (v. 0.93.3)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to

*** WARNING - ************************************************************
*** WARNING - Spot instances can take a long time to come up and may not
*** WARNING - come up at all depending on the current AWS load and your
*** WARNING - max spot bid price.
*** WARNING - StarCluster will wait indefinitely until all instances (2)
*** WARNING - come up. If this takes too long, you can cancel the start
*** WARNING - command using CTRL-C. You can then resume the start command
*** WARNING - later on using the --no-create (-x) option:
*** WARNING - $ starcluster start -x jobscluster
*** WARNING - This will use the existing spot instances launched
*** WARNING - previously and continue starting the cluster. If you don't
*** WARNING - wish to wait on the cluster any longer after pressing CTRL-C
*** WARNING - simply terminate the cluster using the 'terminate' command.
*** WARNING - ************************************************************

*** WARNING - Waiting 5 seconds before continuing...
*** WARNING - Press CTRL-C to cancel...
>>> Validating cluster template settings...
>>> Cluster template settings are valid
>>> Starting cluster...
>>> Launching a 2-node cluster...
>>> Launching master node (ami: ami-4b9f0a22, type: m1.small)...
>>> Creating security group @sc-jobscluster...
>>> Launching node001 (ami: ami-4b9f0a22, type: m1.small)
>>> Waiting for cluster to come up... (updating every 30s)
>>> Waiting for open spot requests to become active...
1/1 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
>>> Waiting for all nodes to be in a 'running' state...
2/2 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
>>> Waiting for SSH to come up on all nodes...
2/2 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
>>> Waiting for cluster to come up took 6.245 mins
>>> The master node is
>>> Setting up the cluster...
>>> Attaching volume vol-53600b22 to master node on /dev/sdz ...
>>> Configuring hostnames...
2/2 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
*** WARNING - Cannot find device /dev/xvdz for volume vol-53600b22
*** WARNING - Not mounting vol-53600b22 on /usr/share/jobs
*** WARNING - This usually means there was a problem attaching the EBS
volume to the master node

So, per the relevant past email threads, I'm using the createvolume
command, and it still gives this error. I also tried creating the volume
through the AWS console; the subsequent cluster boot fails at the same
point, with the same problem of not finding the device.
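
(One detail that may be relevant: the log shows the volume being attached as
/dev/sdz but the mount step looking for /dev/xvdz. Newer Linux kernels expose
Xen block devices under the xvd prefix even when they are attached under sd,
which is the renaming StarCluster is anticipating. A minimal sketch of that
translation, with the device names taken from the log above:)

```shell
#!/bin/sh
# Newer kernels rename attach-time /dev/sdX devices to /dev/xvdX;
# StarCluster attaches the volume as /dev/sdz, then looks for /dev/xvdz.
attach_dev=/dev/sdz
kernel_dev=$(echo "$attach_dev" | sed 's|/dev/sd|/dev/xvd|')
echo "$kernel_dev"
```

On the master node, running `ls /dev/xvd*` (or checking `dmesg`) right after
the attach step would show which name the kernel actually assigned; if it
differs from /dev/xvdz, mounting the volume manually under the real name would
at least confirm the volume and filesystem themselves are fine.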

I'll appreciate any suggestions.

Thanks much,
Received on Tue Feb 12 2013 - 16:50:20 EST
This archive was generated by hypermail 2.3.0.

