StarCluster - Mailing List Archive

Re: EBS vol on starcluster

From: Justin Riley <no email>
Date: Fri, 04 Mar 2011 14:19:16 -0500

Hi Archie/Stuart,

Sorry for the delay. The "createvolume" command can do all of this for you
(launch an instance, create the volume, attach it, partition it, and format
it) automatically; there's no need to do any of it by hand. The instructions
in the docs are just a reference for those interested in doing things
manually.

Archie, the error you were getting is because the current 0.91.2 version
requires partitioned EBS volumes. The latest github code removes this
limitation and allows using *both* unpartitioned and partitioned
volumes. So for now you can either use "createvolume" to create a new
partitioned volume and sync the data from your old volume OR use the
latest github code if you're interested in testing the new unpartitioned
volume support.
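For reference, the one-command route looks roughly like this (the volume
name, size, and zone below are placeholders; the exact flags vary by
version, so check "starcluster createvolume --help"):

```shell
# Have StarCluster launch a temporary volume-host instance, then create,
# attach, partition, and format a new 40 GB volume in us-east-1a.
# (Placeholder name/size/zone -- adjust to your setup.)
starcluster createvolume --name=data 40 us-east-1a
```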

HTH,

~Justin



On 02/04/2011 12:34 AM, Stuart Young wrote:
> Hi Archie,
>
> I came across the same problem and it turned out to be due to the volume
> not being partitioned although it was formatted correctly.
>
> This documentation on Stardev covers manually partitioning and formatting:
> http://web.mit.edu/stardev/cluster/docs/create_volume_manually.html#partitioning-and-formatting-the-new-volume
>
> Try the following steps to create the volume on 'myInstance', a separate
> (i.e., non-StarCluster) EC2 instance, then detach it ready for
> incorporation into your new StarCluster cluster. (NB: Expected output
> is indented - you should see something like it when you run the commands.)
>
> 1. ON myInstance, CREATE A VOLUME (using your volume name as an example):
>
> ec2-create-volume --availability-zone us-east-1a --size 40
>
> VOLUME vol-521d803a 40 us-east-1a creating 2011-01-05T15:34:28+0000
>
> ec2-attach-volume vol-521d803a -i i-b42f3fd9 -d /dev/sdz
>
> ATTACHMENT vol-521d803a i-b42f3fd9 /dev/sdz attaching 2011-01-05T15:36:28+0000
>
> ec2-describe-volumes
>
>
> 2. PARTITION THE VOLUME WITH ONE LINUX PARTITION USING THE WHOLE VOLUME
> (NB: ext2 is the format used on the StarCluster AMI partitions, but ext3
> is fine too; step 3 below creates an ext3 filesystem, which is backward
> compatible and can also be mounted as ext2.)
>
> echo ",,L" | sfdisk -L /dev/sdz
>
> Checking that no-one is using this disk right now ...
> OK
>
> Disk /dev/sdz: 5221 cylinders, 255 heads, 63 sectors/track
> Old situation:
> Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
>
> Device Boot Start End #cyls #blocks Id System
> /dev/sdz1 0+ 5220 5221- 41937682 83 Linux
> /dev/sdz2 0 - 0 0 0 Empty
> /dev/sdz3 0 - 0 0 0 Empty
> /dev/sdz4 0 - 0 0 0 Empty
> New situation:
> Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
>
> Device Boot Start End #cyls #blocks Id System
> /dev/sdz1 0+ 5220 5221- 41937682 83 Linux
> /dev/sdz2 0 - 0 0 0 Empty
> /dev/sdz3 0 - 0 0 0 Empty
> /dev/sdz4 0 - 0 0 0 Empty
> Warning: no primary partition is marked bootable (active)
> This does not matter for LILO, but the DOS MBR will not boot this disk.
> Successfully wrote the new partition table
>
> Re-reading the partition table ...
>
> If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
> to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
>
>
> 3. FORMAT THE NEWLY CREATED PARTITION (NB: ***/dev/sdz1*** ):
>
> mkfs.ext3 /dev/sdz1
>
> mke2fs 1.39 (29-May-2006)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> 5242880 inodes, 10484420 blocks
> 524221 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=4294967296
> 320 block groups
> 32768 blocks per group, 32768 fragments per group
> 16384 inodes per group
> Superblock backups stored on blocks:
> 32768, 98304, 163840, 229376, 294912, 819200, 884736,
> 1605632, 2654208,
> 4096000, 7962624
>
> Writing inode tables: done
> Creating journal (32768 blocks): done
> Writing superblocks and filesystem accounting information: done
>
> This filesystem will be automatically checked every 36 mounts or
> 180 days, whichever comes first. Use tune2fs -c or -i to override.
>
>
> 4. MOUNT THE NEWLY CREATED PARTITION ON myInstance (NB: ***/dev/sdz1*** ):
> mkdir -p /scvol
> mount -t ext3 /dev/sdz1 /scvol
>
> 5. COPY OVER DATA FROM /data TO /scvol
> cp -rp /data/* /scvol
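> One caveat with the glob above: "/data/*" skips dot-files at the top
> level. Copying the directory contents with "/." picks them up, and
> "diff -r" confirms the copy. A quick rehearsal on throwaway directories
> (hypothetical paths, not the real /data):

```shell
# Set up a source directory containing a regular file and a dot-file.
mkdir -p /tmp/data_demo /tmp/scvol_demo
echo hello  > /tmp/data_demo/file.txt
echo secret > /tmp/data_demo/.hidden
# "src/." copies the directory's contents, dot-files included.
cp -rp /tmp/data_demo/. /tmp/scvol_demo/
# diff -r exits 0 only if the trees match exactly.
diff -r /tmp/data_demo /tmp/scvol_demo && echo "copy verified"
```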
>
> 6. UNMOUNT DEVICE AND DETACH VOLUME FROM myInstance
> umount /dev/sdz1
> ec2-detach-volume vol-521d803a
>
>
> 7. ADD [volume ...] SECTION TO STARCLUSTER CONFIG
> (You can call it anything you like but I used 'data' mounting to the
> folder '/data'.)
>
> [volume data]
> DEVICE=/dev/sdz
> MOUNT_PATH=/data
> PARTITION=1
> VOLUME_ID=vol-521d803a
>
> 8. LAUNCH YOUR STARCLUSTER INSTANCE
> (E.g., 'smallcluster')
>
> starcluster -c /full/path/to/config start smallcluster
>
>
> Hope that helps?
>
> Cheers,
>
> Stuart
>
>
>
>
>
>
>
> On 2/3/2011 5:13 PM, Archie Russell wrote:
>>
>> Hi,
>>
>> Thanks for the help so far guys, I got starcluster to fire up an AWS
>> cluster (config file needed a strategic " ")
>>
>> I am trying to mount a volume now and getting this error:
>>
>> clustersetup.py:200 - WARNING - Cannot find partition /dev/sdz1 on
>> volume vol-521d803a
>> clustersetup.py:202 - WARNING - Not mounting vol-521d803a on /bioreference
>> clustersetup.py:204 - WARNING - This either means that the volume has
>> not been partitioned or that the partition specified does not exist on
>> the volume
>>
>> I've mounted this volume before and it worked OK, but never dealt with
>> partitions. What should I do?
>>
>> Thanks,
>> Archie
>>
>>
>> _______________________________________________
>> StarCluster mailing list
>> StarCluster_at_mit.edu
>> http://mailman.mit.edu/mailman/listinfo/starcluster
>

Received on Fri Mar 04 2011 - 14:19:23 EST