Ah, perhaps I've answered my own question. Having created the larger
volume, I simply started a cluster with that volume attached.
It looks like it already sees the larger volume without my having to
rewrite the partition table or do something like grow_volume. Is this
correct?
I ran fdisk -l :
[root@domU-12-31-39-0C-DC-22 home]# fdisk -l
...
Disk /dev/sdz: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdz1               1        2610    20964824+  83  Linux
....
Does this confirm that the larger 500 GB volume is fully "seen"? (And
if so, does it mean that nothing like resize2fs or xfs_growfs is
necessary? I do notice that /dev/sdz1 only shows about 20 GB of
blocks, while the disk itself reports 536.8 GB.)
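
(For reference, in case the partition does still need growing: the
sequence I gather from the capsunlock guide linked below would be
roughly the following -- untested on my end, and assuming the ext3
filesystem is on /dev/sdz1 and mounted at /home:

umount /home              # offline resize is the safe route for ext3
fdisk /dev/sdz            # delete partition 1, recreate it with the SAME
                          #   starting cylinder but ending at the last
                          #   cylinder (65270), then write the table
e2fsck -f /dev/sdz1       # resize2fs requires a freshly checked filesystem
resize2fs /dev/sdz1       # grow ext3 to fill the enlarged partition
mount /dev/sdz1 /home     # remount; df -h should now show ~500 GB

If resize2fs reports that the filesystem already fills the partition,
then presumably nothing more is needed.)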
Thanks -- and again, sorry for the ignorant / OT questions
Dan
On Thu, Aug 12, 2010 at 11:23 AM, Dan Yamins <dyamins@gmail.com> wrote:
> I've got a 20 GB volume, created using starcluster createvolume, that
> I want to enlarge to 1 TB. I've tried to follow these instructions:
>
> http://www.capsunlock.net/2009/06/enlarge-amazon-ebs-volume.html
>
> for ext3 filesystems.
>
> However, I think I'm using the wrong fs type, since -- even before trying
> to re-write the partition table -- I can't mount the drive as ext3, e.g.
>
> [root@domU-12-31-39-0C-DC-22 /]# mount -t ext3 /dev/sdi /home
> mount: wrong fs type, bad option, bad superblock on /dev/sdi,
> missing codepage or other error
> In some cases useful info is found in syslog - try
> dmesg | tail or so
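>
> (Presumably something like the following would reveal what filesystem
> is actually on the device; file -s and blkid are standard tools, so
> this is just my guess at the right incantation:)
>
> file -s /dev/sdi    # print the filesystem signature, if any
> blkid /dev/sdi      # print TYPE="ext3", TYPE="xfs", etc., if recognized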
>
>
> Similarly, I can't mount it as xfs either. Knowing that nfs is somehow
> involved here, I tried:
>
> [root@domU-12-31-39-0C-DC-22 /]# mount -t nfs /dev/sdi /home
> mount: directory to mount not in host:dir format
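>
> (I gather from that error that an NFS mount source is host:directory
> rather than a block device -- e.g., hypothetically, with "master" as a
> made-up NFS server exporting /home:
>
> mount -t nfs master:/home /home
>
> -- though that doesn't look like what I actually want here anyway.)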
>
> I'm not at all familiar with the details of how nfs works ... So, what
> is the correct procedure for me to follow?
>
> Thanks!
> Dan
>