Hello,
I hate to be a headache, but this didn't go as smoothly as I was hoping, and I'd appreciate your support to get moving.
I finally managed to attach the volume I created, but I can't see where it ends up on the cluster, or how my data is supposed to be saved from session to session.
The volume I created is 30 GB. I first set its mount path to /mydata, but didn't see it when I started the cluster; this is what I get:
root@ip-10-16-3-102:/dev# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *       16065    16771859     8377897+  83  Linux

Disk /dev/xvdb: 901.9 GB, 901875499008 bytes
255 heads, 63 sectors/track, 109646 cylinders, total 1761475584 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn't contain a valid partition table

Disk /dev/xvdc: 901.9 GB, 901875499008 bytes
255 heads, 63 sectors/track, 109646 cylinders, total 1761475584 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdc doesn't contain a valid partition table

root@ip-10-16-3-102:/dev# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.9G  5.1G  2.5G  68% /
udev             12G  4.0K   12G   1% /dev
tmpfs           4.5G  216K  4.5G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none             12G     0   12G   0% /run/shm
/dev/xvdb       827G  201M  785G   1% /mnt
There was no 30 GB volume attached. I then terminated the cluster and followed the suggestions on this page:
http://web.mit.edu/star/cluster/docs/latest/manual/configuration.html
changing the mount path to /home, thinking the volume would be used in place of the /home folder, so that all my installations and downloads would be preserved after I terminate the session.
However, when I started the cluster, this is what I got:
root@ip-10-16-24-98:/home# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.9G  5.1G  2.5G  68% /
udev             12G  4.0K   12G   1% /dev
tmpfs           4.5G  216K  4.5G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none             12G     0   12G   0% /run/shm
/dev/xvdb       827G  201M  785G   1% /mnt
Again there is no 30 GB volume, and neither / nor /mnt got any bigger.
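In case it helps with debugging, next time the cluster is up I can also check for the raw device directly on the master, along these lines (the /dev/xvdf name is only my guess at where Ubuntu would put a volume attached as /dev/sdf):

cat /proc/partitions   # the kernel's view of every attached disk
ls -l /dev/xvd*        # should include /dev/xvdf if the volume attached
mount | grep xvd       # shows which of them are actually mounted

Judging from the fdisk -l output above, though, only sda, xvdb and xvdc are there, so the 30 GB volume doesn't seem to be attached at all.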
Here is what I have in my config file:
[cluster mycluster]
VOLUMES = mydata

[volume mydata]
# attach vol-c9999999 to /home on the master node and NFS-share it to the worker nodes
VOLUME_ID = vol-c9999999  # (this is the volume ID I got from the AWS console)
MOUNT_PATH = /home        # (not sure this is right; I used /mydata on the first run and that didn't work either)
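If it is useful, I can also watch the volume's state from my local machine with the EC2 command line tools (see my P.S. below), e.g.:

ec2-describe-volumes vol-c9999999

to see whether it ever reaches the "attached" state while the cluster is starting.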
Also: before attaching the volume, the starcluster put and starcluster get commands were working very well. After attaching the volume, they still run and report 100% complete on my local machine, but when I log in to the cluster, the paths I was uploading the files to are empty; no files went through! I am not sure whether this is related to attaching the volume, or whether there is anything else I need to do.
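For reference, the transfers I was running are of this form (the paths are placeholders, not the real ones):

starcluster put mycluster /path/on/my/machine /path/on/cluster
starcluster get mycluster /path/on/cluster /path/on/my/machine

Both report 100% complete, yet the destination path on the master is empty afterwards.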
P.S. I noticed that with the EC2 command line tools, attaching a volume to an instance requires the volume ID, the instance ID, and a device name (e.g. /dev/sdf), the same as in the AWS online console. However, the mount path in the StarCluster configuration file doesn't seem to be a device name like /dev/sdf, as far as I understand. I'm not sure where to define this in StarCluster, if that is the missing piece.
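For example, by hand I would run something like this (the instance ID is a placeholder):

ec2-attach-volume vol-c9999999 -i i-xxxxxxxx -d /dev/sdf

but I don't see anywhere in the StarCluster config to give that /dev/sdf part, so perhaps StarCluster chooses the device by itself?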
I appreciate your help very much,
thanks again,
Manal