Re: Adding New NFS Volumes to Cluster
Hi Dave,
There is a plugin that helps define NFS shares. I believe it sets up the NFS sharing for newly added nodes as well, but I think your cluster has to be started with the plugin in place.
Here is a link to a discussion defining this plugin:
https://github.com/jtriley/StarCluster/issues/44
However, I am not sure this is what you are looking for.
I am using this in my system and happy with it.
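For reference, plugins like that get wired in through the StarCluster config file. A rough sketch of what that usually looks like (the plugin name and SETUP_CLASS path below are only placeholders, not the actual module from that issue):

[plugin nfsshares]
SETUP_CLASS = yourmodule.nfsshares.NfsShares

[cluster YourClusterTemplate]
PLUGINS = nfsshares

With something like that in the config, the plugin's setup hooks should run when the cluster starts and again when nodes are added.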
Jacob
Sent from my iPhone
On Aug 27, 2014, at 9:41 PM, Jennifer Staab <jstaab_at_cs.unc.edu> wrote:
> I'm going to assume you are using NFS for volumes mounted on the master node of your cluster; the answer is different if you are using NFS between your cluster and an EC2 instance outside your cluster but within the same VPC.
>
> You should be able to use the config file to define the volumes you want mounted and reference them in your cluster template. Note that only one cluster can have a given volume mounted at any one time. If your volume is mounted to another cluster (running or not), it cannot be mounted to the new cluster you create (though this might be the issue you are having).
> ######## CONFIG WILL HAVE SOMETHING LIKE ###########
> [cluster YourClusterTemplate]
> VOLUMES = YourVolume
> # Sections starting with "volume" define your EBS volumes
> [volume YourVolume]
> VOLUME_ID = vol-xxxxxxxxx
> MOUNT_PATH = /YourMountPath
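> 
> With a template like that in the config, starting a cluster from it would be something along the lines of (the cluster name "mycluster" is just an example):
> 
> starcluster start -c YourClusterTemplate mycluster
> 
> and the volume should come up attached to the master and NFS-shared to the other nodes under /YourMountPath.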
>
> Now, if you have a running cluster and you want to mount other volumes so that they are seen by all nodes on that cluster, there is a way to do this, but it requires that you have the "crossmnt" option specified in "/etc/exports" for "/home" and/or "/YourMountPath". I edited my copy of StarCluster so that the "crossmnt" option was set automatically, which lets me mount new volumes underneath the volumes that were mounted via the StarCluster config.
> LIKE /etc/exports:
> /home YourCluster(async,no_root_squash,no_subtree_check,rw,crossmnt)
> /YourMountPath YourCluster(async,no_root_squash,no_subtree_check,rw,crossmnt)
> /opt/sge6 YourCluster(async,no_root_squash,no_subtree_check,rw)
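> 
> As a quick sanity check (my suggestion, not something StarCluster does for you), on the Master node you can confirm the option made it into the file and inspect the currently active export options with:
> 
> grep crossmnt /etc/exports
> sudo exportfs -v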
>
> Then you can simply use the AWS console to attach your new volume to the running Master node. Next, log in to the Master node and use the "mount" command to mount the newly attached volume underneath an already mounted volume, LIKE "mount /dev/xvdXX /home/NewVolume" OR "mount /dev/xvdXX /YourMountPath/NewVolume". This trick only works if you have the "crossmnt" option set in /etc/exports. If you edit /etc/exports to add the "crossmnt" option, you will have to run "exportfs -r" or restart NFS on the Master node to reload the /etc/exports file. The command "lsblk" should show you the new volumes all mounted.
> 
> Admittedly, I haven't personally tried to mount under /home -- I just suggested it because StarCluster automatically shares /home among all nodes of the cluster. One hard-and-fast AWS rule is that an EBS volume can only be attached to one EC2 instance at a time; this is why the EBS volume is attached and mounted to the Master node, and NFS is used to make it visible to all nodes of the cluster.
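> 
> Putting those steps together, the manual sequence on the Master node is roughly the following (the device name /dev/xvdf and the NewVolume directory are just assumptions for the example):
> 
> # attach the new EBS volume to the Master node in the AWS console, then:
> sudo mkdir -p /YourMountPath/NewVolume
> sudo mount /dev/xvdf /YourMountPath/NewVolume
> # if "crossmnt" was just added to /etc/exports, reload the export table:
> sudo exportfs -r
> # verify the new volume is mounted where you expect:
> lsblk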
>
> Hope this helps,
>
> -Jennifer
>
>
>
> On 8/27/14 7:26 PM, Dave Lin wrote:
>> I was trying to figure out the best way to add new volumes (NFS mounted) to a running cluster.
>>
>> I searched through the archives and found this open feature request https://github.com/jtriley/StarCluster/issues/333
>>
>> 1) Is there a suggested process or plugin for doing this?
>>
>> 2) One way I've been doing this is to just modify the config, terminate and restart the cluster. When I tried to restart the cluster after modifying the config, the new volumes didn't seem to get mounted. Is the restart supposed to read the config again?
>>
>> Thanks in advance,
>> Dave
>>