StarCluster - Mailing List Archive

Re: NFS Help !!!!

From: Jennifer Staab <no email>
Date: Fri, 5 Jun 2015 11:47:47 -0400

Something doesn't seem right with how you are mounting your volumes. If
you are mounting a volume, make sure that volume exists in your AWS account
and is not attached to any EC2 instance. Next, make sure the mount point --
let's say it's called /MyData -- exists as an empty directory on both the
Master and Worker AMIs. Make certain your OS is up-to-date on all AMIs
(workers and master). Also make sure your volumes and EC2 instances are in
the same availability zone.
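
For example, a rough sketch of that check with the AWS CLI (using the
geodata volume ID from your config; the zone should match wherever your
cluster launches):

    # Volume should be "available" (not attached) and in your cluster's zone
    aws ec2 describe-volumes --volume-ids vol-17feadfc \
        --query 'Volumes[0].[State,AvailabilityZone]'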

Try this: create an empty volume and attach it to an EC2 instance based
upon your Master Node AMI. Then mount it on that Master Node instance (not
using StarCluster) -- make sure it mounts successfully and you can write
some temporary text files to it. Next, unmount it and detach it from the
instance. Once it is unmounted and detached, create a new cluster and see
if you can get that cluster to successfully share that volume over NFS
between Master & Workers. If you can get that volume to mount and share
successfully between Master and Workers, then it is likely something about
the volumes themselves and/or how you are mounting them that's causing the
problem.
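
As a rough sketch of that manual test (device names are just examples --
a volume attached as /dev/sdf typically shows up as /dev/xvdf on Ubuntu):

    # On the instance built from your Master Node AMI, after attaching
    # the empty volume as /dev/sdf in the AWS console:
    sudo mkfs.ext4 /dev/xvdf                # format the brand-new volume
    sudo mkdir -p /MyData                   # empty mount point
    sudo mount /dev/xvdf /MyData
    echo hello | sudo tee /MyData/test.txt  # confirm you can write to it
    sudo umount /MyData                     # then detach the volume in AWS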

If you can't get that empty volume to share over NFS between Master &
Workers, then it is likely something regarding NFS or the OS you have
running on the AMIs. Make sure you are using an up-to-date OS and software
and that your mount points are empty directories. Also, don't mount under
an existing mount point unless you are using the crossmnt option in
/etc/exports. StarCluster doesn't use the crossmnt option -- I altered my
version of StarCluster to allow it -- but it isn't a native feature.
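
For reference, a crossmnt export in /etc/exports looks roughly like this
(the hostname and path are just examples):

    # /etc/exports on the master -- crossmnt lets clients see filesystems
    # mounted underneath the exported directory
    /MyData  node001(rw,sync,no_subtree_check,crossmnt)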

Next, I would try running NFS by hand on an EC2 instance based upon the
Master Node AMI with that empty volume mounted, and see if you can
successfully share that volume with an instance based upon your Worker
Node AMI. My guess is that if you can set NFS up successfully without
running StarCluster, you will likely see along the way where the NFS
problem is.
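
A minimal by-hand NFS setup on Ubuntu-based AMIs might look like the
following (package names and paths are assumptions -- adjust for your OS):

    # On the master (NFS server), with the volume mounted at /MyData:
    sudo apt-get install nfs-kernel-server
    echo "/MyData *(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
    sudo exportfs -ra

    # On the worker (NFS client):
    sudo apt-get install nfs-common
    sudo mkdir -p /MyData
    sudo mount -t nfs master:/MyData /MyData  # "master" = master's hostname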

Good Luck!

-Jennifer


On Fri, Jun 5, 2015 at 9:28 AM, Andrey Evtikhov <Andrey_Evtikhov_at_epam.com>
wrote:

> Really need help with this issue – I just created a new cluster and
> still have the same mess – everything is fine on the master and a
> complete mess on the NFS client:
>
>
>
> Except /home – home mounted OK!
>
>
>
> root_at_node001:/geodata# ls -l
>
> ls: cannot access biodata: Too many levels of symbolic links
>
> ls: cannot access geodata: Too many levels of symbolic links
>
>
>
> in config:
>
> VOLUMES = geodata, biodata
>
> [volume geodata]
>
> VOLUME_ID = vol-17feadfc
>
> MOUNT_PATH = /geodata
>
>
>
> [volume biodata]
>
> VOLUME_ID = vol-f3fdae18
>
> MOUNT_PATH = /biodata
>
>
>
> This is how it looks on the client (/biodata):
>
> total 8
>
> d????????? ? ? ? ? ? biodata
>
> d????????? ? ? ? ? ? geodata
>
> drwxr-xr-x 4 root root 4096 Jun 5 13:11 home
>
> drwxr-xr-x 4 root root 4096 Jun 5 13:11 opt
>
> Andrey Evtikhov
>
> Lead Software Maintenance Engineer
>
> Email: andrey_evtikhov_at_epam.com
>
> Saint-Petersburg, Russia (GMT+3) epam.com <http://www.epam.com>
>
> From: Andrey Evtikhov
> Sent: Tuesday, June 2, 2015 9:20 PM
> To: 'starcluster_at_mit.edu'
> Subject: Cannot mount properly /home over NFS
>
>
>
> Cannot mount properly /home
>
>
>
> VOLUMES = mycompany-data
>
>
>
> [volume mycompany-data]
>
> VOLUME_ID = vol-xxxxxxx
>
> MOUNT_PATH = /home
>
>
>
> Master seems to be OK, but the nodes have home mounted inside home!
>
> root_at_prod-sc-triad-node001:/home# cd /home
>
> root_at_prod-sc-triad-node001:/home# ls -l
>
> ls: cannot access home: Too many levels of symbolic links
>
> total 4
>
> d????????? ? ? ? ? ? home
>
> drwxr-xr-x 4 root root 4096 Apr 30 17:55 opt
>
> Andrey Evtikhov
>
> _______________________________________________
> StarCluster mailing list
> StarCluster_at_mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster
>
>
Received on Fri Jun 05 2015 - 11:47:50 EDT
This archive was generated by hypermail 2.3.0.
