StarCluster - Mailing List Archive

Re: working with EBS volumes

From: Justin Riley <no email>
Date: Thu, 14 Oct 2010 16:58:34 -0400

Hi Adam,

Sorry for the late response.

There is some magic that occurs when creating users in order to avoid
having to recursively chown /home folders that might contain hundreds
of gigabytes of data. Basically, StarCluster inspects the top-level
folders under /home, and if the CLUSTER_USER's home folder already
exists, the CLUSTER_USER is created with the same uid/gid as the
existing home folder to avoid a recursive chown. Otherwise,
StarCluster looks at the uid/gid of the other directories in /home,
takes the highest uid/gid it finds, and adds 1 to get the uid/gid for
the CLUSTER_USER. If that calculation ends up with a uid/gid less
than 1000, it defaults to 1000 for the CLUSTER_USER's uid/gid.
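
In rough Python terms the selection looks something like this (a
paraphrase for illustration only, not StarCluster's actual code; the
pick_uid_gid helper is hypothetical):

    import os

    def pick_uid_gid(cluster_user, home='/home'):
        # If the user's home folder already exists, reuse its
        # uid/gid so no recursive chown is needed.
        user_home = os.path.join(home, cluster_user)
        if os.path.isdir(user_home):
            st = os.stat(user_home)
            return st.st_uid, st.st_gid
        # Otherwise take the highest uid/gid among the existing
        # home directories, plus 1, with a floor of 1000.
        uid = gid = 0
        for name in os.listdir(home):
            path = os.path.join(home, name)
            if not os.path.isdir(path):
                continue
            st = os.stat(path)
            uid = max(uid, st.st_uid)
            gid = max(gid, st.st_gid)
        return max(uid + 1, 1000), max(gid + 1, 1000)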

A couple of questions that might help me understand what happened:

1. I'm assuming you must have had MOUNT_PATH=/home for the volume in
your cluster template's VOLUMES list, correct?

2. Did your volume already contain a 'sgeadmin' folder at the root of
the volume?

3. What does "ls -l" look like on the root of the volume that exhibits
this behavior?
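
A quick way to see the numeric uid/gid values (names can differ
between machines even when the ids match) is "ls -ln", or in Python,
assuming the volume is mounted at /home and the user folder is named
'sgeadmin':

    python -c 'import os; s = os.stat("/home/sgeadmin"); print s.st_uid, s.st_gid'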

Also, you will find useful information about the uid/gid chosen by
StarCluster for the CLUSTER_USER in your debug file:

/tmp/starcluster-debug-<your_username>.log

(if you're on a Mac, this file will be in the directory returned by
"python -c 'import tempfile; print tempfile.gettempdir()'")

Just grepping for gid or uid in your log file(s) should print out the
relevant messages: "grep -ri gid /tmp/starcluster-*"

~Justin


On 10/3/10 5:19 PM, Adam Marsh wrote:
>
> I've had some challenges getting SC running correctly when EBS
> volumes are mounted to the head node during configuration. I
> initially set up the EBS volumes with database files by first
> configuring and mounting each volume on whatever EC2 VM I had
> running at the time. By default, most of the time I was working as
> user 'ubuntu'. However, whenever an EBS volume with files or folders
> owned by user and group 'ubuntu' was included in the VOLUMES list of
> the SC config file and mounted during setup to the head node, two
> odd things occurred:
> 1. when the cluster_user account was set up by SC (e.g. 'sgeadmin'),
> the owner and group of the sgeadmin folder under /home were 'ubuntu';
> 2. connecting via ssh to the sgeadmin account always defaulted to
> logging in to the 'ubuntu' user account.
>
> I worked around the problem by changing the owner/group settings on
> all EBS folders/files to the cluster_user name used in the config
> file. All works fine now.
>
> Is this just a rare instance of SC behavior? If not, is there a
> better way to prepare EBS volumes for use with SC that avoids
> owner/group conflicts?
>
> Thanks,
>
> Adam

