Thanks Justin, I got it.
The trick is to put that empty " " before
AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, and what tripped me up was that
there is no space before the value of AWS_USER_ID.
Not sure why it's needed, but with it things seem to work and without it
they don't.
-A
PS The only other person online I found with this problem attributed it to
character set encoding issues with Python 2.7.
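For what it's worth, the whitespace/quoting behavior discussed in this thread can be sketched with Python's stdlib configparser. This is only an illustration (configparser is a stand-in here, not necessarily the exact parser StarCluster uses): an INI-style parser strips whitespace around the "=" but keeps surrounding quotes as literal characters of the value, which is one way quoted keys break authentication.

```python
import configparser

# Unquoted vs. quoted values in an [aws info] section. The parser keeps
# the quote characters as part of the value, so they would be sent to
# AWS as part of the key and authentication would fail.
unquoted = "[aws info]\nAWS_ACCESS_KEY_ID = 2S44JYKSZZ3D44ETWF02\n"
quoted = '[aws info]\nAWS_ACCESS_KEY_ID = "2S44JYKSZZ3D44ETWF02"\n'

cp = configparser.ConfigParser()
cp.read_string(unquoted)
print(cp["aws info"]["AWS_ACCESS_KEY_ID"])  # 2S44JYKSZZ3D44ETWF02

cp_quoted = configparser.ConfigParser()
cp_quoted.read_string(quoted)
print(cp_quoted["aws info"]["AWS_ACCESS_KEY_ID"])  # "2S44JYKSZZ3D44ETWF02"
```

Note the second value still contains the double quotes; whitespace around "=" is stripped either way.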
Hi Archie,
Are you using quotes in your AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY?
Also please be sure to double check you're not missing any characters
at the end or leaving any characters out. You can copy your access key
and secret access key directly from this page:
http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key
You will need to click the "Show" link next to your "Access Key ID" in
order to access your secret access key.
They should look something like the example keys shown here:
http://s3.amazonaws.com/mturk/tools/pages/aws-access-identifiers/access_key_data.png
When you paste them into your config file the result should look
something like (no quotes!):
[aws info]
AWS_ACCESS_KEY_ID = 2S44JYKSZZ3D44ETWF02
AWS_SECRET_ACCESS_KEY = kl1GhCR9L9m0pw5bYR0+GyC1jsy5Fd0g/B\xjM/A
Hope that helps,
~Justin
On Wed, Feb 2, 2011 at 3:19 AM, <starcluster-request_at_mit.edu> wrote:
> Send StarCluster mailing list submissions to
> starcluster_at_mit.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mailman.mit.edu/mailman/listinfo/starcluster
> or, via email, send a message with subject or body 'help' to
> starcluster-request_at_mit.edu
>
> You can reach the person managing the list at
> starcluster-owner_at_mit.edu
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of StarCluster digest..."
>
>
> Today's Topics:
>
> 1. Re: help starting up cluster (Justin Riley)
> 2. Re: Privilege level Amazon GPU AMI. (Justin Riley)
> 3. Re: ERROR - "createvolume" failure due to missing AMI in
> eu-west zone (Justin Riley)
> 4. Re: Starcluster and elastic load balancing
> (Kyeong Soo (Joseph) Kim)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 1 Feb 2011 12:36:20 -0500
> From: Justin Riley <justin.t.riley_at_gmail.com>
> Subject: Re: [StarCluster] help starting up cluster
> To: starcluster_at_mit.edu
> Message-ID:
> <AANLkTikD5hQ7g4U9iLKWUpii+a3E7wBQLE_WhmwBNFAJ_at_mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi Archie,
>
> Are you using quotes in your AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY?
> Also please be sure to double check you're not missing any characters
> at the end or leaving any characters out. You can copy your access key
> and secret access key directly from this page:
>
>
> http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key
>
> You will need to click the "Show" link next to your "Access Key ID" in
> order to access your secret access key.
>
> They should look something like the example keys shown here:
>
>
> http://s3.amazonaws.com/mturk/tools/pages/aws-access-identifiers/access_key_data.png
>
> When you paste them into your config file the result should look
> something like (no quotes!):
>
> [aws info]
> AWS_ACCESS_KEY_ID = 2S44JYKSZZ3D44ETWF02
> AWS_SECRET_ACCESS_KEY = kl1GhCR9L9m0pw5bYR0+GyC1jsy5Fd0g/B\xjM/A
>
> Hope that helps,
>
> ~Justin
>
> On Mon, Jan 31, 2011 at 9:37 PM, Archie Russell <archier_at_gmail.com> wrote:
> >
> > I get this error when I startup
> > % starcluster -d start myfirstcluster
> > StarCluster - (http://web.mit.edu/starcluster)
> > Software Tools for Academics and Researchers (STAR)
> > Please submit bug reports to starcluster_at_mit.edu
> > config.py:281 - DEBUG - Loading config
> > config.py:99 - DEBUG - Loading file: /users/archie/.starcluster/config
> >>>> Using default cluster template: smallcluster
> > awsutils.py:46 - DEBUG - creating self._conn w/ connection_authenticator
> > kwargs = {'path': '/', 'region': None, 'port': None, 'is_secure': True}
> >>>> Validating cluster template settings...
> > awsutils.py:46 - DEBUG - creating self._conn w/ connection_authenticator
> > kwargs = {'path': '/', 'region': None, 'port': None, 'is_secure': True}
> > cluster.py:766 - ERROR - Invalid AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY
> > combination.
> > cli.py:243 - ERROR - settings for cluster template "smallcluster" are not
> > valid
> > AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY are in my config file and they
> > appear to be OK.
> >
> > I *can* start using ec2-run-instances on the commandline, e.g.
> > ec2-run-instances -K /mumble/privatekey.pem -C /mumble/certificate.pem
> > ami-4bf71e22 -z us-east-1d -k mumble-keypair -t c1.xlarge -n 1 2>&1
> > Has anyone seen this before?
> >
> >
> >
> > _______________________________________________
> > StarCluster mailing list
> > StarCluster_at_mit.edu
> > http://mailman.mit.edu/mailman/listinfo/starcluster
> >
> >
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 1 Feb 2011 12:39:06 -0500
> From: Justin Riley <justin.t.riley_at_gmail.com>
> Subject: Re: [StarCluster] Privilege level Amazon GPU AMI.
> To: starcluster_at_mit.edu
> Message-ID:
> <AANLkTikMGmp73A0JfvLj6DrtRfx_c1XktLyc2JOLL1mz_at_mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi,
>
> What is the exact error you get when trying to run OpenCL as a
> non-root user? Do you have the same issue using CUDA? I'll try to
> launch an instance and see what the deal is in the next couple days.
>
> ~Justin
>
> On Mon, Jan 17, 2011 at 7:37 AM, Progga <proggaprogga_at_gmail.com> wrote:
> > Hi,
> > I was trying to run some OpenCL code yesterday on an Amazon EC2 GPU
> > cluster using the StarCluster AMI. Everything works fine if I am
> > logged in as root, but it fails when I am a non-root user. I googled
> > and found that /dev/nvidia* needs to have 0666 permissions. StarCluster
> > already has these set, but I still have to run everything as
> > root. Is this a known issue? Thanks.
> >
> >
> > [0] http://forums.nvidia.com/index.php?showtopic=167152
> > [1] http://forums.nvidia.com/index.php?showtopic=156836
> > [2] http://forums.nvidia.com/index.php?showtopic=81071
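As a quick sanity check, the 0666 requirement mentioned above can be verified programmatically. This is an illustrative Python sketch only, demonstrated on a scratch file; on a real cluster node you would point it at /dev/nvidia0 and friends:

```python
import os
import stat
import tempfile

def world_readwrite(path):
    """Return True if `path` is readable and writable by owner, group,
    and others (the 0666 permission suggested for /dev/nvidia*)."""
    mode = os.stat(path).st_mode
    needed = (stat.S_IRUSR | stat.S_IWUSR |
              stat.S_IRGRP | stat.S_IWGRP |
              stat.S_IROTH | stat.S_IWOTH)
    return (mode & needed) == needed

# Demo on a temporary file standing in for a device node:
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
print(world_readwrite(path))  # True
os.chmod(path, 0o660)
print(world_readwrite(path))  # False
os.remove(path)
```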
> > _______________________________________________
> > StarCluster mailing list
> > StarCluster_at_mit.edu
> > http://mailman.mit.edu/mailman/listinfo/starcluster
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 1 Feb 2011 13:25:08 -0500
> From: Justin Riley <justin.t.riley_at_gmail.com>
> Subject: Re: [StarCluster] ERROR - "createvolume" failure due to
> missing AMI in eu-west zone
> To: "Kyeong Soo (Joseph) Kim" <kyeongsoo.kim_at_gmail.com>
> Cc: starcluster_at_mit.edu
> Message-ID:
> <AANLkTinXD6RcnBSW58=67XhzrPHju9j8KjnJE=ZT6M3O_at_mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi Kyeong,
>
> Sorry you've run into this issue. This is happening because the
> us-east-1 StarCluster AMI is hard-coded in the createvolume command in
> StarCluster 0.91.2 and cannot be modified via the config or an option
> to the createvolume command. The latest github code, however, allows
> you to pass an --image-id flag that allows you to switch to another
> AMI when creating new volumes. In general any Linux AMI should work in
> theory provided it has the standard sfdisk/mkfs.ext3 commands that are
> commonly available on most Linux distros. You can also change the mkfs
> command used by the createvolume command via the --mkfs-cmd flag if
> needed. If you're interested in testing the latest code, please note
> that you will need to shut down all clusters previously launched using
> the 0.91* release before using the development version. Also any
> volumes created can only be used with the development version. Any
> volumes created with the development version will be compatible with
> the upcoming 0.92 release.
>
> Hope that helps,
>
> ~Justin
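To make the region issue concrete: AMI IDs are region-scoped, so an ID registered in us-east-1 simply does not exist in eu-west-1. The lookup table and helper below are a hypothetical Python sketch, not StarCluster code; the IDs are the ones quoted in the report below, and the `override` parameter mimics an --image-id style flag as described above.

```python
# Hypothetical sketch: one way to avoid hard-coding a single,
# region-specific AMI ID for the volume-host instance.
VOLUME_HOST_AMIS = {
    "us-east-1": "ami-d1c42db8",
    "eu-west-1": "ami-a4d7fdd0",
}

def volume_host_ami(region, override=None):
    """Pick an AMI for creating volumes in `region`, honoring an
    explicit override (an --image-id style flag) when given."""
    if override is not None:
        return override
    try:
        return VOLUME_HOST_AMIS[region]
    except KeyError:
        raise ValueError("no known volume-host AMI for region %s" % region)

print(volume_host_ami("eu-west-1"))             # ami-a4d7fdd0
print(volume_host_ami("us-east-1", "ami-123"))  # ami-123
```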
>
>
> On Wed, Jan 12, 2011 at 11:41 AM, Kyeong Soo (Joseph) Kim
> <kyeongsoo.kim_at_gmail.com> wrote:
> > Dear All,
> > This is to report the said failure of "createvolume" in the eu-west zone.
> > Per the starcluster documentation, I have set environment variable and
> > configuration file as follows:
> > * Environment variable
> > EC2_URL=https://ec2.eu-west-1.amazonaws.com
> > * Configuration file
> > =============================================
> > [aws_info]
> > ...
> > AWS_REGION_NAME = eu-west-1
> > AWS_REGION_HOST = ec2.eu-west-1.amazonaws.com
> > ...
> >
> > [cluster smallcluster]
> > ...
> > # AMI for cluster nodes.
> > # US region
> > # - The base i386 StarCluster AMI is ami-d1c42db8
> > # - The base x86_64 StarCluster AMI is ami-a5c42dcc
> > # EU region
> > # - The base i386 StarCluster AMI is ami-a4d7fdd0
> > # - The base x86_64 StarCluster AMI is ami-38d5ff4c
> > NODE_IMAGE_ID = ami-a4d7fdd0
> > NODE_INSTANCE_TYPE = m1.small
> > ...
> > =============================================
> > Because the default AMIs suggested in the template (i.e., "ami-d1c42db8"
> > and "ami-a5c42dcc") are not available in the eu-west region, I changed it
> > to one available (based on the "listpublic" command). Then, I got the
> > following error when trying to create a volume:
> > =============================================
> > ~$ starcluster createvolume 30 eu-west-1a
> > StarCluster - (http://web.mit.edu/starcluster)
> > Software Tools for Academics and Researchers (STAR)
> > Please submit bug reports to starcluster_at_mit.edu
> > cli.py:1079 - ERROR - AMI ami-d1c42db8 does not exist
> > =============================================
> > It seems that the setting of NODE_IMAGE_ID has no impact on the AMI used
> for
> > volume creation.
> > With Regards,
> > Joseph
> > --
> > Kyeong Soo (Joseph) Kim, Ph.D.
> > Senior Lecturer in Networking
> > Room 112, Digital Technium
> > Multidisciplinary Nanotechnology Centre, College of Engineering
> > Swansea University, Singleton Park, Swansea SA2 8PP, Wales UK
> > TEL: +44 (0)1792 602024
> > EMAIL: k.s.kim_at_swansea.ac.uk
> > HOME: http://iat-hnrl.swan.ac.uk/ (group)
> >         http://iat-hnrl.swan.ac.uk/~kks/ (personal)
> >
> > _______________________________________________
> > StarCluster mailing list
> > StarCluster_at_mit.edu
> > http://mailman.mit.edu/mailman/listinfo/starcluster
> >
> >
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 2 Feb 2011 11:19:32 +0000
> From: "Kyeong Soo (Joseph) Kim" <kyeongsoo.kim_at_gmail.com>
> Subject: Re: [StarCluster] Starcluster and elastic load balancing
> To: Rajat Banerjee <rbanerj_at_fas.harvard.edu>
> Cc: starcluster_at_mit.edu, archier_at_gmail.com
> Message-ID:
> <AANLkTikYs2SKvHFxcggbDDs1dd48=w=RKeiPxtBDecvF_at_mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Rajat,
>
> Great thanks for the updates and the new graphs!
>
> You must be quite busy now working on the final copy.
> Of course, I do look forward to seeing either version soon.
>
> In the meanwhile, I will try the ELB myself and get back to the list if
> there is anything to report.
>
> Regards,
> Joseph
> --
> Kyeong Soo (Joseph) Kim, Ph.D.
> Senior Lecturer in Networking
> Room 112, Digital Technium
> Multidisciplinary Nanotechnology Centre, College of Engineering
> Swansea University, Singleton Park, Swansea SA2 8PP, Wales UK
> TEL: +44 (0)1792 602024
> EMAIL: k.s.kim_at_swansea.ac.uk
> HOME: http://iat-hnrl.swan.ac.uk/ (group)
> http://iat-hnrl.swan.ac.uk/~kks/ (personal)
>
>
> On Tue, Feb 1, 2011 at 4:23 AM, Rajat Banerjee
> <rbanerj_at_fas.harvard.edu> wrote:
>
> > Hello Joseph,
> > I really appreciate your words. I am pleased that the load balancer will
> > be put to good use.
> >
> > I have written my master's thesis on this topic and it is very near
> > completion. The final copy is due within the next two weeks, after which
> > I will post a PDF to this list for everyone to view. Due to some onerous
> > requirements, the document is over 50 pages. I hope to create an abridged
> > version soon.
> >
> > In the meantime, ELB is fully operational (just like the 2nd Death Star).
> > I hope to clean up the visualizer next week so others can use it. Justin
> > has already given me useful suggestions that I simply need to implement.
> >
> > Here are some graphs that I've been toying with for the thesis
> > experiment:
> > http://dl.dropbox.com/u/224960/sc/index.html
> >
> > Best,
> > Rajat
> >
> >
> > On Mon, Jan 31, 2011 at 4:09 AM, Kyeong Soo (Joseph) Kim <
> > kyeongsoo.kim_at_gmail.com> wrote:
> >
> >> Hello Rajat,
> >>
> >> I am very interested in your work on the elastic load balancing; I do
> >> remember that you posted some graphs on early results in the past and
> >> that you were working on your MSc thesis.
> >>
> >> In fact, this new feature will be critical for my current research
> >> requiring about 300~400 independent simulation runs, and I do highly
> >> appreciate your great contribution to StarCluster.
> >>
> >> By the way, I wonder whether you have published your work in any
> >> conferences/journals yet.
> >>
> >> Regards,
> >> Joseph
> >> --
> >> Kyeong Soo (Joseph) Kim, Ph.D.
> >> Senior Lecturer in Networking
> >> Room 112, Digital Technium
> >> Multidisciplinary Nanotechnology Centre, College of Engineering
> >> Swansea University, Singleton Park, Swansea SA2 8PP, Wales UK
> >> TEL: +44 (0)1792 602024
> >> EMAIL: k.s.kim_at_swansea.ac.uk
> >> HOME: http://iat-hnrl.swan.ac.uk/ (group)
> >> http://iat-hnrl.swan.ac.uk/~kks/ (personal)
> >>
> >>
> >>
> >> On Fri, Jan 28, 2011 at 6:31 PM, Rajat Banerjee
> >> <rbanerj_at_fas.harvard.edu> wrote:
> >>
> >>> Hi Archie,
> >>> Yes, there is ELB built into the latest releases of StarCluster. I
> >>> wrote it, so feel free to write me (+ the list) with any questions.
> >>>
> >>> The docs on
> >>> http://web.mit.edu/stardev/cluster/docs/index.html
> >>>
> >>> haven't been updated in a while. There is a documentation page on
> >>> starcluster in the code base, see
> >>> /starcluster/StarCluster/docs/sphinx/load_balancer.rst
> >>>
> >>> That doc should have all of the information you need, and is readable
> >>> in plain text.
> >>>
> >>> Typically, this is how I fire up the load balancer:
> >>> starcluster bal <cluster_tag> -m <MAX_NODES you want> -n <MIN_NODES you
> >>> want>
> >>>
> >>> It will poll the cluster every 60 seconds and make decisions. The
> >>> decisions are described in load_balancer.rst. There is a visualizer
> >>> which makes 6 graphs with matplotlib to show you how many nodes are
> >>> working, how many jobs are running, queued, avg load, etc, but the
> >>> visualizer still needs a little bit of work.
> >>>
> >>> Hope that helps, and feel free to send back questions.
> >>> Rajat Banerjee
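The exact add/remove criteria live in load_balancer.rst as described above; the toy Python sketch below (thresholds and function name invented for illustration, not StarCluster's actual logic) shows the general shape of a min/max-bounded decision made once per polling interval:

```python
def balance_decision(queued_jobs, nodes, min_nodes, max_nodes,
                     jobs_per_node=4):
    """Toy decision rule: grow while there is a queue backlog, shrink
    when idle, and always stay within [min_nodes, max_nodes]."""
    if queued_jobs > nodes * jobs_per_node and nodes < max_nodes:
        return "add"
    if queued_jobs == 0 and nodes > min_nodes:
        return "remove"
    return "hold"

# One decision per 60-second polling interval:
print(balance_decision(queued_jobs=50, nodes=2, min_nodes=1, max_nodes=8))  # add
print(balance_decision(queued_jobs=0,  nodes=4, min_nodes=1, max_nodes=8))  # remove
print(balance_decision(queued_jobs=3,  nodes=2, min_nodes=1, max_nodes=8))  # hold
```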
> >>>
> >>>
> >>> On Fri, Jan 28, 2011 at 12:29 PM, <starcluster-request_at_mit.edu> wrote:
> >>>
> >>>> Send StarCluster mailing list submissions to
> >>>> starcluster_at_mit.edu
> >>>>
> >>>> To subscribe or unsubscribe via the World Wide Web, visit
> >>>> http://mailman.mit.edu/mailman/listinfo/starcluster
> >>>> or, via email, send a message with subject or body 'help' to
> >>>> starcluster-request_at_mit.edu
> >>>>
> >>>> You can reach the person managing the list at
> >>>> starcluster-owner_at_mit.edu
> >>>>
> >>>> When replying, please edit your Subject line so it is more specific
> >>>> than "Re: Contents of StarCluster digest..."
> >>>>
> >>>> Today's Topics:
> >>>>
> >>>> 1. Starcluster and elastic load balancing (Archie Russell)
> >>>>
> >>>>
> >>>>
> >>>> ---------- Forwarded message ----------
> >>>> From: Archie Russell <archier_at_gmail.com>
> >>>> To: starcluster_at_mit.edu
> >>>> Date: Thu, 27 Jan 2011 11:40:00 -0800
> >>>> Subject: [StarCluster] Starcluster and elastic load balancing
> >>>>
> >>>> Hi,
> >>>>
> >>>> Online it says Starcluster has Elastic Load Balancing built into the
> >>>> latest code
> >>>> version at Github. How would I go about using this? How does
> >>>> it work, e.g.
> >>>> when does it fire up new nodes and when does it shut them down?
> >>>>
> >>>> Thanks,
> >>>> Archie
> >>>>
> >>>> _______________________________________________
> >>>> StarCluster mailing list
> >>>> StarCluster_at_mit.edu
> >>>>
> >>>> http://mailman.mit.edu/mailman/listinfo/starcluster
> >>>>
> >>>>
> >>>
> >>>
> >>> _______________________________________________
> >>> StarCluster mailing list
> >>> StarCluster_at_mit.edu
> >>> http://mailman.mit.edu/mailman/listinfo/starcluster
> >>>
> >>>
> >>
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> http://mailman.mit.edu/pipermail/starcluster/attachments/20110202/c754ce73/attachment.htm
>
> ------------------------------
>
> _______________________________________________
> StarCluster mailing list
> StarCluster_at_mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster
>
>
> End of StarCluster Digest, Vol 18, Issue 2
> ******************************************
>
Received on Thu Feb 03 2011 - 12:40:15 EST
This archive was generated by hypermail 2.3.0.