StarCluster - Mailing List Archive

crash report

From: Cedar McKay <no email>
Date: Fri, 9 May 2014 10:14:32 -0700

I'm not really sure what happened here. I changed a working config by adding one more user to the createusers plugin.

I changed this, which worked:
usernames = jaci, clara

to this, which didn't work:
usernames = jaci, clara, gabrielle
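
For reference, a createusers plugin section of roughly this shape is assumed here (setup_class points at StarCluster's bundled users plugin; only the usernames line is quoted from the actual config):

```ini
[plugin createusers]
setup_class = starcluster.plugins.users.CreateUsers
usernames = jaci, clara, gabrielle
```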


Changing it back fixed the problem. Here is the crash report:


Thanks!

Cedar






---------- SYSTEM INFO ----------
StarCluster: 0.95.5
Python: 2.7.6 (default, Nov 12 2013, 13:26:39) [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))]
Platform: Darwin-12.5.0-x86_64-i386-64bit
boto: 2.27.0
paramiko: 1.13.0
Crypto: 2.6.1

---------- CRASH DETAILS ----------
Command: starcluster start -c micro_v4 v4b

2014-05-09 09:46:05,051 PID: 9082 sshutils.py:112 - DEBUG - connecting to host ec2-54-187-202-106.us-west-2.compute.amazonaws.com on port 22 as user root
2014-05-09 09:46:05,780 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:06,116 PID: 9082 sshutils.py:204 - DEBUG - creating sftp connection
2014-05-09 09:46:06,781 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:07,782 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:08,784 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:09,786 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:10,787 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:11,789 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:12,791 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:13,792 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:14,794 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:15,796 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:16,021 PID: 9082 sshutils.py:112 - DEBUG - connecting to host ec2-54-187-203-222.us-west-2.compute.amazonaws.com on port 22 as user root
2014-05-09 09:46:16,797 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:17,799 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:18,800 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:19,801 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:20,803 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:21,804 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:22,806 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:23,808 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:24,809 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:25,810 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:26,812 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:27,813 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:28,815 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:29,816 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:30,818 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:31,109 PID: 9082 sshutils.py:112 - DEBUG - connecting to host ec2-54-187-203-222.us-west-2.compute.amazonaws.com on port 22 as user root
2014-05-09 09:46:31,819 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:31,893 PID: 9082 sshutils.py:204 - DEBUG - creating sftp connection
2014-05-09 09:46:32,821 PID: 9082 utils.py:118 - INFO - Waiting for cluster to come up took 1.263 mins
2014-05-09 09:46:32,821 PID: 9082 cluster.py:1668 - INFO - The master node is ec2-54-187-203-222.us-west-2.compute.amazonaws.com
2014-05-09 09:46:32,821 PID: 9082 cluster.py:1669 - INFO - Configuring cluster...
2014-05-09 09:46:32,902 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {u'i-54b6e95c': <Node: v4b-master (i-54b6e95c)>, u'i-55b6e95d': <Node: v4b-node001 (i-55b6e95d)>}
2014-05-09 09:46:32,902 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-54b6e95c in self._nodes
2014-05-09 09:46:32,902 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-55b6e95d in self._nodes
2014-05-09 09:46:32,903 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:46:32,903 PID: 9082 cluster.py:1714 - INFO - Running plugin starcluster.clustersetup.DefaultClusterSetup
2014-05-09 09:46:32,903 PID: 9082 clustersetup.py:121 - INFO - Configuring hostnames...
2014-05-09 09:46:32,908 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:32,953 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && hostname -F /etc/hostname
2014-05-09 09:46:32,956 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && hostname -F /etc/hostname
2014-05-09 09:46:32,988 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && hostname -F /etc/hostname':

2014-05-09 09:46:33,006 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && hostname -F /etc/hostname':

2014-05-09 09:46:33,928 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && fdisk -l 2>/dev/null
2014-05-09 09:46:33,959 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && fdisk -l 2>/dev/null':

Disk /dev/xvda1: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

2014-05-09 09:46:34,028 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && cat /proc/partitions
2014-05-09 09:46:34,054 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && cat /proc/partitions':
major minor #blocks name

202 1 8388608 xvda1
2014-05-09 09:46:34,119 PID: 9082 clustersetup.py:192 - INFO - Creating cluster user: cedar (uid: 1001, gid: 1001)
2014-05-09 09:46:34,120 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:34,160 PID: 9082 clustersetup.py:207 - DEBUG - user cedar exists on node v4b-master, no action
2014-05-09 09:46:34,163 PID: 9082 clustersetup.py:207 - DEBUG - user cedar exists on node v4b-node001, no action
2014-05-09 09:46:35,122 PID: 9082 clustersetup.py:238 - INFO - Configuring scratch space for user(s): cedar
2014-05-09 09:46:35,124 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:35,153 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && chown -R cedar:cedar /mnt/cedar
2014-05-09 09:46:35,153 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && chown -R cedar:cedar /mnt/cedar
2014-05-09 09:46:35,184 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && chown -R cedar:cedar /mnt/cedar':

2014-05-09 09:46:35,196 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && chown -R cedar:cedar /mnt/cedar':

2014-05-09 09:46:36,126 PID: 9082 clustersetup.py:247 - INFO - Configuring /etc/hosts on each node
2014-05-09 09:46:36,127 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:36,166 PID: 9082 sshutils.py:307 - DEBUG - new /etc/hosts after removing regex (v4b-master|v4b-node001) matches:
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
172.31.39.105 v3c-master

2014-05-09 09:46:36,167 PID: 9082 sshutils.py:307 - DEBUG - new /etc/hosts after removing regex (v4b-master|v4b-node001) matches:
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
172.31.39.105 v3c-master

2014-05-09 09:46:37,129 PID: 9082 node.py:703 - INFO - Starting NFS server on v4b-master
2014-05-09 09:46:37,147 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && /etc/init.d/portmap start
2014-05-09 09:46:37,207 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && /etc/init.d/portmap start':
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service portmap start

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start portmap
2014-05-09 09:46:37,275 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs/
2014-05-09 09:46:37,303 PID: 9082 sshutils.py:582 - DEBUG - (ignored) remote command 'source /etc/profile && mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs/' failed with status 32:
mount: mount point /var/lib/nfs/rpc_pipefs/ does not exist
2014-05-09 09:46:37,322 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mkdir -p /etc/exports.d
2014-05-09 09:46:37,386 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && mkdir -p /etc/exports.d':

2014-05-09 09:46:37,404 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mkdir -p /dummy_export_for_broken_init_script
2014-05-09 09:46:37,470 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && mkdir -p /dummy_export_for_broken_init_script':

2014-05-09 09:46:37,563 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && /etc/init.d/nfs start
2014-05-09 09:46:38,478 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && /etc/init.d/nfs start':
* Exporting directories for NFS kernel daemon...
...done.
* Starting NFS kernel daemon
...done.
2014-05-09 09:46:38,547 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && rm -f /etc/exports.d/dummy.exports
2014-05-09 09:46:38,574 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && rm -f /etc/exports.d/dummy.exports':

2014-05-09 09:46:38,593 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && rm -rf /dummy_export_for_broken_init_script
2014-05-09 09:46:38,657 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && rm -rf /dummy_export_for_broken_init_script':

2014-05-09 09:46:38,726 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && exportfs -fra
2014-05-09 09:46:38,753 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && exportfs -fra':

2014-05-09 09:46:38,754 PID: 9082 node.py:667 - DEBUG - Cleaning up potentially stale NFS entries
2014-05-09 09:46:38,837 PID: 9082 sshutils.py:307 - DEBUG - new /etc/exports after removing regex (/home v4b-node001) matches:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#

2014-05-09 09:46:38,970 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && exportfs -fra
2014-05-09 09:46:38,999 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && exportfs -fra':

2014-05-09 09:46:38,999 PID: 9082 node.py:670 - INFO - Configuring NFS exports path(s):
/home
2014-05-09 09:46:39,136 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && exportfs -fra
2014-05-09 09:46:39,165 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && exportfs -fra':

2014-05-09 09:46:39,165 PID: 9082 clustersetup.py:347 - INFO - Mounting all NFS export path(s) on 1 worker node(s)
2014-05-09 09:46:39,166 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 1
2014-05-09 09:46:39,183 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && /etc/init.d/portmap start
2014-05-09 09:46:39,245 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && /etc/init.d/portmap start':
Rather than invoking init scripts through /etc/init.d, use the service(8)
utility, e.g. service portmap start

Since the script you are attempting to invoke has been converted to an
Upstart job, you may also use the start(8) utility, e.g. start portmap
2014-05-09 09:46:39,263 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mount -t devpts none /dev/pts
2014-05-09 09:46:39,332 PID: 9082 sshutils.py:582 - DEBUG - (ignored) remote command 'source /etc/profile && mount -t devpts none /dev/pts' failed with status 32:
mount: none already mounted or /dev/pts busy
mount: according to mtab, devpts is already mounted on /dev/pts
2014-05-09 09:46:39,350 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mount
2014-05-09 09:46:39,420 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && mount':
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
s3fs on /mnt/rocap_databases type fuse.s3fs (rw,nosuid,nodev,allow_other)
2014-05-09 09:46:39,509 PID: 9082 sshutils.py:307 - DEBUG - new /etc/fstab after removing regex ( /home ) matches:
LABEL=cloudimg-rootfs / ext4 defaults 0 0

2014-05-09 09:46:39,609 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && mount /home
2014-05-09 09:46:39,673 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && mount /home':

2014-05-09 09:46:40,168 PID: 9082 utils.py:118 - INFO - Setting up NFS took 0.051 mins
2014-05-09 09:46:40,168 PID: 9082 clustersetup.py:259 - INFO - Configuring passwordless ssh for root
2014-05-09 09:46:40,235 PID: 9082 node.py:509 - DEBUG - Using existing key: /root/.ssh/id_rsa
2014-05-09 09:46:40,634 PID: 9082 sshutils.py:307 - DEBUG - new /root/.ssh/known_hosts after removing regex (v4b-node001|ip-172-31-11-42.us-west-2.compute.internal|ip-172-31-11-42|ec2-54-187-202-106.us-west-2.compute.amazonaws.com|v4b-master|ip-172-31-11-43.us-west-2.compute.internal|ip-172-31-11-43|ec2-54-187-203-222.us-west-2.compute.amazonaws.com) matches:
ec2-54-186-247-159.us-west-2.compute.amazonaws.com,54.186.247.159 ssh-rsa XXX
ip-172-31-39-105,172.31.39.105 ssh-rsa XXX
v3c-master,172.31.39.105 ssh-rsa xxx
ec2-54-187-195-189.us-west-2.compute.amazonaws.com,54.187.195.189 ssh-rsa xxx
ip-172-31-39-105.us-west-2.compute.internal,172.31.39.105 ssh-rsa xxx

2014-05-09 09:46:41,227 PID: 9082 clustersetup.py:267 - INFO - Configuring passwordless ssh for cedar
2014-05-09 09:46:42,059 PID: 9082 node.py:538 - DEBUG - adding auth_key_contents
2014-05-09 09:46:42,069 PID: 9082 node.py:546 - DEBUG - adding conn_pubkey_contents
2014-05-09 09:46:42,349 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {u'i-54b6e95c': <Node: v4b-master (i-54b6e95c)>, u'i-55b6e95d': <Node: v4b-node001 (i-55b6e95d)>}
2014-05-09 09:46:42,349 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-54b6e95c in self._nodes
2014-05-09 09:46:42,349 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-55b6e95d in self._nodes
2014-05-09 09:46:42,349 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:46:42,349 PID: 9082 cluster.py:1714 - INFO - Running plugin createusers
2014-05-09 09:46:42,349 PID: 9082 users.py:68 - INFO - Creating 3 cluster users
2014-05-09 09:46:42,403 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:42,421 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && echo -n 'jaci:U09aYKIX:1002:1002:Cluster user account jaci:/home/jaci:/bin/bash
clara:gNkecySh:1003:1003:Cluster user account clara:/home/clara:/bin/bash
' | newusers
2014-05-09 09:46:42,421 PID: 9082 sshutils.py:561 - DEBUG - executing remote command: source /etc/profile && echo -n 'jaci:U09aYKIX:1002:1002:Cluster user account jaci:/home/jaci:/bin/bash
clara:gNkecySh:1003:1003:Cluster user account clara:/home/clara:/bin/bash
' | newusers
2014-05-09 09:46:42,511 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && echo -n 'jaci:U09aYKIX:1002:1002:Cluster user account jaci:/home/jaci:/bin/bash
clara:gNkecySh:1003:1003:Cluster user account clara:/home/clara:/bin/bash
' | newusers':

2014-05-09 09:46:42,571 PID: 9082 sshutils.py:585 - DEBUG - output of 'source /etc/profile && echo -n 'jaci:U09aYKIX:1002:1002:Cluster user account jaci:/home/jaci:/bin/bash
clara:gNkecySh:1003:1003:Cluster user account clara:/home/clara:/bin/bash
' | newusers':

2014-05-09 09:46:43,405 PID: 9082 users.py:77 - INFO - Configuring passwordless ssh for 3 cluster users
2014-05-09 09:46:45,353 PID: 9082 node.py:538 - DEBUG - adding auth_key_contents
2014-05-09 09:46:45,362 PID: 9082 node.py:546 - DEBUG - adding conn_pubkey_contents
2014-05-09 09:46:47,889 PID: 9082 node.py:538 - DEBUG - adding auth_key_contents
2014-05-09 09:46:47,899 PID: 9082 node.py:546 - DEBUG - adding conn_pubkey_contents
2014-05-09 09:46:48,159 PID: 9082 cluster.py:1724 - ERROR - Error occured while running plugin 'createusers':
2014-05-09 09:46:48,159 PID: 9082 cli.py:307 - ERROR - Unhandled exception occured
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cli.py", line 274, in main
    sc.execute(args)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/commands/start.py", line 244, in execute
    validate_running=validate_running)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1628, in start
    return self._start(create=create, create_only=create_only)
  File "<string>", line 2, in _start
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/utils.py", line 112, in wrap_f
    res = func(*arg, **kargs)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1651, in _start
    self.setup_cluster()
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1660, in setup_cluster
    self._setup_cluster()
  File "<string>", line 2, in _setup_cluster
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/utils.py", line 112, in wrap_f
    res = func(*arg, **kargs)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1672, in _setup_cluster
    self.run_plugins()
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1690, in run_plugins
    self.run_plugin(plug, method_name=method_name, node=node)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/cluster.py", line 1715, in run_plugin
    func(*args)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/plugins/users.py", line 82, in run
    auth_conn_key=True)
  File "/Library/Python/2.7/site-packages/StarCluster-0.95.5-py2.7.egg/starcluster/node.py", line 499, in generate_key_for_user
    home_folder = user.pw_dir
AttributeError: 'NoneType' object has no attribute 'pw_dir'
2014-05-09 09:45:13,767 PID: 9082 config.py:567 - DEBUG - Loading config
2014-05-09 09:45:13,768 PID: 9082 config.py:138 - DEBUG - Loading file: /Users/cedar/.starcluster/config
2014-05-09 09:45:13,776 PID: 9082 awsutils.py:75 - DEBUG - creating self._conn w/ connection_authenticator kwargs = {'proxy_user': None, 'proxy_pass': None, 'proxy_port': None, 'proxy': None, 'is_secure': True, 'path': '/', 'region': RegionInfo:us-west-2, 'validate_certs': True, 'port': None}
2014-05-09 09:45:13,908 PID: 9082 cluster.py:1803 - INFO - Validating cluster template settings...
2014-05-09 09:45:14,128 PID: 9082 sshutils.py:859 - DEBUG - rsa private key fingerprint (/Users/cedar/Documents/Amazon_AWS/cmckay-key-pair-oregon.pem): xxx
2014-05-09 09:45:14,203 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {}
2014-05-09 09:45:14,203 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = []
2014-05-09 09:45:14,347 PID: 9082 cluster.py:1130 - DEBUG - Launch map: v4b-node001 (ami: ami-5d75036d, type: t1.micro)...
2014-05-09 09:45:14,353 PID: 9082 cluster.py:911 - DEBUG - Userdata size in KB: 0.72
2014-05-09 09:45:14,353 PID: 9082 cluster.py:1821 - INFO - Cluster template settings are valid
2014-05-09 09:45:14,353 PID: 9082 cluster.py:1641 - INFO - Starting cluster...
2014-05-09 09:45:14,353 PID: 9082 cluster.py:1157 - INFO - Launching a 2-node cluster...
2014-05-09 09:45:14,353 PID: 9082 cluster.py:1130 - DEBUG - Launch map: v4b-node001 (ami: ami-5d75036d, type: t1.micro)...
2014-05-09 09:45:14,354 PID: 9082 cluster.py:1182 - DEBUG - Launching v4b-master (ami: ami-5d75036d, type: t1.micro)
2014-05-09 09:45:14,354 PID: 9082 cluster.py:1182 - DEBUG - Launching v4b-node001 (ami: ami-5d75036d, type: t1.micro)
2014-05-09 09:45:14,424 PID: 9082 awsutils.py:295 - INFO - Creating security group _at_sc-v4b...
2014-05-09 09:45:15,872 PID: 9082 cluster.py:672 - INFO - Opening tcp port range 80-80 for CIDR 0.0.0.0/0
2014-05-09 09:45:16,034 PID: 9082 cluster.py:911 - DEBUG - Userdata size in KB: 0.72
2014-05-09 09:45:16,136 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {}
2014-05-09 09:45:16,136 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = []
2014-05-09 09:45:16,196 PID: 9082 awsutils.py:495 - DEBUG - Forcing delete_on_termination for AMI: ami-5d75036d
2014-05-09 09:45:16,851 PID: 9082 cluster.py:968 - INFO - Reservation:r-638bc56b
2014-05-09 09:45:16,852 PID: 9082 awsutils.py:553 - INFO - Waiting for instances to propagate...
2014-05-09 09:45:17,068 PID: 9082 cluster.py:1442 - INFO - Waiting for cluster to come up... (updating every 15s)
2014-05-09 09:45:17,171 PID: 9082 cluster.py:1399 - INFO - Waiting for all nodes to be in a 'running' state...
2014-05-09 09:45:17,250 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {}
2014-05-09 09:45:17,250 PID: 9082 cluster.py:767 - DEBUG - adding node i-54b6e95c to self._nodes list
2014-05-09 09:45:17,815 PID: 9082 cluster.py:767 - DEBUG - adding node i-55b6e95d to self._nodes list
2014-05-09 09:45:18,274 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:45:33,365 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {u'i-54b6e95c': <Node: v4b-master (i-54b6e95c)>, u'i-55b6e95d': <Node: v4b-node001 (i-55b6e95d)>}
2014-05-09 09:45:33,365 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-54b6e95c in self._nodes
2014-05-09 09:45:33,365 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-55b6e95d in self._nodes
2014-05-09 09:45:33,365 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:45:48,622 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {u'i-54b6e95c': <Node: v4b-master (i-54b6e95c)>, u'i-55b6e95d': <Node: v4b-node001 (i-55b6e95d)>}
2014-05-09 09:45:48,622 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-54b6e95c in self._nodes
2014-05-09 09:45:48,622 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-55b6e95d in self._nodes
2014-05-09 09:45:48,622 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:45:48,622 PID: 9082 cluster.py:1427 - INFO - Waiting for SSH to come up on all nodes...
2014-05-09 09:45:48,705 PID: 9082 cluster.py:759 - DEBUG - existing nodes: {u'i-54b6e95c': <Node: v4b-master (i-54b6e95c)>, u'i-55b6e95d': <Node: v4b-node001 (i-55b6e95d)>}
2014-05-09 09:45:48,705 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-54b6e95c in self._nodes
2014-05-09 09:45:48,705 PID: 9082 cluster.py:762 - DEBUG - updating existing node i-55b6e95d in self._nodes
2014-05-09 09:45:48,705 PID: 9082 cluster.py:775 - DEBUG - returning self._nodes = [<Node: v4b-master (i-54b6e95c)>, <Node: v4b-node001 (i-55b6e95d)>]
2014-05-09 09:45:48,709 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:48,788 PID: 9082 sshutils.py:87 - DEBUG - loading private key /Users/cedar/Documents/Amazon_AWS/cmckay-key-pair-oregon.pem
2014-05-09 09:45:48,788 PID: 9082 sshutils.py:94 - DEBUG - specified key does not end in either rsa or dsa, trying both
2014-05-09 09:45:48,789 PID: 9082 sshutils.py:185 - DEBUG - Using private key /Users/cedar/Documents/Amazon_AWS/cmckay-key-pair-oregon.pem (RSA)
2014-05-09 09:45:48,789 PID: 9082 sshutils.py:112 - DEBUG - connecting to host ec2-54-187-203-222.us-west-2.compute.amazonaws.com on port 22 as user root
2014-05-09 09:45:48,877 PID: 9082 sshutils.py:87 - DEBUG - loading private key /Users/cedar/Documents/Amazon_AWS/cmckay-key-pair-oregon.pem
2014-05-09 09:45:48,877 PID: 9082 sshutils.py:94 - DEBUG - specified key does not end in either rsa or dsa, trying both
2014-05-09 09:45:48,878 PID: 9082 sshutils.py:185 - DEBUG - Using private key /Users/cedar/Documents/Amazon_AWS/cmckay-key-pair-oregon.pem (RSA)
2014-05-09 09:45:48,878 PID: 9082 sshutils.py:112 - DEBUG - connecting to host ec2-54-187-202-106.us-west-2.compute.amazonaws.com on port 22 as user root
2014-05-09 09:45:49,712 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:50,714 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:51,715 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:52,717 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:53,719 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:54,719 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:55,720 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:56,722 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:57,723 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:58,725 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:45:59,726 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:00,727 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:01,729 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:02,730 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:03,732 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
2014-05-09 09:46:04,733 PID: 9082 threadpool.py:168 - DEBUG - unfinished_tasks = 2
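
A note on the `AttributeError` at the end of the traceback: `node.py:499` reads `user.pw_dir` without checking that the user lookup succeeded, so a missing account surfaces as `'NoneType' object has no attribute 'pw_dir'`. A minimal sketch of that failure mode (`PwEntry` and `lookup_user` are illustrative stand-ins, not StarCluster code):

```python
# Sketch of the traceback's failure: a passwd-style lookup that yields
# None for a missing account, followed by the same unguarded attribute
# access as node.py:499. PwEntry and lookup_user are hypothetical.

class PwEntry(object):
    def __init__(self, pw_dir):
        self.pw_dir = pw_dir

def lookup_user(passwd_entries, username):
    # Returns None when the account was never created on the node.
    return passwd_entries.get(username)

# Only two of the three configured users exist (compare the `newusers`
# input quoted in the log above, which lists only jaci and clara).
entries = {"jaci": PwEntry("/home/jaci"), "clara": PwEntry("/home/clara")}

user = lookup_user(entries, "gabrielle")
try:
    home_folder = user.pw_dir  # user is None here, so this raises
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'pw_dir'
```

Consistent with this, the log says "Creating 3 cluster users" while the `newusers` command it runs only contains jaci and clara, so the third account is absent when the passwordless-SSH step later looks it up.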
Received on Fri May 09 2014 - 13:14:38 EDT
This archive was generated by hypermail 2.3.0.
