Error Restarting Stopped Cluster
Hi All,
I see no prior posts in the archive about this failure mode.
I received the following error in my debug.log while trying to restart
(start -x) a stopped cluster:
2013-03-11 10:55:19,897 PID: 18690 cluster.py:1539 - INFO - Validating
cluster template settings...
2013-03-11 10:55:21,536 PID: 18690 cluster.py:926 - DEBUG - Launch map:
node001 (ami: ami-92fe62fb, type: t1.micro)...
2013-03-11 10:55:21,536 PID: 18690 cluster.py:1555 - INFO - Cluster
template settings are valid
2013-03-11 10:55:21,536 PID: 18690 cluster.py:1427 - INFO - Starting
cluster...
2013-03-11 10:55:21,824 PID: 18690 cluster.py:664 - DEBUG - existing nodes:
{u'i-3b7c9a49': <Node: master (i-3b7c9a49)>, u'i-397c9a4b': <Node: node001
(i-397c9a4b)>}
2013-03-11 10:55:21,824 PID: 18690 cluster.py:667 - DEBUG - updating
existing node i-3b7c9a49 in self._nodes
2013-03-11 10:55:21,824 PID: 18690 cluster.py:667 - DEBUG - updating
existing node i-397c9a4b in self._nodes
2013-03-11 10:55:21,825 PID: 18690 cluster.py:680 - DEBUG - returning
self._nodes = [<Node: master (i-3b7c9a49)>, <Node: node001 (i-397c9a4b)>]
2013-03-11 10:55:21,825 PID: 18690 cluster.py:1433 - INFO - Starting
stopped node: master
2013-03-11 10:55:22,461 PID: 18690 cli.py:257 - ERROR -
InvalidParameterValue: Invalid value 'i-3b7c9a49' for instanceId. Instance
does not have a volume attached at root (/dev/sda1)
The preceding stop command had indeed reported that it was detaching *all*
volumes from all nodes.
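The error suggests EC2 refuses to start an instance whose root device
(/dev/sda1) no longer has a volume attached, which would be the case if
the stop step detached the root EBS volume along with the data volumes.
As a minimal sketch (not StarCluster's actual code), one could check each
instance's block device mapping before attempting start -x; the
has_root_volume helper and the simplified mapping structure below are
hypothetical, loosely mirroring what EC2's DescribeInstances reports:

```python
def has_root_volume(block_device_mapping, root_device_name="/dev/sda1"):
    """Return True if a volume is still attached at the root device.

    block_device_mapping is a simplified stand-in for the structure EC2
    reports (device name -> attachment info); the exact shape here is an
    assumption for illustration, not the real API response.
    """
    return root_device_name in block_device_mapping


# Example: a stopped instance whose root volume was detached,
# reproducing the "no volume attached at root (/dev/sda1)" condition.
stopped_instance = {
    "instance_id": "i-3b7c9a49",
    "block_device_mapping": {},  # root /dev/sda1 detached by the stop step
}

if not has_root_volume(stopped_instance["block_device_mapping"]):
    print("cannot start %s: no volume at /dev/sda1"
          % stopped_instance["instance_id"])
```

If the check fails, the root volume would have to be reattached at
/dev/sda1 (or the node rebuilt) before the instance can be started again.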
Thanks for any feedback from anyone who has overcome this.
Best,
Lyn
Received on Mon Mar 11 2013 - 14:07:43 EDT