It sounds like you need to alter your custom VPC so that it allows access to
an internet gateway, and/or so that a public IP or Elastic IP is associated
with the instance that internet traffic is being routed through (like a NAT).
If you aren't using a NAT server to route traffic through, your master node
needs to be in a subnet whose route table allows access to the internet
gateway, and the instance needs to be assigned a public IP or Elastic IP
address. Amazon created the default VPCs so that the typical user could
launch instances in a VPC and use them without having to know all the
details of setting up subnets, route tables, NACLs, and the like.
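If you'd rather do this from the AWS CLI than the console, something roughly
like the following should work -- the igw-/vpc-/rtb-/subnet-/i-/eipalloc- IDs
are placeholders for your own resources, and it assumes the AWS CLI is
installed and configured:

    # create an internet gateway and attach it to your custom VPC
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
    # add a default route to the internet gateway in the subnet's route table
    aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
    # allocate an Elastic IP and associate it with the master node
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx

The console works just as well; this is only a sketch of the pieces involved
(gateway, route, public address).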
From the account where you deleted the default VPC, you still have access to
the default VPCs in the other regions, as long as you didn't delete those
also. You could look at their setup to help you fix your custom VPC to
better match the settings of the default VPC that you deleted. Regions are
geographic and don't share resources, so if you want a resource (AMI, EBS
volume) available in a different region (in the same account), locate or
take a recent snapshot of it. From the AWS console, go to EC2 Services,
select "Snapshots", find the snapshot you want to make available in a
different region, and select it. Then select "Actions" and select "Copy" --
from there you can select your destination region so that the snapshot will
be available in the other region.
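The CLI equivalent, if you prefer it, is roughly this (the region names and
the snap- ID are placeholders -- the command is run against the destination
region and pulls from the source region):

    aws ec2 copy-snapshot --region us-west-2 --source-region us-east-1 \
        --source-snapshot-id snap-xxxxxxxx --description "copy from us-east-1"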
As for making an AMI or snapshot available to a different account: log in to
the AWS console, go to EC2 Services, select "Snapshots", select the snapshot
you want to make available to the other account, then select "Actions",
select "Modify Snapshot Permissions", add permission for your other AWS
account, and click "Save". For AMIs it's a similar process: select "AMIs",
select the AMI you want to make available, and select "Modify Image
Permissions", etc.
It is a little unclear to me exactly what issue you are having from what you
described in your email; there are many ways things could have gone wrong.
The first thing is to check from the AWS console that the EBS volume you
attached to your master node is actually fully attached to the instance --
specifically that its State is "in-use" and its Attachment Information is
"attached".
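You can also check this from the command line if that's easier (assuming the
AWS CLI is set up and vol-xxxxxxxx is your volume's ID):

    aws ec2 describe-volumes --volume-ids vol-xxxxxxxx

and look at the "State" and "Attachments" fields in the output.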
The instance you want to attach to AND your EBS volume need to be in the
same availability zone for you to make the attachment. After you are certain
it is attached via the AWS console, log in to the master node and run the
command "lsblk"; if the volume is truly attached you should see it in the
list (if you attached the volume as /dev/sdp, you should see it as
/dev/xvdp). Next you need to mount the volume with the command "sudo mount
<device name> <mount point>", where the device name is the device as it
appears on the instance (e.g. /dev/xvdp) and the mount point is the
directory you are mounting it at. Let's say I attached my volume as /dev/sdp
and my mount point is /mntPt. I would then issue the command "sudo mount
/dev/xvdp /mntPt" to mount the device. For my example, if you do
"ls -lh /mntPt" you should see the contents of the mounted volume.
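Putting that together, the whole check-and-mount sequence on the master node
looks roughly like this (the device name and mount point are just my example
values from above -- substitute your own):

    lsblk                          # a volume attached as /dev/sdp shows up as xvdp
    sudo mkdir -p /mntPt           # create the mount point if it doesn't exist yet
    # sudo mkfs -t ext4 /dev/xvdp  # ONLY for a brand-new empty volume (erases data); ext4 is just one choice
    sudo mount /dev/xvdp /mntPt    # mount the device at the mount point
    ls -lh /mntPt                  # you should see the volume's contents here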
If you want to make the EBS volume available via NFS to all nodes in your
cluster, there is more that needs to be done. The easiest way to do this is
to use the StarCluster config and configure the EBS volume to be used with
the StarCluster cluster (see
http://star.mit.edu/cluster/docs/latest/manual/configuration.html#amazon-ebs-volumes
for details; there is a config sketch below). Other issues you might have:
if you created the volume from scratch, you need to format the volume with a
file system prior to mounting it. Also, on occasion I have had issues
mounting SSD volumes if there is a partition involved (e.g. use /dev/xvdp1
instead of /dev/xvdp as the device name).
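For the StarCluster route, the relevant config entries look roughly like
this, going by the documentation linked above (the vol- ID, section names,
and mount path are placeholders for your own values):

    [volume mydata]
    VOLUME_ID = vol-xxxxxxxx
    MOUNT_PATH = /mntPt

    [cluster mycluster]
    ...
    VOLUMES = mydata

With that in place, StarCluster should attach and mount the volume on the
master node and NFS-share it to the rest of the cluster for you.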
Good Luck,
-Jennifer
Received on Mon Feb 23 2015 - 18:18:10 EST