Thanks Jennifer and Jin!
You are definitely right. The xvdaa device is one of the two SSDs and it is mounted at /mnt. The other SSD (/dev/xvdab) appears in the list but is not mounted anywhere. I am wondering if there is a way to combine the xvdab disk with /mnt to make a 37*2 GB disk, as Jin mentioned. Really appreciate your help.
________________________________
From: Jennifer Staab <jstaab_at_cs.unc.edu>
To: Jian Feng <freedafeng_at_yahoo.com>
Cc: "starcluster_at_mit.edu" <starcluster_at_mit.edu>
Sent: Tuesday, December 9, 2014 8:24 AM
Subject: Re: [StarCluster] need scratch space
Run the "lsblk" command and you will see the instance/ephemeral storage. In my experience the instance/ephemeral storage will be smaller than advertised (for me a 40 GB disk typically shows up as 37.5/37 GB), and usually one disk is automatically mounted as "/mnt". In your case "/dev/xvdaa" is likely one of the instance/ephemeral disks. If you don't see the other one with "lsblk", it is likely that when you created the EC2 instance you forgot to indicate you wanted both instance storage disks when you added storage.
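A quick way to check is sketched below (device names vary by instance type and AMI; xvdaa/xvdab are taken from the df output later in this thread):

```shell
# List every block device the kernel sees, mounted or not;
# ephemeral/instance-store disks appear here even when unmounted.
lsblk

# Compare against what is actually mounted.
df -h
```

If the second ephemeral disk never shows up in lsblk, it generally has to be mapped at launch time (block device mappings naming ephemeral0 and ephemeral1); instance-store disks cannot be attached to a running instance afterwards.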
Good Luck.
- Jennifer
On 12/9/14 9:48 AM, Jin Yu wrote:
You can use these two 40G SSDs to make a RAID0 volume of 80G, and then mount it at /scratch.
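A minimal sketch of this approach, assuming the two ephemeral disks are /dev/xvdaa and /dev/xvdab as in the df output below (check lsblk first; creating the array destroys any data on both disks):

```shell
# Unmount the ephemeral disk that was auto-mounted on /mnt
umount /mnt

# Stripe the two SSDs into a single RAID0 device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdaa /dev/xvdab

# Put a filesystem on it and mount it as scratch space
mkfs.ext4 /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch
```

Keep in mind that ephemeral storage does not survive an instance stop or terminate, so treat /scratch as truly temporary.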
>
>-Jin
>
>On Mon, Dec 8, 2014 at 1:55 PM, Jian Feng <freedafeng_at_yahoo.com> wrote:
>
>Dear starcluster community,
>>
>>I created an ec2 cluster using m3.xlarge instance (2*40GB SSD). I did not see any scratch space or /scratch folder at all. Here is the disk space layout.
>>
>>root_at_node001:~# df -h
>>Filesystem        Size  Used Avail Use% Mounted on
>>/dev/xvda1         20G  5.6G   14G  30% /
>>udev              7.4G  8.0K  7.4G   1% /dev
>>tmpfs             3.0G  176K  3.0G   1% /run
>>none              5.0M     0  5.0M   0% /run/lock
>>none              7.4G     0  7.4G   0% /run/shm
>>/dev/xvdaa         37G  177M   35G   1% /mnt
>>master:/home       20G  5.6G   14G  30% /home
>>master:/opt/sge6   20G  5.6G   14G  30% /opt/sge6
>>
>>In my application, I need a scratch folder on each node with about 50 GB of space. Is there a way to do that? I don't really need the /home or /opt/sge6 mounts, and I don't run MPI applications.
>>
>>Maybe I should recreate an AMI?
>>
>>Thanks!
>>
>>_______________________________________________
>>StarCluster mailing list
>>StarCluster_at_mit.edu
>>http://mailman.mit.edu/mailman/listinfo/starcluster
>>
Received on Tue Dec 09 2014 - 12:42:55 EST
This archive was generated by hypermail 2.3.0.