StarCluster - Mailing List Archive

Re: Using a hi-I/O instance as the master node in a StarCluster Cluster

From: Ron Chen <no email>
Date: Mon, 7 Jan 2013 23:27:35 -0800 (PST)

Let us know how it goes. As AWS adds more instance types with SSDs and/or high I/O performance, I am very interested in how to provision those SSDs as well.

For example, I was wondering how the disks of the hs1.8xlarge are organized. It has 24 HDDs, so if we run an S3-backed AMI (I couldn't find anything saying we can't), is one of the disks used as the root file system, leaving only 23 HDDs for software RAID?

Lastly, AWS will have the cr1.8xlarge, but the docs are not clear on whether the SSD-based instances (hi1.4xlarge and cr1.8xlarge) have an HDD for S3-backed storage (i.e., for the root FS). But since the SSDs are not mounted by default, yet these instances support S3-backed AMIs, a local HDD must be needed to hold the root file system.
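For the software-RAID idea above, striping the ephemeral disks with mdadm is the usual approach on Linux. A minimal sketch follows; the device names are hypothetical placeholders (run "fdisk -l" on the instance to see what the kernel actually exposes), and since the real commands need root and real devices, this script only prints them:

```shell
#!/bin/sh
# Sketch only: stripe ephemeral disks into a single RAID0 array with mdadm.
# /dev/xvdb../dev/xvdd are placeholder names, not taken from the thread.
DISKS="/dev/xvdb /dev/xvdc /dev/xvdd"
NDISKS=$(echo $DISKS | wc -w | tr -d ' ')

# Print the commands rather than run them (they require root and would
# destroy any data on the disks).
echo "mdadm --create /dev/md0 --level=0 --raid-devices=$NDISKS $DISKS"
echo "mkfs.ext4 /dev/md0"
echo "mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid"
```

On an hs1.8xlarge with 23 free HDDs, the same pattern applies with a longer DISKS list.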


Open Grid Scheduler:

From: "Oppe, Thomas C ERDC-RDE-ITL-MS Contractor" <>
To: Paolo Di Tommaso <>; Dustin Machi <>
Cc: "" <>
Sent: Saturday, January 5, 2013 9:04 PM
Subject: Re: [StarCluster] Using a hi-I/O instance as the master node in a StarCluster Cluster

Paolo and Dustin,

Thank you for the information, especially for the new commands "fdisk -l" and "mount -l" and the link to the discussion of this question.  I found the SSD volumes and was able to format and mount them.  I am eager to try this instance as the master node in my HYCOM serial I/O runs once I am granted permission to use AWS again.

The link also clarifies which of "ext2", "ext3", or "ext4" to use when formatting the drives: use "mount -l" to find out.  I had previously been guessing and trying various formats.

Tom Oppe


From: Paolo Di Tommaso []
Sent: Saturday, January 05, 2013 4:05 AM
To: Dustin Machi
Cc: Oppe, Thomas C ERDC-RDE-ITL-MS Contractor;
Subject: Re: [StarCluster] Using a hi-I/O instance as the master node in a StarCluster Cluster

Hi there, 

The two SSD disks are available as ephemeral volumes, BUT unlike other instance types they are NOT mounted by default. 

This means that you have to format and mount them yourself. 

You may be interested in this thread
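The manual format-and-mount step described above might look roughly like this. The /dev/xvdf and /dev/xvdg device names are assumptions, not confirmed by the thread (check "fdisk -l"), and the commands are printed rather than executed since they need root and would wipe the disks:

```shell
#!/bin/sh
# Sketch: format and mount the two hi1.4xlarge ephemeral SSDs.
# /dev/xvdf and /dev/xvdg are placeholder names; confirm with `fdisk -l`.
for dev in /dev/xvdf /dev/xvdg; do
    suffix=${dev#/dev/xvd}              # yields "f" or "g"
    echo "mkfs.ext4 $dev"               # destroys any existing data on $dev
    echo "mkdir -p /mnt/ssd$suffix"
    echo "mount -t ext4 $dev /mnt/ssd$suffix"
done
```

To survive instance stop/start, the mounts would also need entries in /etc/fstab, but ephemeral volumes lose their contents on stop anyway.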


On Jan 4, 2013, at 5:44 PM, Dustin Machi <> wrote:

>I've never messed with them on AWS, but I assume that these get mounted
>wherever the local disks get mounted normally, i.e. /mnt.  The hi1.4xlarge
>should have two 1 TB volumes mounted or available to mount (fdisk -l).  I
>don't think the IOPS provisioning will matter much here (for the reads/writes
>to those volumes, anyway), though it may impact the performance of the EBS
>volume itself.
>On 4 Jan 2013, at 11:06, Oppe, Thomas C ERDC-RDE-ITL-MS Contractor wrote:
>
>>Dear Sir:
>>
>>I was wondering if anyone has tried using a high-performance I/O
>>instance (e.g., "hi1.4xlarge") as the master node in a StarCluster
>>cluster, with the other nodes being Sandy Bridge "cc2.8xlarge"
>>instances.  When I bring up a single "hi1.4xlarge" instance outside of
>>StarCluster, there is an option to attach one or two 1-TB SSD disks,
>>but when I bring up a cluster with "hi1.4xlarge" as the master node,
>>the SSD disks are nowhere to be found.  Is a plug-in necessary to ask
>>for the SSD disks to be available?  I have a code that needs the
>>fastest I/O available for writes with a combined size of 100GB during
>>the run.  I have tried single Standard and Provisioned IOPS EBS
>>volumes, but the I/O performance to these volumes is poor, even with
>>"pre-warming".  Has anyone written a plug-in for using striped volumes
>>in StarCluster?  I would appreciate any comments or pointers to
>>
>>Tom Oppe

StarCluster mailing list    
Received on Tue Jan 08 2013 - 02:27:37 EST
This archive was generated by hypermail 2.3.0.

