Job Storage
IDEA gives you the flexibility to customize your storage backend based on your requirements:
You can customize the root partition size
You can provision a local scratch partition
You can deploy standard SSD (gp2/gp3) or IO optimized SSD (io1/io2) volumes
IDEA automatically leverages instance store disk(s) as the /scratch partition when applicable
Refer to the Amazon EBS documentation to learn more about EBS volumes.
/data is a partition mounted on all hosts. It contains the home directories of your LDAP users ($HOME = /data/home/<USERNAME>). This partition is persistent. Avoid using this partition if your simulation is disk I/O intensive (use /scratch instead).
/apps is a partition mounted on all hosts. This partition is designed to host all your CFD/FEA/EDA/Mathematical applications. This partition is persistent. Avoid using this partition if your simulation is disk I/O intensive (use /scratch instead).
IDEA supports FSx natively.
Below are the storage options you can configure at an instance level for your jobs. If needed, add/remove/modify the storage logic by editing the UserCustomization script to match your requirements.
By default, IDEA provisions a 10 GB EBS disk for the root partition. This may be an issue if you are using a custom AMI configured with a bigger root disk, or if you simply want to allocate additional storage for the root partition. To expand the size of the volume, submit a simulation using the -l root_size=<SIZE_IN_GB> parameter.
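For example, the submission below requests a 25 GB root partition (job.sh is a placeholder for your own job script):

```bash
# Request a 25 GB root volume instead of the default 10 GB
qsub -l root_size=25 job.sh
```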
Result: the root partition is now 25 GB
Request a /scratch partition with SSD disk
During job submission, specify -l scratch_size=<SIZE_IN_GB> to provision a new EBS disk (/dev/sdj) mounted as /scratch.
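As a sketch, the following submission provisions a 150 GB SSD /scratch partition on every node of the job (job.sh is a placeholder for your own job script):

```bash
# Provision a 150 GB EBS volume mounted as /scratch on each node
qsub -l scratch_size=150 job.sh
```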
Result: a 150 GB /scratch partition is available on all nodes
Request a /scratch partition with IO optimized disk
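This relies on the -l scratch_iops=<IOPS> parameter described at the end of this page, combined with -l scratch_size. A minimal sketch, where the size, IOPS value and script name are placeholders:

```bash
# Provision a 200 GB IO optimized /scratch volume with 3000 provisioned IOPS
qsub -l scratch_size=200 -l scratch_iops=3000 job.sh
```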
Looking at the EBS console, the disk type is now "io1" and the number of IOPS matches the value specified at job submission.
Scale-Out Computing on AWS automatically detects instance store disks and uses them as /scratch unless you specify the -l scratch_size parameter for your job. In that case, Scale-Out Computing on AWS honors the user request and ignores the instance store volume(s).
When a node has 1 instance store volume
For this example, I will use a "c5d.9xlarge" instance, which comes with a 900 GB instance store disk.
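A minimal sketch of the submission, assuming your queue lets you pick the instance type with -l instance_type (job.sh is a placeholder). Note that -l scratch_size is intentionally omitted so the instance store disk is used:

```bash
# No scratch_size requested: the c5d.9xlarge local instance store disk becomes /scratch
qsub -l instance_type=c5d.9xlarge job.sh
```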
Result: Default /scratch partition has been provisioned automatically using local instance storage
When a node has more than 1 instance store volume
In this special case, the ComputeNode.sh script creates a RAID 0 partition using all available instance store volumes.
For this example, I will use an "m5dn.12xlarge" instance, which ships with 2 x 900 GB instance store disks (1.8 TB total).
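Again assuming instance type selection via -l instance_type, a sketch of the submission (script name is a placeholder):

```bash
# Two 900 GB instance store disks are assembled into a RAID 0 /scratch
qsub -l instance_type=m5dn.12xlarge job.sh
```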
Result: /scratch is a 1.7 TB RAID 0 partition (using 2 instance store volumes)
You can combine parameters as needed. For example, qsub -l root_size=150 -l scratch_size=200 -l nodes=2 will provision 2 nodes with a 150 GB / partition and a 200 GB SSD /scratch partition.
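Once the nodes are up, you can confirm the layout from any node of the job, for instance with df (the exact output will vary with your AMI and filesystem types):

```bash
# Check the root and scratch partitions on a compute node
df -h / /scratch
```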
Refer to Queue Profiles to automatically assign custom root_size / scratch_size values for a given queue.
To verify the type of your EBS disk, go to the AWS console > EC2 > Volumes and verify your EBS type is "gp2" (SSD). Refer to the Amazon EBS documentation for more information about the various EBS volume types available.
To request an IO optimized SSD disk, use -l scratch_iops=<IOPS> along with -l scratch_size=<SIZE_IN_GB>. Refer to the Amazon EBS documentation for more details about burstable and IO optimized EBS disks.
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer and is removed as soon as the node is deleted.