Cloud Mercato tested CPU performance using a range of encryption speed tests:
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K blocks, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
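Those parameters map directly onto a FIO job file. Below is a minimal sketch of an equivalent configuration; the job names and the device path `/dev/nvme1n1` are assumptions for illustration, not Cloud Mercato's actual test setup:

```ini
; Hypothetical fio job approximating the stated parameters:
; 4K blocks, random access, raw block device (no filesystem),
; direct I/O to avoid cache and buffer.
[global]
bs=4k            ; 4K block size
direct=1         ; O_DIRECT: bypass page cache and buffers
ioengine=libaio
iodepth=32
runtime=60
time_based=1

[rand-read]
rw=randread
filename=/dev/nvme1n1   ; assumed device; adjust for your instance

[rand-write]
stonewall               ; run after the read job completes
rw=randwrite
filename=/devev/nvme1n1
```

Run with `fio jobfile.ini`; the IOPS figures are reported per job in the summary output.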


Just did a quick test: it booted in about 11s vs. around 19s for Xen. I did notice that it took a while for the status check to go green, with a warning message saying it couldn't connect to the instance, but I was able to SSH in just fine.

If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

I am curious: what kind of write characteristics can manage to saturate a 255s timeout on a storage device that does 10k+ IOPS and gigabytes per second of throughput? Normally, writes slowing down leads to backpressure, because the syscalls issuing them take longer to return.

I just had a similar experience! My c5.xlarge instance detects an EBS volume as nvme1n1. I added this line to fstab:

```
/dev/nvme1n1 /data ext4 discard,defaults,nofail 0 2
```

After a couple of reboots it looked to be working, and it kept running for weeks. But today I got an alert that the instance couldn't be connected to. I tried rebooting it from the AWS console, with no luck; the culprit looks to be the fstab entry, since the disk mount failed. I raised a ticket with AWS support, no feedback yet, and had to start a new instance to recover my service. On another test instance, I tried using the UUID (obtained with the `blkid` command) instead of /dev/nvme1n1. So far it still seems to work... will see if it causes any issue. I will update here if AWS support responds.

================ EDIT with my fix ================

AWS hasn't given me feedback yet, but I found the issue. Actually, whether you mount /dev/nvme1n1 or the UUID in fstab doesn't matter. My issue was that my EBS volume had some filesystem errors. I attached it to an instance and ran:

```
fsck.ext4 /dev/nvme1n1
```

After it fixed a couple of filesystem errors, I put the entry back in fstab and rebooted: no problem anymore!
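For reference, the UUID approach mentioned above looks like this. The UUID shown is a made-up placeholder, and the `/data` mount point is taken from the post:

```
# Find the volume's UUID (the output line shown is illustrative):
sudo blkid /dev/nvme1n1
# /dev/nvme1n1: UUID="0fab163c-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"

# Reference that UUID in /etc/fstab instead of the device name, since
# NVMe device names are not guaranteed stable across reboots:
UUID=0fab163c-xxxx-xxxx-xxxx-xxxxxxxxxxxx /data ext4 discard,defaults,nofail 0 2
```

The `nofail` option matters here: without it, a volume that fails to mount can leave the instance unreachable at boot, which matches the symptom described above.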

You may find the new EC2 instance family equipped with local NVMe storage useful: **C5d**. See the announcement blog post:


I have been using "c5" type instances for almost a month, mostly "c5d.4xlarge" with NVMe drives. Here's what has worked for me on Ubuntu instances.

First, find where the NVMe drive is located:

```
lsblk
```

Mine was always at `nvme1n1`. Then check whether it is an empty volume with no filesystem (it usually is, unless you are remounting); the output should be `/dev/nvme1n1: data` for an empty drive:

```
sudo file -s /dev/nvme1n1
```

Then format it (if the last step showed that your drive already has a filesystem and isn't empty, skip this and go to the next step):

```
sudo mkfs -t xfs /dev/nvme1n1
```

Then create a directory and mount the NVMe drive:

```
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
```

You can then verify the mount by running:

```
df -h
```
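One caveat to the steps above: the mount does not survive a reboot. A sketch of the extra step, assuming the same `/dev/nvme1n1` device and `/data` mount point (note that on c5d the NVMe drive is ephemeral instance storage, so its contents are lost when the instance stops):

```
# Make the mount persistent across reboots; nofail keeps boot from
# hanging if the device is absent (see the fstab pitfalls discussed above):
echo '/dev/nvme1n1 /data xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab

# Verify the new fstab entry mounts cleanly before rebooting:
sudo mount -a
```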



The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

Ah, I'm having the same problem! Which C series did you pick?

