Cloud Mercato tested CPU performance using a range of encryption speed tests:
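For illustration, encryption-speed benchmarks of this kind are often run with OpenSSL's built-in speed test. The exact ciphers and flags Cloud Mercato used are not stated here, so the command below is an assumption, not their published methodology:

```bash
# Hypothetical example: benchmark AES-256-CBC throughput on this CPU.
# "openssl speed" reports bytes processed per second across block sizes.
openssl speed -evp aes-256-cbc
```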
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffers.
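As a rough sketch of such a test, a 4K random-read IOPS run with FIO against a raw device might look like the following. The device path, queue depth, job count, and runtime are illustrative assumptions; Cloud Mercato's exact command line is not given here:

```bash
# Hypothetical FIO run: 4K blocks, random reads, raw device (no filesystem),
# direct I/O to bypass the page cache and buffers.
sudo fio --name=randread-iops \
         --filename=/dev/nvme0n1 \
         --rw=randread --bs=4k \
         --direct=1 --ioengine=libaio \
         --iodepth=32 --numjobs=4 \
         --runtime=60 --time_based \
         --group_reporting
```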


The i3 instance type is similar to h1, but it is SSD-backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more.


I created i3.large instances with an NVMe disk on each node; here was my process:

1. `lsblk` -> `nvme0n1` (check that the NVMe disk isn't already mounted)
2. `sudo mkfs.ext4 -E nodiscard /dev/nvme0n1`
3. `sudo mount -o discard /dev/nvme0n1 /mnt/my-data`
4. Add to `/etc/fstab`: `/dev/nvme0n1 /mnt/my-data ext4 defaults,nofail,discard 0 2`
5. `sudo mount -a` (check that everything is OK)
6. `sudo reboot`

All of this works and I can connect back to the instance; I have 500 GiB on my new partition. But after I stop and restart the EC2 machines, some of them randomly become inaccessible (AWS warns that only 1/2 status checks passed). When I look at the logs for why an instance is inaccessible, they point to the NVMe partition (but I had run `sudo mount -a` to check that it was OK, so I don't understand). I don't have the exact AWS logs, but I got some lines of them:

> Bad magic number in super-block while trying to open
> then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
> /dev/fd/9: line 2: plymouth: command not found

The NVMe SSD on the `i3` instance class is an example of an [Instance Store Volume](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html), also known as an _Ephemeral_ [ Disk | Volume | Drive ]. They are physically inside the instance and extremely fast, but not redundant and not intended for persistent data... hence, "ephemeral." Persistent data needs to be on an [Elastic Block Store (EBS)](https://aws.amazon.com/ebs/) volume or an [Elastic File System (EFS)](https://aws.amazon.com/efs/), both of which survive instance stop/start, hardware failures, and maintenance.
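If you are unsure which of an instance's NVMe devices are ephemeral and which are EBS, the model string reported by each device distinguishes them. A quick check (the `nvme` CLI comes from the `nvme-cli` package; the by-id listing needs nothing extra):

```bash
# Instance-store devices report a model like "Amazon EC2 NVMe Instance Storage";
# EBS volumes report "Amazon Elastic Block Store".
sudo nvme list

# The udev by-id symlinks encode the same model names, no extra tools needed.
ls -l /dev/disk/by-id/
```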

Stopping and starting an instance erases the ephemeral disks, moves the instance to new host hardware, and gives you new empty disks... so the ephemeral disks will always be blank after stop/start.
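Since the instance-store volume comes back blank, a common pattern is to recreate the filesystem at boot instead of relying on the fstab entry alone. A minimal sketch, assuming `/dev/nvme0n1` is the ephemeral device and that this runs at boot (e.g. from a systemd unit or `rc.local`); the device and mount point are taken from the question above:

```bash
#!/bin/bash
# Recreate and mount the ephemeral filesystem if the disk came back empty.
DEV=/dev/nvme0n1      # assumption: the instance-store device
MNT=/mnt/my-data

# blkid exits non-zero when the device carries no recognizable filesystem,
# which is the state of an instance-store disk after stop/start.
if ! blkid "$DEV" >/dev/null 2>&1; then
    mkfs.ext4 -E nodiscard "$DEV"
fi

mkdir -p "$MNT"
mount -o discard "$DEV" "$MNT"
```

Keeping `nofail` in the fstab entry (as in step 4 above) also matters, so a blank disk doesn't stall the boot sequence while this runs.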

The type of instance doesn't change any of this? If I have dedicated or reserved instances, will my NVMe partition still get lost? Thanks again for your insights!

That's correct, it doesn't change. A reserved instance is a billing construct that applies to any one matching instance each hour. If it is tied to a specific availability zone, it also guarantees that if you don't have an instance running that matches the reservation, there's always one available for you to launch, which is why you pay whether it is running or not. Dedicated allows you to control which physical host, but not which guest slot on the host. Stopping an instance wipes its internal disks, but not its EBS volumes, since that is network-attached storage.

No, I wasn't, so do you imply I should keep my EC2 instance 100% up? Or are there better alternatives?
