AWS

i3.large

EC2 Instance

Storage-optimized instance with 2 vCPUs, 15.25 GiB memory, and 1 x 475 GB NVMe SSD. Designed for high I/O workloads and NoSQL databases.


Pricing of i3.large

Pricing Model    Price (USD)    % Discount vs On Demand
On Demand        N/A            N/A
Spot             N/A            N/A
1 Yr Reserved    N/A            N/A
3 Yr Reserved    N/A            N/A
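Once prices are published, the discount column follows directly from the On Demand rate. A minimal sketch in shell, using hypothetical prices (this page currently lists N/A for all models):

```shell
# Hypothetical hourly prices in USD -- placeholders, since this page
# currently shows N/A for i3.large.
on_demand=0.156
spot=0.047

# % discount vs On Demand = (on_demand - price) / on_demand * 100
discount=$(awk -v od="$on_demand" -v p="$spot" \
  'BEGIN { printf "%.1f", (od - p) / od * 100 }')
echo "$discount"   # 69.9
```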
Spot Pricing Details for i3.large

Here are the latest spot prices for this instance across availability zones in this region:

Availability Zone    Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month, reported in ranges: <5%, 5-10%, 10-15%, 15-20%, and >20%.
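A small helper can map a raw interruption rate onto those ranges; the function name and logic below are illustrative, matching the buckets listed above:

```shell
# Map a raw interruption rate (% of capacity reclaimed over the
# trailing month) to its published range bucket.
interruption_range() {
  awk -v f="$1" 'BEGIN {
    if      (f < 5)  print "<5%"
    else if (f < 10) print "5-10%"
    else if (f < 15) print "10-15%"
    else if (f < 20) print "15-20%"
    else             print ">20%"
  }'
}

interruption_range 7.5   # prints 5-10%
```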

Last Updated On: December 17, 2024
Compute features of i3.large

Feature    Specification
Storage features of i3.large

Feature    Specification
Networking features of i3.large

Feature    Specification
Operating Systems Supported by i3.large

Operating System    Supported
Security features of i3.large

Feature    Supported
General Information about i3.large

Feature    Specification
Benchmark Test Results for i3.large
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Algorithm      Speed (1024-byte block size, 3 threads)
AES-128 CBC    119.4 MB/s
AES-256 CBC    84.6 MB/s
MD5            802.0 MB/s
SHA256         332.1 MB/s
SHA512         439.9 MB/s
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

IOPS         Read     Write
Max          3161     3161
Average      3125     3123
Deviation    31.03    32.44
Min          3093     3091

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
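That methodology corresponds roughly to an fio invocation like the sketch below; the device path and job name are assumptions, and the command is built as a string so it can be reviewed before running (fio against a raw device destroys its contents):

```shell
# fio flags matching the described parameters: 4K blocks, random
# access, direct I/O (bypasses cache and buffer), raw device (no
# filesystem). /dev/nvme1n1 is a placeholder device path.
fio_cmd="fio --name=randread-4k --filename=/dev/nvme1n1 \
  --rw=randread --bs=4k --direct=1 --ioengine=libaio \
  --iodepth=32 --runtime=60 --time_based --group_reporting"
echo "$fio_cmd"
```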

Community Insights for i3.large
AI-summarized insights

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more.

The type of instance doesn't change any of this? If I have dedicated or reserved instances, my NVMe partition will still get lost? Thanks again for your insights!

Stopping and starting an instance erases the ephemeral disks, moves the instance to new host hardware, and gives you new empty disks... so the ephemeral disks will always be blank after stop/start.

That's correct, it doesn't change. A reserved instance is a billing construct that applies to any one matching instance each hour. If it is tied to a specific availability zone, it also guarantees that if you don't have an instance running that matches the reservation, there's always one available for you to launch, which is why you pay whether it is running or not. Dedicated allows you to control which physical host, but not which guest slot on the host. Stopping an instance wipes its internal disks, but not its EBS volumes, since that is network-attached storage.

The NVMe SSD on the `i3` instance class is an example of an [Instance Store Volume](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html), also known as an _Ephemeral_ [ Disk | Volume | Drive ]. They are physically inside the instance and extremely fast, but not redundant and not intended for persistent data... hence, "ephemeral." Persistent data needs to be on an [Elastic Block Store (EBS)](https://aws.amazon.com/ebs/) volume or an [Elastic File System (EFS)](https://aws.amazon.com/efs/), both of which survive instance stop/start, hardware failures, and maintenance.

No I wasn't, so do you imply I should keep my EC2 instance 100% up? Or are there better alternatives?

I created i3.large instances with an NVMe disk on each node; here was my process:
1. lsblk -> nvme0n1 (check that the NVMe isn't yet mounted)
2. sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
3. sudo mount -o discard /dev/nvme0n1 /mnt/my-data
4. Add to /etc/fstab: /dev/nvme0n1 /mnt/my-data ext4 defaults,nofail,discard 0 2
5. sudo mount -a (check that everything is OK)
6. sudo reboot
All of this works and I can connect back to the instance; I have 500 GiB on my new partition. But after I stop and restart the EC2 machines, some of them randomly became inaccessible (AWS warning: only 1/2 status checks passed). When I check the logs for why an instance is inaccessible, they point to the NVMe partition (but I did sudo mount -a to verify, so I don't understand). I don't have the exact AWS logs, but I got some lines of them:
> Bad magic number in super-block while trying to open
> then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
> /dev/fd/9: line 2: plymouth: command not found
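One detail worth calling out from step 4 above: the nofail option matters for instance-store volumes, because the disk comes back blank after every stop/start, and an fstab entry without it can hang the boot waiting on a filesystem that no longer exists. The entry from the post, annotated (device and mount point are taken from the steps above):

```
# /etc/fstab -- instance-store (ephemeral) NVMe volume
# nofail: boot continues even if the volume is absent or unformatted,
#         which it will be after every stop/start
/dev/nvme0n1  /mnt/my-data  ext4  defaults,nofail,discard  0  2
```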

Similar Instances to i3.large

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this i3.large instance information page, please let us know.
