AWS EC2 Instance: i4i.32xlarge

Storage-optimized instance with 128 vCPUs, 1,024 GiB of memory, and 8 x 3,750 GB NVMe SSD storage. It is the highest-capacity instance in the I4i family, with maximum storage performance.
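
These published specifications can also be pulled straight from the EC2 API. A minimal boto3 sketch (assumes AWS credentials and a default region are already configured):

    import boto3

    # Ask EC2 for the published specification of i4i.32xlarge.
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instance_types(InstanceTypes=["i4i.32xlarge"])
    spec = resp["InstanceTypes"][0]

    print("vCPUs:  ", spec["VCpuInfo"]["DefaultVCpus"])
    print("Memory: ", spec["MemoryInfo"]["SizeInMiB"] // 1024, "GiB")
    storage = spec["InstanceStorageInfo"]
    print("Storage:", storage["TotalSizeInGB"], "GB across",
          sum(d["Count"] for d in storage["Disks"]), "NVMe disks")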

Pricing of i4i.32xlarge

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A
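
If you need list prices programmatically, the AWS Price List API exposes them. A hedged boto3 sketch (the region, operating system, and exact filter fields are assumptions; the Price List endpoint is served from us-east-1):

    import json
    import boto3

    pricing = boto3.client("pricing", region_name="us-east-1")

    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "i4i.32xlarge"},
            {"Type": "TERM_MATCH", "Field": "regionCode", "Value": "us-east-1"},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
    )

    # Each PriceList entry is a JSON string describing one product and its terms.
    for raw in resp["PriceList"]:
        product = json.loads(raw)
        for term in product["terms"].get("OnDemand", {}).values():
            for dim in term["priceDimensions"].values():
                print(dim["description"], dim["pricePerUnit"]["USD"])
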
Spot Pricing Details for i4i.32xlarge

Here are the latest Spot prices for this instance across this region:

Availability Zone    Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
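
The per-Availability-Zone Spot prices can also be fetched directly from the EC2 API. A minimal boto3 sketch (the region and product description are assumptions):

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2", region_name="us-east-1")  # pick your region

    resp = ec2.describe_spot_price_history(
        InstanceTypes=["i4i.32xlarge"],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    )

    # Latest observed Spot price per Availability Zone.
    for entry in resp["SpotPriceHistory"]:
        print(entry["AvailabilityZone"], entry["SpotPrice"], entry["Timestamp"])
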
Compute features of i4i.32xlarge

Feature    Specification

Storage features of i4i.32xlarge

Feature    Specification

Networking features of i4i.32xlarge

Feature    Specification

Operating Systems Supported by i4i.32xlarge

Operating System    Supported

Security features of i4i.32xlarge

Feature    Supported

General Information about i4i.32xlarge

Feature    Specification

Benchmark Test Results for i4i.32xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024 Block Size, 3 threads)
AES-128 CBC             N/A
AES-256 CBC             N/A
MD5                     N/A
SHA256                  N/A
SHA512                  N/A
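
A rough way to reproduce this kind of test yourself is OpenSSL's built-in speed benchmark. A hedged sketch driving it from Python (the exact flags Cloud Mercato uses are an assumption; this assumes the openssl binary is on PATH):

    import subprocess

    # Approximation of the methodology above: 1024-byte blocks, 3 parallel workers.
    ALGORITHMS = ["aes-128-cbc", "aes-256-cbc", "md5", "sha256", "sha512"]

    for algo in ALGORITHMS:
        print(f"== {algo} ==")
        subprocess.run(
            ["openssl", "speed", "-multi", "3", "-bytes", "1024", "-evp", algo],
            check=True,
        )
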
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

            Read (IOPS)    Write (IOPS)
Max         418335         N/A
Average     415651         N/A
Deviation   842.74         N/A
Min         N/A            N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
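
For readers who want to run a comparable test, here is a hedged sketch of a FIO invocation with the parameters described above, driven from Python. The device path, queue depth, and runtime are assumptions; point it only at a disposable device, since raw-device writes destroy data.

    import subprocess

    DEVICE = "/dev/nvme1n1"  # hypothetical data volume -- check lsblk first

    # Random-read IOPS test: 4K blocks, raw device (no filesystem), direct I/O
    # so the page cache and buffers are bypassed. Use --rw=randwrite for writes.
    subprocess.run(
        [
            "fio",
            "--name=randread-4k",
            f"--filename={DEVICE}",
            "--rw=randread",
            "--bs=4k",
            "--direct=1",
            "--ioengine=libaio",
            "--iodepth=32",
            "--runtime=60",
            "--time_based",
            "--group_reporting",
        ],
        check=True,
    )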

Community Insights for i4i.32xlarge
AI-summarized insights

Cost Explorer takes [up to] 24 hours to set up, so it's not a good answer to support questions about billing.

29-11-2017
cost_savings

AWS is incredibly complex. Are you complaining that their billing can get complex?

29-11-2017
cost_savings

The local NVMe storage for i3.metal is the same as i3.16xlarge. There are 8 NVMe PCI devices. For i3.16xlarge those PCI devices are assigned to the instance running under the Xen hypervisor. When running i3.metal, there simply isn't a hypervisor and the PCI devices are accessed directly.
- There is no hot swap for the NVMe storage.
- The 8 NVMe devices are discrete; there is no hardware RAID controller.
- Anyone can get I/O performance stats on i3.16xlarge as a baseline. Intel VT-d can introduce some overhead from the handling (and caching) of DMA remapping requests in the IOMMU and interrupt delivery, so I/O performance may be a bit higher on i3.metal, with a few microseconds lower latency.

29-11-2017
benchmarking
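
Since the devices are discrete and there is no hardware RAID controller, any striping has to be done in software. A hypothetical sketch using mdadm from Python (device names and the RAID level are assumptions; verify with lsblk before touching anything, as this destroys existing data on the disks):

    import subprocess

    # Hypothetical: the eight local NVMe disks exposed to the instance.
    DEVICES = [f"/dev/nvme{i}n1" for i in range(1, 9)]

    # Build a software RAID10 array across them.
    subprocess.run(
        ["mdadm", "--create", "/dev/md0", "--level=10",
         f"--raid-devices={len(DEVICES)}", *DEVICES],
        check=True,
    )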

For all this progress, the billing on AWS is so damn confusing to figure out whether some machine has been left running unused that I won't use AWS again. GCE and Azure are miles ahead here.

29-11-2017
cost_savings

Is this what khuey is referring to?:

29-11-2017

Thanks. I guess some other open questions:
- If one of those drives fails, will Amazon hot-swap them out, or do you need to migrate to a new instance? (Moving TBs of data to a new box without causing outages can be painful.)
- Is there a hardware RAID controller for those drives, or is it software only?
- Can anyone with access to one of these boxes produce some I/O performance stats on them? Bonus points for stats on a single drive vs. concurrent across all drives (i.e. is there any throttling?). More points for RAID10 performance across the whole 8.

29-11-2017
benchmarking

It's exactly the same as with the i3.16xlarge instance type. There are eight 1900 GB drives. In an i3.16xlarge, those eight drives are passed through to the instance with PCIe passthrough but for the i3.metal instance, you avoid going through a hypervisor and IOMMU and have direct access.

29-11-2017

Storage – 15.2 terabytes of local, SSD-based NVMe storage. That's probably the most interesting aspect for me. Does anyone know how that's provisioned? i.e. 8x just-under-2TB volumes, or something else?

29-11-2017

I'm assuming rr is only unavailable for multithreaded apps? How frequently is rr available for your use?

29-11-2017

rr works fine on multithreaded (and multiprocess) applications. It does emulate a single core machine though, so depending on your workload and how much parallelism your application actually has it might be painful.

29-11-2017

I have two use cases:
- General performance analysis. For this, more counters is generally incrementally better.
- Running rr. This requires the retired-branch counter to be available (and accurate - sometimes virtualization messes that up).
The second one I actually care more about, because I've pretty much stopped trying to debug software when rr is not available; too painful ;). Feel free to email me (email is in my profile) for gory details.

29-11-2017
benchmarking

Seconding paulie_a. We're running a Xen stack right now and I haven't heard of this. We've worked around a few nasty bugs with Xen and Linux doms already, but I'm wondering if we have this problem you're referring to and don't even know it.

29-11-2017

For the benefit of anyone reading this, KVM and VMWare virtualization generally work. Xen has problems because of a stupid Xen workaround for a stupid Intel hardware bug from a decade ago. I can provide more details about that via email (in my profile) if desired.

29-11-2017

Can you please just post the info. Intel deserves to be shamed

29-11-2017

One of the things the performance monitoring unit (PMU) is capable of doing is triggering an interrupt (the PMI) when a counter overflows. When combined with the ability to write to the counters, this lets you program the PMU to interrupt after a certain number of counted events. Nehalem supposedly had a bug where the PMI fires not on overflow but instead whenever the counter is zero. Xen added a workaround to set the value to 1 whenever it would instead be 0. Later this was observed on microarchitectures other than Nehalem, and Xen broadened the workaround to run on every x86 CPU. Intel never provided any help in narrowing it down, and there don't seem to be official errata for this behavior either. This behavior is OK for statistically profiling frequent events, but if you depend on _exact_ counts (as rr does) or are profiling infrequent events it can mess up your day. https://lists.xen.org/archives/html/xen-devel/2017-07/msg02242.html goes a little deeper and has citations.

29-11-2017
memory_usage, benchmarking

Hi Tibor, I forgot to mention: another useful thing to test out would be to stop and start the EC2 instance. This should move the instance to a less busy host; just a trick that usually works.

2025-10-03 00:00:00
benchmarking
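
A minimal boto3 sketch of the stop/start trick described above (the instance ID is a placeholder; a stop/start cycle, unlike a reboot, typically lands the instance on a different physical host):

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Stop, wait, then start again; unlike a reboot this releases the
    # underlying host, so the instance usually comes back on another one.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])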

I think many people, but not you obviously, have a naive view of AWS hardware provisioning. They think that if they suddenly deploy 50 EC2 instances, that Amazon will rush out and buy more servers and have them installed in order to meet instant demand. Of course, when you think about it, Amazon rely on having fairly high levels of usage in order to maximise profits, and having idle hardware lying around just in case someone wants it is a way to lose money, hence the spot pricing scheme to make at least some money from idle hardware, they can get their hardware back for exclusive use at any time by simply bidding the price up!

2025-10-03 00:00:00
cost_savings
