AWS i3.metal EC2 Instance

Bare metal storage-optimized instance with 72 vCPUs, 512 GiB memory, and 8 x 1900 GB NVMe SSDs. Direct hardware access with no virtualization overhead for specialized workloads.
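Some quick arithmetic on the specs listed above puts the raw numbers in perspective (the variable names below are our own, not an AWS API):

```python
# Quick arithmetic on the i3.metal specs listed above
# (72 vCPUs, 512 GiB memory, 8 x 1900 GB NVMe SSD).

VCPUS = 72
MEMORY_GIB = 512
NVME_DRIVES = 8
DRIVE_GB = 1900

total_local_storage_gb = NVME_DRIVES * DRIVE_GB   # 15200 GB of raw local NVMe
memory_per_vcpu_gib = MEMORY_GIB / VCPUS          # roughly 7.1 GiB per vCPU

print(f"Total local NVMe storage: {total_local_storage_gb} GB")
print(f"Memory per vCPU: {memory_per_vcpu_gib:.1f} GiB")
```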

Coming Soon...

Pricing of i3.metal

Pricing Model    Price (USD)    % Discount vs On Demand
On Demand        N/A            N/A
Spot             N/A            N/A
1 Yr Reserved    N/A            N/A
3 Yr Reserved    N/A            N/A
Spot Pricing Details for i3.metal

Here are the latest prices for this instance across this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in the ranges <5%, 5-10%, 10-15%, 15-20%, and >20%.
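The ranges above can be expressed as a small bucketing function; this is a minimal sketch of our own, not an AWS API:

```python
def interruption_bucket(rate_percent: float) -> str:
    """Map a trailing-month Spot reclaim rate (in percent) to the
    ranges used above: <5%, 5-10%, 10-15%, 15-20%, and >20%."""
    if rate_percent < 5:
        return "<5%"
    if rate_percent < 10:
        return "5-10%"
    if rate_percent < 15:
        return "10-15%"
    if rate_percent <= 20:
        return "15-20%"
    return ">20%"

print(interruption_bucket(3.2))   # <5%
print(interruption_bucket(12.0))  # 10-15%
```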

Last Updated On: December 17, 2024
Compute features of i3.metal

Feature    Specification
Storage features of i3.metal

Feature    Specification
Networking features of i3.metal

Feature    Specification
Operating Systems Supported by i3.metal

Operating System    Supported
Security features of i3.metal

Feature    Supported
General Information about i3.metal

Feature    Specification
Benchmark Test Results for i3.metal
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm Speed (1024 Block Size, 3 threads)
AES-128 CBC N/A
AES-256 CBC N/A
MD5 N/A
SHA256 N/A
SHA512 N/A
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

Read Write
Max N/A N/A
Average N/A N/A
Deviation N/A N/A
Min N/A N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
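An fio invocation consistent with those parameters can be sketched as follows. This is our own reconstruction, not Cloud Mercato's actual command; the device path is a placeholder, and the exact flag set is an assumption based on the methodology described:

```python
# Build an fio command line matching the described methodology:
# 4K blocks, random access, raw block device (no filesystem),
# and direct I/O to avoid cache and buffer.

def build_fio_cmd(device: str, rw: str = "randread") -> list:
    return [
        "fio",
        "--name=iops-test",
        f"--filename={device}",   # raw block device, no filesystem
        f"--rw={rw}",             # randread or randwrite
        "--bs=4k",                # 4K block size
        "--direct=1",             # bypass page cache and buffers
        "--ioengine=libaio",
        "--runtime=60",
        "--time_based=1",
    ]

cmd = build_fio_cmd("/dev/nvme0n1")
print(" ".join(cmd))
```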

Community Insights for i3.metal

AI-summarized insights

Do i3.metal instances keep accruing compute charges when they are shut down?

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more.

The local NVMe storage for i3.metal is the same as i3.16xlarge. There are 8 NVMe PCI devices. For i3.16xlarge, those PCI devices are assigned to the instance running under the Xen hypervisor. When running i3.metal, there simply isn't a hypervisor and the PCI devices are accessed directly.

- There is no hot swap for the NVMe storage.
- The 8 NVMe devices are discrete; there is no hardware RAID controller.
- Anyone can get I/O performance stats on i3.16xlarge as a baseline. Intel VT-d can introduce some overhead from the handling (and caching) of DMA remapping requests in the IOMMU and interrupt delivery, so I/O performance may be a bit higher on i3.metal, with a few microseconds lower latency.

This will expose virtualization extensions? As in, I can run my own virtualization stack on these instances (KVM, etc.)?

I'm looking forward to testing out FreeBSD on these... and also bhyve, for a fully BSD virtualization stack.

Thanks. I guess some other open questions:

- If one of those drives fails, will Amazon hot-swap them out, or do you need to migrate to a new instance? (Moving TBs of data to a new box without causing outages can be painful.)
- Is there a hardware RAID controller for those drives, or is it software only?
- Can anyone with access to one of these boxes produce some I/O performance stats on them? Bonus points for stats on a single drive vs. concurrent across all drives (i.e., is there any throttling?). More points for RAID10 performance across the whole 8.

It's exactly the same as with the i3.16xlarge instance type. There are eight 1900 GB drives. In an i3.16xlarge, those eight drives are passed through to the instance with PCIe passthrough but for the i3.metal instance, you avoid going through a hypervisor and IOMMU and have direct access.
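Since the comments above establish that there is no hardware RAID controller, any RAID would be software RAID over the eight 1900 GB drives. The usable capacity under the common levels is back-of-envelope arithmetic (our own sketch):

```python
# Usable capacity under software RAID over the eight 1900 GB NVMe
# drives discussed above (no hardware RAID controller on i3.metal).

DRIVES = 8
DRIVE_GB = 1900

raid0_gb = DRIVES * DRIVE_GB         # stripe across all drives, no redundancy
raid10_gb = DRIVES * DRIVE_GB // 2   # mirrored pairs, half usable
raid5_gb = (DRIVES - 1) * DRIVE_GB   # one drive's worth of parity

print(f"RAID0:  {raid0_gb} GB usable")
print(f"RAID10: {raid10_gb} GB usable")
print(f"RAID5:  {raid5_gb} GB usable")
```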

That's probably the most interesting aspect for me. Does anyone know how that's provisioned? i.e., 8x just-under-2TB volumes, or something else?

EC2 has actually exposed a subset even on the Xen instances for some of the more recent instance types. Brendan Gregg wrote about them at [http://www.brendangregg.com/blog/2017-05-04/the-pmcs-of-ec2.html](http://www.brendangregg.com/blog/2017-05-04/the-pmcs-of-ec2.html).

rr works fine on multithreaded (and multiprocess) applications. It does emulate a single core machine though, so depending on your workload and how much parallelism your application actually has it might be painful.

I'm assuming rr is only unavailable for multithreaded apps? How frequently is rr available for your use?

Is this what khuey is referring to?:

One of the things the performance monitoring unit (PMU) is capable of doing is triggering an interrupt (the PMI) when a counter overflows. When combined with the ability to write to the counters, this lets you program the PMU to interrupt after a certain number of counted events. Nehalem supposedly had a bug where the PMI fires not on overflow but instead whenever the counter is zero. Xen added a workaround to set the value to 1 whenever it would instead be 0. Later this was observed on microarchitectures other than Nehalem, and Xen broadened the workaround to run on every x86 CPU. Intel never provided any help in narrowing it down, and there don't seem to be official errata for this behavior either. This behavior is OK for statistically profiling frequent events, but if you depend on _exact_ counts (as rr does) or are profiling infrequent events, it can mess up your day. [https://lists.xen.org/archives/html/xen-devel/2017-07/msg02242.html](https://lists.xen.org/archives/html/xen-devel/2017-07/msg02242.html) goes a little deeper and has citations.
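The off-by-one effect of that workaround can be shown with a toy model. This is our own simulation of the behavior described, not actual Xen or PMU code; the counter width and function names are invented for illustration:

```python
# Toy model of the Xen PMU workaround described above: writes of 0
# to the counter are silently turned into 1, so a PMI programmed for
# an exact event count arrives one event early.

WRAP = 1 << 8  # toy 8-bit counter; real PMCs are wider

def events_until_interrupt(write_value: int, xen_workaround: bool) -> int:
    """Count events until the counter wraps to zero (the PMI fires)."""
    value = 1 if (xen_workaround and write_value == 0) else write_value
    events = 0
    while True:
        value = (value + 1) % WRAP
        events += 1
        if value == 0:
            return events

# Program the counter for a full wrap of events by writing 0:
print(events_until_interrupt(0, xen_workaround=False))  # 256 events, as requested
print(events_until_interrupt(0, xen_workaround=True))   # 255 events: off by one
```

The one-event skew is harmless for statistical profiling but fatal for a tool like rr that replays execution to an exact event count.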

Can you please just post the info? Intel deserves to be shamed.

Seconding paulie_a. We're running a Xen stack right now and I haven't heard of this. We've worked around a few nasty bugs with Xen and Linux doms already, but I'm wondering if we have this problem you're referring to and don't even know it.

Thanks! Would love to hear more about the counters that you're interested in. We've exposed more in C5 than in previous instance types, and we are trying to make more available over time in a safe way.

I have two use cases:

- General performance analysis. For this, more counters is generally incrementally better.
- Running rr. This requires the retired-branch counter to be available (and accurate; sometimes virtualization messes that up).

The second one I actually care more about, because I've pretty much stopped trying to debug software when rr is not available; too painful ;). Feel free to email me (email is in my profile) for gory details.

For the benefit of anyone reading this, KVM and VMWare virtualization generally work. Xen has problems because of a stupid Xen workaround for a stupid Intel hardware bug from a decade ago. I can provide more details about that via email (in my profile) if desired.

I'm really, really, happy about this. I've been complaining about the lack of cloud servers with exposed performance counters to any cloud vendor that'll listen (though of course nothing ever came of that). Kudos AWS, this is really cool.

Similar Instances to i3.metal

Consider these:
