AWS

c5.metal

EC2 Instance

Bare metal compute-optimized instance with 96 vCPUs and 192 GiB memory. Direct hardware access to Intel Xeon Platinum processors.

Pricing of c5.metal

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A

Spot Pricing Details for c5.metal

Here are the latest Spot prices for this instance across this region:

Availability Zone    Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.
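
If the table above is empty, the same per-AZ Spot prices can be pulled straight from the EC2 API. A minimal sketch using boto3 (the region and product description are assumptions; adjust for your account):

```python
# Fetch the most recent Spot price for c5.metal in each Availability Zone.
# Assumes AWS credentials are configured; the region name is an assumption.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c5.metal"],
    ProductDescriptions=["Linux/UNIX"],    # OS flavour being priced
    StartTime=datetime.now(timezone.utc),  # "now" returns the current price per AZ
)

latest = {}
for rec in resp["SpotPriceHistory"]:
    az = rec["AvailabilityZone"]
    # Keep only the newest record per Availability Zone.
    if az not in latest or rec["Timestamp"] > latest[az]["Timestamp"]:
        latest[az] = rec

for az, rec in sorted(latest.items()):
    print(f"{az}: ${rec['SpotPrice']}/hr")
```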

Last Updated On: December 17, 2024

Compute features of c5.metal
Feature    Specification

Storage features of c5.metal
Feature    Specification

Networking features of c5.metal
Feature    Specification

Operating Systems Supported by c5.metal
Operating System    Supported

Security features of c5.metal
Feature    Supported

General Information about c5.metal
Feature    Specification

Benchmark Test Results for c5.metal
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024 Block Size, 3 threads)
AES-128 CBC             N/A
AES-256 CBC             N/A
MD5                     N/A
SHA256                  N/A
SHA512                  N/A
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

            Read    Write
Max         N/A     N/A
Average     N/A     N/A
Deviation   N/A     N/A
Min         N/A     N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To measure IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
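
For readers who want to reproduce this kind of test, an FIO run matching the stated parameters might look roughly like the sketch below. The device name, queue depth, and runtime are assumptions; Cloud Mercato's exact job file is not published here.

```python
# Rough reproduction of the described FIO settings: 4K blocks, random access,
# raw block device (no filesystem), cache and buffers bypassed with O_DIRECT.
# The device name, queue depth, and runtime are assumptions; a write test
# destroys data on the target, so point it only at a scratch volume.
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical: a secondary 100 GB gp2 volume

subprocess.run(
    [
        "fio",
        "--name=randread-4k",
        f"--filename={DEVICE}",
        "--rw=randread",      # switch to randwrite for the write test
        "--bs=4k",            # 4K block size
        "--direct=1",         # O_DIRECT: avoid page cache and buffers
        "--ioengine=libaio",
        "--iodepth=32",       # assumed queue depth
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ],
    check=True,
)
```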

Community Insights for c5.metal
AI-summarized insights

An EC2 instance is already virtualized, so you generally cannot run another hypervisor in one.

2024-07-19

I want to enable the "hardware virtualization" function on an EC2 instance's BIOS. Please help. Thanks.

2024-07-19

If you are looking to run a hypervisor on an EC2 instance, and then run VM guests inside the EC2 instance then generally that's not possible.

2024-07-19
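
Some context for the thread above: on virtualized EC2 instances the CPU's virtualization extensions are not exposed to the guest, which is why nested hypervisors generally do not work; bare-metal types such as c5.metal hand the whole host to your operating system, so a hypervisor can run. A quick way to check from a Linux system (a minimal sketch, nothing AWS-specific assumed):

```python
# Check whether the CPU exposes hardware virtualization extensions to this OS.
# Intel VT-x appears as "vmx" and AMD-V as "svm" in the /proc/cpuinfo flags;
# on a virtualized instance these are hidden, on c5.metal they are visible.
import re

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()

if re.search(r"^flags\s*:.*\b(vmx|svm)\b", cpuinfo, re.MULTILINE):
    print("vmx/svm present: a hypervisor such as KVM can run here")
else:
    print("no vmx/svm flags: nested virtualization is not available")
```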

Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.

2017-07-11
benchmarking

What I learned is that if a metal instance does not work in the region you had chosen, try launching a new instance in a different data center region. For example, I had a c5.metal instance at us-east-2a. Following John's directions, I launched an instance at us-east-2c and after about 8 minutes the instance was ready for use.

2020-12-06

The M5/C5 instances do not use burstable credits. For example, a 4 vCPU instance in C5 can run 24x7 at 100% CPU while a T3 4 vCPU can only use 30% of the CPU over the same period before it is throttled.

Do .metal instances keep accruing compute charges when they are shut down?

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

To reproduce your situation, I did the following:

* Launched an Amazon EC2 instance in Ohio:
  * Instance Type: `c5.metal`
  * AMI: _Ubuntu Server 18.04 LTS (HVM), SSD Volume Type_
  * Network: In my _Default VPC_ so that it uses a Public Subnet
  * Security Group: Default settings, which grants port 22 access from the Internet
* Instance entered **running** state very quickly, Status Checks showed as **Initializing**

It took about 8 minutes until the status checks were showing `2/2 checks` (it might have been faster, but I was testing other things in the meantime). I was able to successfully login to the instance:
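
A programmatic equivalent of the steps described in that post, sketched with boto3 (the AMI ID, key pair, and security group below are placeholders, not values from the post):

```python
# Launch a c5.metal instance and wait for both status checks to pass,
# mirroring the manual steps above. All identifiers here are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # Ohio, as in the post

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder Ubuntu 18.04 LTS AMI
    InstanceType="c5.metal",
    KeyName="my-key-pair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder SG with port 22 open
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Bare-metal instances can take several minutes to pass 2/2 status checks.
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
print(f"{instance_id} is ready")
```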

I have tried several times in the last two weeks to log on to a c5.metal instance. Each time I get "Initializing" in the status checks field, but after 10 minutes it is still "Initializing" and I'm not able to log on.

Thank you so much for this detailed answer! To add to my query: when it comes to ethical hacking, VMware Workstation (or VirtualBox, etc.) is a must. If I follow the third option you gave me, it wouldn't provide me the facility of VMware Workstation inside. And the second option is quite complex for students to follow (as they also need to know how things get set up). Any other suggestions, e.g. if I look for some other cloud services, etc.? Big thanks to this awesome community! I was not expecting such fast answers! Please help me further as well.

I've just tried to benchmark, but I'm not sure how. I tried using dd to measure sequential speed and this was giving pretty good results on Google Cloud: ... But I'm perplexed by why it cuts off at 64 GB. On Amazon I couldn't figure out how to get decent speeds at all. Comparable to my Google Cloud box, I set up an i3.xlarge, which had 4 CPUs and 1 SSD. While the dd operation was working fine, it slowed down to a paltry 430 MB/s quickly and never recovered. I doubt this is the true sequential performance of the drive, so I gave up.

GCE local SSD was the fastest thing around when first released in 2014, but hasn't improved and has since fallen behind. GCE local SSD 4k read IOPs max out at around 200k/volume and 800k/instance w/4 volumes. I haven't measured c5 local nvme yet, but on i3, local nvme volumes run at 100-400k/volume and up to 3.3 million IOPs on the largest instance type (i3.16xl).

For seq read/write I've observed up to around 15000/2900 MB/s on i3.16xl and 2750/1400 MB/s on GCE w/4xlocal SSD

According to the bug here: It was actually filed about EBS block disks using NVME (on these new instances, there is a hardware card that presents network EBS volumes as a PCI-E NVME device). In certain failure cases since this is a network block store, they can fail for some period of time exceeding this timeout.

Sorry for that. The timeout behavior on earlier kernels is a bit of a pain. There's a lot to love about NVMe and timeouts are not actually part of the NVMe specification itself but rather a Linux driver construct. Unfortunately, early versions of the driver used an unsigned char for the timeout value and also have a pretty short timeout for network-based storage.

Ah yeah, the article was about local NVMe, so the concern is probably not relevant here.

I am curious, what kind of write characteristics can manage to saturate a 255s timeout on a storage device that does 10k+ iops and gigabytes per second throughput? Normally writes slowing down leads to backpressure because the syscalls issuing them take longer to return.

The EC2 User Guide includes documentation on how to avoid these issues:

Oh yes we did. To make sure we didn't just suffer from a single fluke, we read through docs and changed the value to 255 after the first time. Then waited. Didn't have to wait for long, the thing broke again in less than 3 weeks. The workload was a pretty aggressively tuned prometheus. At that point we went into compaction after two weeks, so it would have been doing _very_ heavy I/O for a few days.

Yes, the root volume on i3.metal is exposed as NVMe and is EBS-backed.

EBS is exposed as nvme devices on c5 and m5 as well, which is what I assume otterley is talking about.

The nvme disks are local, not remote EBS. Latency will be the PCI bus.

Interesting - I’ve been using NVMe devices on Linux for a couple years now and never run into this problem. And an I/O timeout of 255 seconds seems really high to begin with. Is there frequently that much latency in the EBS storage backplane? (We also run c5.9xl instances and have not yet experienced the phenomenon you discuss.)

Ubuntu has a specific kernel for AWS, and partners with AWS to optimise the kernel for AWS environments. Part of that is fixing issues exactly like this. That issue was fixed as per the bug that you linked.

Depending on which kernel version you are on, C5 (and M5) instances can be real sources of pain. The disk is exposed as a /dev/nvme* block device, and as such I/O goes through a separate driver. The earlier versions of the driver had a hard limit of 255 seconds before an I/O operation times out. When the timeout triggers, it is treated as a _hard failure_ and the filesystem gets remounted read-only. Meaning: if you have anything that writes intensively to an attached volume, C5/M5 instances are dangerous.
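
The mitigation the EC2 User Guide comment above points to is raising the Linux NVMe driver's I/O timeout. A minimal sketch of inspecting the current value (the sysfs path is standard on Linux; check the guide for the recommended value on your kernel):

```python
# Inspect the Linux NVMe driver's I/O timeout (in seconds). Early driver
# versions capped this at 255 because it was stored in an unsigned char;
# newer kernels accept much larger values for EBS volumes exposed as NVMe.
from pathlib import Path

param = Path("/sys/module/nvme_core/parameters/io_timeout")

if param.exists():
    print("current nvme_core io_timeout:", param.read_text().strip(), "seconds")
else:
    print("nvme_core module not loaded on this system")

# Raising it persistently is usually done with a kernel boot parameter such as
# nvme_core.io_timeout=4294967295 added to the GRUB command line (shown only
# as a comment here, since it changes boot configuration).
```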

Skylake should be more energy efficient and denser (cores/rack) than Broadwell, so it makes sense for them to be cheaper. Also, I wouldn't assume Amazon pays anything close to list price, or even necessarily runs publicly announced parts.

Interesting that Amazon is charging less for Skylake than Broadwell while the list prices for Skylake are higher. I guess Intel really wants you to move to the cloud.

Really happy to see even small versions of these instances having large NICs.

Pricing seems to be about 15% cheaper than C4 with slightly more RAM and the same ECU equivalence

No pricing info on c5 or whether c4 has changed in either the blog post or

The pricing is out: Note that you will need to select Northern Virginia, Oregon or Ireland to see C5 prices. By default (at least for me) the pricing page loads Ohio, which doesn't have C5 yet.

This is based on a cursory examination of [1], so I may be off-base. But Xen appears to rely on the host for many things that can be, and often are, hardware accelerated in KVM. Searching for CONFIG_XEN_PVHVM, it doesn't appear to cover much. [1]

Is that true with the newer modes? [https://wiki.xen.org/wiki/Understanding_the_Virtualization_S...](https://wiki.xen.org/wiki/Understanding_the_Virtualization_Spectrum)

KVM can take better advantage of hardware acceleration than Xen. Xen requires a modified guest, which traps back to the host more frequently.

Intertwined with this announcement of a new instance type is that it uses a non-Xen hypervisor. Has anyone booted one yet? Is it KVM or something from scratch that AWS wrote?

What’s the benefit of KVM over Xen these days?

I am also curious. I found this article which might help a little bit: [https://www.theregister.co.uk/2017/01/30/google_cloud_kicked...](https://www.theregister.co.uk/2017/01/30/google_cloud_kicked_qemu_to_the_kerb_to_harden_kvm/)

It's difficult to make a simple comparison because neither hypervisor is used "out of the box" by EC2. For example, a lot of hypervisor benchmark comparisons focus on I/O performance, but the latest generation EC2 instances offload networking and storage processing to hardware. This is the case both for instances that use Xen and for C5, which uses the new KVM-based hypervisor. Since both hypervisors have very good support for hardware virtualization technology, it becomes more of an architecture decision. Practically speaking, it is a little bit easier to build a very small footprint hypervisor with the core KVM code than Xen.
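
For anyone following along, you can get a hint of which hypervisor your own instance landed on from inside the guest. A rough sketch (the sysfs node is standard on Xen guests; its absence is only a hint, so treat the KVM conclusion as an inference rather than a guarantee):

```python
# On Xen-based EC2 instances /sys/hypervisor/type reads "xen"; on the newer
# KVM-based hypervisor used by C5 the node is typically absent.
from pathlib import Path

hv = Path("/sys/hypervisor/type")

if hv.exists() and hv.read_text().strip():
    print("hypervisor:", hv.read_text().strip())
else:
    print("no Xen node found; likely the KVM-based hypervisor (or bare metal)")
```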

Very interesting. Has anyone noticed any difference in boot times? If they are announcing things like a brand new hypervisor ahead of re:Invent one wonders what they are saving for the main event... is it too much to hope for managed Kubernetes?

" Q. What is the underlying hypervisor on C5 instances? C5 instances use a new EC2 hypervisor that is based on core KVM technology. "

There must be something we can test
Maybe find a new regression
Should we try it on a db shard?
Or exercise some discretion...
instance sizes aren’t powers of two
The discount's just 15 percent
We can hand wave that away
It shouldn’t cause a downtime event
Let’s be adults here I say...
Everything xen, everything xen
I don't think so
Everything xen, everything xen
I don't think so
— Bush, Everything Xen

It's based on KVM: (sorry for crappy link, can't figure out how to link directly to the appropriate question. Search for "What is the underlying hypervisor on C5 instances" question).

C5 2xl!

Similar Instances to c5.metal

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c5.metal instance information page, please let us know.
