AWS c5.2xlarge EC2 Instance

Compute-optimized instance with 8 vCPUs and 16 GiB memory. Suitable for gaming servers, scientific modeling, and batch processing.

Pricing of c5.2xlarge

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A

Pricing data for this instance is coming soon.
Spot Pricing Details for c5.2xlarge

Here are the latest prices for this instance across this region:

Availability Zone    Current Spot Price (USD)
(no spot price data is currently shown)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
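
Since no spot prices are shown above, one way to look up the current values yourself is the EC2 API. A minimal sketch using boto3 (the region and product description are assumptions; AWS credentials must already be configured):

```python
from datetime import datetime, timezone

import boto3

# Fetch the most recent spot price for c5.2xlarge in each Availability Zone
# of one region. Region and product description are illustrative choices.
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c5.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc),   # "now" returns the latest price per AZ
    MaxResults=20,
)

for entry in resp["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"])
```
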
Compute features of c5.2xlarge

Feature    Specification
Storage features of c5.2xlarge

Feature    Specification
Networking features of c5.2xlarge

Feature    Specification
Operating Systems Supported by c5.2xlarge

Operating System    Supported
Security features of c5.2xlarge

Feature    Supported
General Information about c5.2xlarge

Feature    Specification
Benchmark Test Results for c5.2xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024 Block Size, 3 threads)
AES-128 CBC             482.4 MB
AES-256 CBC             342.3 MB
MD5                     1.9 GB
SHA256                  1.3 GB
SHA512                  1.7 GB
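
Numbers like these typically come from OpenSSL's built-in benchmark. A rough sketch of how a similar measurement could be reproduced (assuming the openssl CLI is installed; the exact flags Cloud Mercato used are an assumption):

```python
import subprocess

# Hypothetical reproduction of an encryption-speed benchmark similar to the
# table above: 3 parallel processes, EVP-accelerated AES-128-CBC, wall-clock timing.
result = subprocess.run(
    ["openssl", "speed", "-elapsed", "-multi", "3", "-evp", "aes-128-cbc"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # throughput per block size, including 1024-byte blocks
```
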
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

Metric     Read (IOPS)    Write (IOPS)
Average    16544          16533
Min        16494          16498

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
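
As a rough illustration, a random-read test with those parameters might be driven like this (a sketch, assuming fio is installed and /dev/nvme1n1 is a scratch device you can safely overwrite; Cloud Mercato's exact job definition is not published here):

```python
import json
import subprocess

# Hypothetical fio invocation mirroring the parameters described above:
# 4K blocks, random access, direct I/O (bypassing cache), raw device (no filesystem).
# /dev/nvme1n1 is an assumed scratch device -- do NOT point this at a disk with data.
cmd = [
    "fio", "--name=randread", "--filename=/dev/nvme1n1",
    "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
    "--iodepth=32", "--runtime=60", "--time_based", "--output-format=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
iops = json.loads(out.stdout)["jobs"][0]["read"]["iops"]
print(f"random read IOPS: {iops:.0f}")
```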

Community Insights for c5.2xlarge
Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.


Well, after trying practically all of them, I believe the "C" series is the best, as most of our server issues arise from the CPU getting overloaded.

The M5/C5 instances do not use burstable credits. For example, a 4 vCPU instance in C5 can run 24x7 at 100% CPU while a T3 4 vCPU can only use 30% of the CPU over the same period before it is throttled.
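
As a back-of-the-envelope illustration of that difference (the 30% baseline figure is the commenter's; treat the numbers as assumptions, not published specs):

```python
# Sustained CPU available per day, comparing a fixed-performance C5 with a
# burstable T3 at an assumed 30% average utilization ceiling (per the comment above).
VCPUS = 4
HOURS = 24

c5_cpu_hours = VCPUS * HOURS * 1.00   # C5: 100% on all vCPUs, all day
t3_cpu_hours = VCPUS * HOURS * 0.30   # T3: ~30% average before throttling

print(f"C5 sustained vCPU-hours/day: {c5_cpu_hours:.0f}")   # 96
print(f"T3 sustained vCPU-hours/day: {t3_cpu_hours:.1f}")   # 28.8
```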

C5 2xl!

Ah, I'm having the same problem! Which C series did you pick?

Both of these are able to cope with various CSV formats, whether headers are included or not, and to read them quickly.

If limited transformation of the input is required then you might use something like Python. Otherwise, I would say use a product intended for data analysis like R or pandas.

What are you doing with this large CSV file anyway? I do hope you are not building some datastructure incrementally after every line read. Are you buffering appropriately? Which other parts of your application can continue running without waiting for this CSV input? Have you considered some sort of non-blocking I/O?

If you know the characteristics of the csv file exactly, and you are prepared to invest the necessary time, then you would probably get the best results using, say, C++.

Now, are you reading from a spinning disk or a solid state disk? How big a chunk of data can your machine gulp at once? Which filesystem do you even have on the disk? Could the file be fragmented? (welcome to the world of disk seeks!) Wait, are you even reading the file over a network? How fast is the network?
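
For what it's worth, a minimal sketch of the buffered, chunked approach those replies hint at (assuming pandas and a hypothetical data.csv):

```python
import pandas as pd

# Read a large CSV in fixed-size chunks instead of building one huge
# in-memory structure line by line; "data.csv" is a hypothetical file.
total_rows = 0
for chunk in pd.read_csv("data.csv", chunksize=100_000):
    total_rows += len(chunk)        # replace with real per-chunk processing

print(f"rows processed: {total_rows}")
```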

It is also not recommended to use legacy or previous-generation instance types: the latest generation of instance types provides more performance at the same or lower cost.

You will need to find a balance between all these options and costs. Higher IOPS costs more money, and increasing memory or CPU units will also cost additional money.

You need to answer the following questions in order to choose the instance type: 1. What are the storage needs? 2. What are the security needs? 3. What will be the growth rate? 4. What will be the approximate flow of traffic? 5. What are the needs of the application you are hosting on the instance, i.e. is it compute-, memory-, or I/O-intensive? 6. What is the environment for which the instance is required?

The speed of reading a "large CSV file" would depend more on your hardware than the programming language.

You need to understand your application's requirements (CPU or memory) in order to choose an instance type from the memory-optimized or compute-optimized series. Monitor your instance performance and increase/decrease the instance size within the series according to your measurements (the AWS EC2 console also shows when an instance is overprovisioned or underprovisioned).
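
One way to take those measurements is straight from CloudWatch. A minimal sketch using boto3 (the instance ID and region are hypothetical placeholders):

```python
from datetime import datetime, timedelta, timezone

import boto3

# Average CPUUtilization for one instance over the last 24 hours.
# "i-0123456789abcdef0" and the region are placeholders -- substitute your own.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,                 # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```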

Disk: in most cases you will need a high-IOPS disk for a web application. EBS storage types give you some options here to increase IOPS, but in some cases you will need instance store volumes.

Why disk is important: if you choose a low-IOPS volume, your application can queue up requests to the disk and will constantly eat additional CPU. The application will also be slow because of the low volume performance.

I want to note that AWS does not monitor memory in CloudWatch, so you will need to implement some additional monitoring here.
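
The usual answer is the CloudWatch agent, but as a rough sketch of the idea, you could publish a custom memory metric yourself (boto3 assumed; the namespace and metric name below are made up for illustration):

```python
import boto3

# Read memory usage from /proc/meminfo (Linux) and publish it as a custom
# CloudWatch metric. "Custom/Memory" and "MemoryUsedPercent" are hypothetical names.
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.split()[0])   # values are in kB

used_pct = 100.0 * (1 - meminfo["MemAvailable"] / meminfo["MemTotal"])

boto3.client("cloudwatch", region_name="us-east-1").put_metric_data(
    Namespace="Custom/Memory",
    MetricData=[{
        "MetricName": "MemoryUsedPercent",
        "Value": used_pct,
        "Unit": "Percent",
    }],
)
print(f"published memory usage: {used_pct:.1f}%")
```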

Second, try to put your backend behind a CDN: use AWS CloudFront for both dynamic and static content. Due to its nature, CloudFront will increase content delivery speed and reduce the load on your instance. You will also be able to use CloudFront with AWS Shield (and WAF) to protect your app from attacks.

You should think about the system as a whole, not only about single-backend performance. First, consider using horizontal scaling for your application: split traffic between several instances (in different Availability Zones to increase the availability of your application), add instances when load grows, and remove them when it falls.

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

I've just tried to benchmark, but I'm not sure how. I tried using dd to measure sequential speed and this was giving pretty good results on Google Cloud: ... But I'm perplexed by why it cuts off at 64 GB. On Amazon I couldn't figure out how to get decent speeds at all. Comparable to my Google Cloud box, I set up an i3.xlarge, which had 4 CPUs and 1 SSD. While the dd operation was working fine, it slowed down to a paltry 430 MB/s quickly and never recovered. I doubt this is the true sequential performance of the drive, so I gave up.

GCE local SSD was the fastest thing around when first released in 2014, but hasn't improved and has since fallen behind. GCE local SSD 4k read IOPs max out at around 200k/volume and 800k/instance w/4 volumes. I haven't measured c5 local nvme yet, but on i3, local nvme volumes run at 100-400k/volume and up to 3.3 million IOPs on the largest instance type (i3.16xl).

For seq read/write I've observed up to around 15000/2900 MB/s on i3.16xl and 2750/1400 MB/s on GCE w/4xlocal SSD

According to the bug here: It was actually filed about EBS block disks using NVME (on these new instances, there is a hardware card that presents network EBS volumes as a PCI-E NVME device). In certain failure cases since this is a network block store, they can fail for some period of time exceeding this timeout.

Sorry for that. The timeout behavior on earlier kernels is a bit of a pain. There's a lot to love about NVMe and timeouts are not actually part of the NVMe specification itself but rather a Linux driver construct. Unfortunately, early versions of the driver used an unsigned char for the timeout value and also have a pretty short timeout for network-based storage.

Ah yeah, the article was about local NVMe, so the concern is probably not relevant here.

I am curious, what kind of write characteristics can manage to saturate a 255s timeout on a storage device that does 10k+ IOPS and gigabytes per second of throughput? Normally writes slowing down leads to backpressure because the syscalls issuing them take longer to return.

The EC2 User Guide includes documentation on how to avoid these issues:

Oh yes we did. To make sure we didn't just suffer from a single fluke, we read through docs and changed the value to 255 after the first time. Then waited. Didn't have to wait for long, the thing broke again in less than 3 weeks. The workload was a pretty aggressively tuned prometheus. At that point we went into compaction after two weeks, so it would have been doing _very_ heavy I/O for a few days.

Yes, the root volume on i3.metal is exposed as NVMe and is EBS-backed.

EBS is exposed as nvme devices on c5 and m5 as well, which is what I assume otterley is talking about.

The nvme disks are local, not remote EBS. Latency will be the PCI bus.

Interesting - I’ve been using NVMe devices on Linux for a couple years now and never run into this problem. And an I/O timeout of 255 seconds seems really high to begin with. Is there frequently that much latency in the EBS storage backplane? (We also run c5.9xl instances and have not yet experienced the phenomenon you discuss.)

Ubuntu has a specific kernel for AWS, and partners with AWS to optimise the kernel for AWS environments. Part of that is fixing issues exactly like this. That issue was fixed as per the bug that you linked.

Depending on which kernel version you are on, C5 (and M5) instances can be a real source of pain. The disk is exposed as a /dev/nvme* block device, and as such I/O goes through a separate driver. Earlier versions of the driver had a hard limit of 255 seconds before an I/O operation times out. When the timeout triggers, it is treated as a _hard failure_ and the filesystem gets remounted read-only. Meaning: if you have anything that writes intensively to an attached volume, C5/M5 instances are dangerous.
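
The usual mitigation, per the EC2 documentation mentioned above, is to raise the nvme_core I/O timeout. A minimal sketch for checking and (with root) raising the current value via sysfs; the specific value below follows AWS's guidance for newer kernels and should be treated as an example:

```python
from pathlib import Path

# nvme_core exposes its I/O timeout (in seconds) through sysfs on Linux.
TIMEOUT_PARAM = Path("/sys/module/nvme_core/parameters/io_timeout")

current = int(TIMEOUT_PARAM.read_text())
print(f"current NVMe I/O timeout: {current}s")

# Raising it requires root; 4294967295 (max uint32) is the value suggested in
# AWS's EBS/NVMe guidance for newer kernels (older kernels cap this at 255).
if current < 4294967295:
    try:
        TIMEOUT_PARAM.write_text("4294967295")
        print("timeout raised")
    except OSError:
        print("run as root, or set nvme_core.io_timeout on the kernel command line")
```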

Skylake should be more energy efficient and denser (cores/rack) than Broadwell, so it makes sense for them to be cheaper. Also, I wouldn't assume Amazon pays anything close to list price, or even necessarily runs publicly announced parts.

Interesting that Amazon is charging less for Skylake than Broadwell while the list prices for Skylake are higher. I guess Intel really wants you to move to the cloud.

Really happy to see even small versions of these instances having large NICs.

Pricing seems to be about 15% cheaper than C4 with slightly more RAM and the same ECU equivalence

No pricing info on c5 or whether c4 has changed in either the blog post or

The pricing is out: note that you will need to select Northern Virginia, Oregon or Ireland to see C5 prices. By default (at least for me) the pricing page loads Ohio, which doesn't have C5 yet.

This is based on a cursory examination of [1], so I may be off-base. But Xen appears to rely on the host for many things that can be, and often are, hardware accelerated in KVM. Searching for CONFIG_XEN_PVHVM, it doesn't appear to cover much. [1]

Is that true with the newer modes? [https://wiki.xen.org/wiki/Understanding_the_Virtualization_S...](https://wiki.xen.org/wiki/Understanding_the_Virtualization_Spectrum)

KVM can take better advantage of hardware acceleration than Xen. Xen requires a modified guest, which traps back to the host more frequently.

Intertwined with this announcement of a new instance type is that it uses a non-Xen hypervisor. Has anyone booted one yet? Is it KVM or something from scratch that AWS wrote?

What’s the benefit of KVM over Xen these days?

I am also curious. I found this article which might help a little bit: [https://www.theregister.co.uk/2017/01/30/google_cloud_kicked...](https://www.theregister.co.uk/2017/01/30/google_cloud_kicked_qemu_to_the_kerb_to_harden_kvm/)

It's difficult to make a simple comparison because neither hypervisor is used "out of the box" by EC2. For example, a lot of hypervisor benchmark comparisons focus on I/O performance, but the latest-generation EC2 instances offload networking and storage processing to hardware. This is the case both for instances that use Xen and for C5, which uses the new KVM-based hypervisor. Since both hypervisors have very good support for hardware virtualization technology, it becomes more of an architecture decision. Practically speaking, it is a little bit easier to build a very small footprint hypervisor with the core KVM code than with Xen.

Very interesting. Has anyone noticed any difference in boot times? If they are announcing things like a brand new hypervisor ahead of re:Invent one wonders what they are saving for the main event... is it too much to hope for managed Kubernetes?

" Q. What is the underlying hypervisor on C5 instances? C5 instances use a new EC2 hypervisor that is based on core KVM technology. "

There must be something we can test
Maybe find a new regression
Should we try it on a db shard?
Or exercise some discretion...
instance sizes aren’t powers of two
The discount's just 15 percent
We can hand wave that away
It shouldn’t cause a downtime event
Let’s be adults here I say...
Everything xen, everything xen
I don't think so
Everything xen, everything xen
I don't think so
— Bush, Everything Xen

It's based on KVM: (sorry for crappy link, can't figure out how to link directly to the appropriate question. Search for "What is the underlying hypervisor on C5 instances" question).

How many hours a day does it need to be powered on for? Not part of the answer but another train of thought.

It depends; if there are more C5 instances running, the Savings Plan will prioritize the instances with the highest savings amount, which may not include that specific instance.

YES, if you buy that EC2 Savings Plan (SP) for C5 within the region where your instance is running, with an hourly commitment of $0.15, then it will cover your running instance and you'll effectively be paying the discounted rate instead.
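
As a rough, purely illustrative calculation of how that coverage works (the rates below are hypothetical, since this page shows no pricing for c5.2xlarge, and the model ignores that commitments apply account-wide rather than per instance):

```python
def hourly_cost(commitment, sp_rate, od_rate):
    """Cost of one instance-hour under an EC2 Savings Plan commitment.
    Usage is priced at the Savings Plan rate up to the commitment; any
    uncovered remainder of the hour is billed at the on-demand rate."""
    covered_fraction = min(1.0, commitment / sp_rate)
    return commitment + (1.0 - covered_fraction) * od_rate

# Hypothetical rates -- NOT actual c5.2xlarge pricing.
print(hourly_cost(commitment=0.15, sp_rate=0.14, od_rate=0.20))  # fully covered: pay the $0.15 commitment
print(hourly_cost(commitment=0.15, sp_rate=0.21, od_rate=0.34))  # partial coverage: remainder billed on-demand
```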

c5 is a compute-optimized instance type and m5 is a general-purpose instance type.
