AWS c5.large (EC2 Instance)

Compute-optimized instance with 2 vCPUs and 4 GiB memory. Powered by 3.0 GHz Intel Xeon Platinum processors for compute-intensive workloads.
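
If you want to confirm these specifications programmatically rather than rely on a static page, the EC2 DescribeInstanceTypes API returns them directly. A minimal boto3 sketch (assumes AWS credentials and a region are already configured; the region below is only an example):

```python
import boto3

# Query EC2 for the published hardware specs of c5.large.
ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
resp = ec2.describe_instance_types(InstanceTypes=["c5.large"])
info = resp["InstanceTypes"][0]

print("vCPUs:     ", info["VCpuInfo"]["DefaultVCpus"])            # expect 2
print("Memory MiB:", info["MemoryInfo"]["SizeInMiB"])             # expect 4096
print("Clock GHz: ", info["ProcessorInfo"].get("SustainedClockSpeedInGhz"))
```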

Pricing of c5.large

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A
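
When the prices are populated, the discount column is simply the relative difference from the On Demand rate. A small illustrative calculation (the hourly rates below are placeholders, not current AWS prices):

```python
def discount_vs_on_demand(on_demand_hourly: float, other_hourly: float) -> float:
    """Percent discount of a pricing model relative to On Demand."""
    return (1 - other_hourly / on_demand_hourly) * 100

# Placeholder numbers purely for illustration.
on_demand = 0.085
for label, price in [("Spot", 0.034), ("1 Yr Reserved", 0.054), ("3 Yr Reserved", 0.037)]:
    print(f"{label}: {discount_vs_on_demand(on_demand, price):.1f}% cheaper than On Demand")
```
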
Spot Pricing Details for c5.large

Here are the latest prices for this instance across this region:

Availability Zone    Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
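
To pull live per-AZ spot prices yourself instead of relying on the snapshot above, you can query the EC2 DescribeSpotPriceHistory API. A boto3 sketch (assumes credentials are configured; it keeps the most recent record per Availability Zone):

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
resp = ec2.describe_spot_price_history(
    InstanceTypes=["c5.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)

# Keep only the newest price seen for each Availability Zone.
latest = {}
for rec in resp["SpotPriceHistory"]:
    az = rec["AvailabilityZone"]
    if az not in latest or rec["Timestamp"] > latest[az]["Timestamp"]:
        latest[az] = rec

for az, rec in sorted(latest.items()):
    print(f"{az}: ${float(rec['SpotPrice']):.4f}/hr")
```
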
Compute features of c5.large

Feature    Specification

Storage features of c5.large

Feature    Specification

Networking features of c5.large

Feature    Specification

Operating Systems Supported by c5.large

Operating System    Supported

Security features of c5.large

Feature    Supported

General Information about c5.large

Feature    Specification
Benchmark Test Results for c5.large

CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024-byte blocks, 3 threads)
AES-128 CBC             156.5 MB
AES-256 CBC             111.1 MB
MD5                     1.0 GB
SHA256                  436.9 MB
SHA512                  582.0 MB
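
Cloud Mercato's harness isn't reproduced here, but a rough point of comparison for the hash rows can be obtained with a few lines of Python. This single-threaded hashlib timing is only an approximation of that kind of test, not the benchmark above:

```python
import hashlib
import time

def hash_throughput_mb_s(algo: str = "sha256", total_mb: int = 256, block: int = 1024) -> float:
    """Feed `total_mb` of zero bytes through `algo` in `block`-byte chunks."""
    data = b"\x00" * block
    h = hashlib.new(algo)
    n_blocks = total_mb * 1024 * 1024 // block
    start = time.perf_counter()
    for _ in range(n_blocks):
        h.update(data)
    return total_mb / (time.perf_counter() - start)

for algo in ("md5", "sha256", "sha512"):
    print(f"{algo}: {hash_throughput_mb_s(algo):.1f} MB/s")
```
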
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

IOPS        Read    Write
Average     1016    1016
Min         1014    1012

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access to the root volume), and avoidance of cache and buffers.
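
As a rough illustration of a run with those parameters, the sketch below shells out to fio against a scratch file (a read-only workload so it is safe to run; the `/tmp/fio-test` path and 30-second runtime are arbitrary choices, and fio must already be installed):

```python
import json
import subprocess

# 4K blocks, random access, direct I/O to bypass cache/buffers, JSON output.
cmd = [
    "fio", "--name=randread-4k", "--filename=/tmp/fio-test", "--size=1G",
    "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
    "--iodepth=32", "--runtime=30", "--time_based", "--output-format=json",
]
result = json.loads(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
print("read IOPS:", round(result["jobs"][0]["read"]["iops"]))
```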

Community Insights for c5.large
AI-summarized insights

In our example, consuming `7.53 instance-months` means we should buy a maximum of `7` `c5.large` EC2 instances in `us-east-1`.

Concurrency Labs Ltd
19-03-2025
cost_savings

Calculate Baseline EC2 Instance Hours per Month

Concurrency Labs Ltd
19-03-2025
cost_savings
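
The arithmetic behind that insight is straightforward: divide the baseline instance-hours consumed per month by the hours in a month to get instance-months, then round down to size a Reserved Instance purchase. A small sketch (730 hours/month is the usual AWS averaging convention; the hour figure below is chosen to match the quoted 7.53 instance-months):

```python
import math

HOURS_PER_MONTH = 730  # AWS's usual monthly averaging convention

def max_instances_to_reserve(baseline_instance_hours_per_month: float) -> int:
    instance_months = baseline_instance_hours_per_month / HOURS_PER_MONTH
    return math.floor(instance_months)

# Example: ~5497 instance-hours/month is roughly 7.53 instance-months -> buy at most 7.
print(max_instances_to_reserve(5497))  # 7
```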

Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.

2017-07-11
benchmarking

c5.large instance: Slightly better performance than c5a.large, but still hitting the CPU cap.

2020-02-10
benchmarking

My question is: would a t3.medium be faster than a t3.small?

2020-01-10
benchmarking

Tried the c5a.large instance. It had very similar CPU performance to a t3.small instance. Still hitting the 100% utilization cap.

2020-02-10
benchmarking

Is there any other documentation that compares performance between different generations (c5/c6/c7) and processor types (c6i vs c6g, for example)? The on-demand pricing per hour for a c6i.large (or a c5.large) instance is 25% more than for a c6g.large, which is not insignificant. And why does documentation for the AWS FGT 7.0 recommend c6i or c6g, whereas 7.2 and 7.4 recommend only c5?

The M5/C5 instances do not use burstable credits. For example, a 4 vCPU instance in C5 can run 24x7 at 100% CPU while a T3 4 vCPU can only use 30% of the CPU over the same period before it is throttled.
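
The contrast in that comment comes from the T-series credit model: a T instance earns a fixed number of CPU credits per hour, and one credit is one vCPU at 100% for one minute, so the sustainable baseline is credits earned divided by what a fully loaded instance would burn. A sketch of that relationship (the credit rate varies by T-series size, so the exact baseline percentage depends on which instance the commenter had in mind; the 96 credits/hour below is only an example):

```python
def baseline_utilization_pct(credits_per_hour: float, vcpus: int) -> float:
    """Sustainable average CPU% across all vCPUs for a burstable instance.

    One CPU credit = one vCPU at 100% for one minute, so a fully loaded
    instance burns vcpus * 60 credits per hour.
    """
    return credits_per_hour / (vcpus * 60) * 100

# Example: a 4-vCPU burstable instance earning 96 credits/hour could sustain
# about 40% average utilization before it starts to throttle.
print(f"{baseline_utilization_pct(96, 4):.0f}%")
```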

Just checking in: we are using 2 x c5.large for IHF and also HEC, and there are TA-AWS and TA-gcp running too. Daily ingesting is something like 150GB. You should remember that if this is your 1st full Splunk instance after the UF, then you must add those props and transforms there, not on the indexers, to take those into use!

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

when I run `scp` to copy a large local zip file (~800MB) to the instance, the upload speed is ~160KB/s.

I've just tried to benchmark, but I'm not sure how. I tried using dd to measure sequential speed and this was giving pretty good results on Google Cloud: ... But I'm perplexed by why it cuts off at 64 GB. On Amazon I couldn't figure out how to get decent speeds at all. Comparable to my Google Cloud box, I set up an i3.xlarge, which had 4 CPUs and 1 SSD. While the dd operation was working fine, it slowed down to a paltry 430 MB/s quickly and never recovered. I doubt this is the true sequential performance of the drive, so I gave up.

GCE local SSD was the fastest thing around when first released in 2014, but hasn't improved and has since fallen behind. GCE local SSD 4k read IOPs max out at around 200k/volume and 800k/instance w/4 volumes. I haven't measured c5 local nvme yet, but on i3, local nvme volumes run at 100-400k/volume and up to 3.3 million IOPs on the largest instance type (i3.16xl).

For seq read/write I've observed up to around 15000/2900 MB/s on i3.16xl and 2750/1400 MB/s on GCE w/4xlocal SSD

According to the bug here: It was actually filed about EBS block disks using NVME (on these new instances, there is a hardware card that presents network EBS volumes as a PCI-E NVME device). In certain failure cases since this is a network block store, they can fail for some period of time exceeding this timeout.

Sorry for that. The timeout behavior on earlier kernels is a bit of a pain. There's a lot to love about NVMe and timeouts are not actually part of the NVMe specification itself but rather a Linux driver construct. Unfortunately, early versions of the driver used an unsigned char for the timeout value and also have a pretty short timeout for network-based storage.

Ah yeah, the article was about local NVMe, so the concern is probably not relevant here.

I am curious, what kind of write characteristics can manage to saturate a 255s timeout on a storage device that does 10k+ IOPS and gigabytes per second of throughput? Normally writes slowing down leads to backpressure because the syscalls issuing them take longer to return.

The EC2 User Guide includes documentation on how to avoid these issues:

Oh yes we did. To make sure we didn't just suffer from a single fluke, we read through docs and changed the value to 255 after the first time. Then waited. Didn't have to wait for long, the thing broke again in less than 3 weeks. The workload was a pretty aggressively tuned prometheus. At that point we went into compaction after two weeks, so it would have been doing _very_ heavy I/O for a few days.

Yes, the root volume on i3.metal is exposed as NVMe and is EBS-backed.

EBS is exposed as nvme devices on c5 and m5 as well, which is what I assume otterley is talking about.

The nvme disks are local, not remote EBS. Latency will be the PCI bus.

Interesting - I’ve been using NVMe devices on Linux for a couple years now and never run into this problem. And an I/O timeout of 255 seconds seems really high to begin with. Is there frequently that much latency in the EBS storage backplane? (We also run c5.9xl instances and have not yet experienced the phenomenon you discuss.)

Ubuntu has a specific kernel for AWS, and partners with AWS to optimise the kernel for AWS environments. Part of that is fixing issues exactly like this. That issue was fixed as per the bug that you linked.

Depending on which kernel version you are on, C5 (and M5) instances can be real sources of pain. The disk is exposed as a /dev/nvme* block device, and as such I/O goes through a separate driver. The earlier versions of the driver had a hard limit of 255 seconds before an I/O operation times out. When the timeout triggers, it is treated as a _hard failure_ and the filesystem gets remounted read-only. Meaning: if you have anything that writes intensively to an attached volume, C5/M5 instances are dangerous.
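
If you are on an NVMe-exposed instance and want to know what timeout your kernel is actually using, the value is visible through sysfs. A quick check (assumes Linux with the nvme_core module loaded; 4294967295 is the ceiling the EC2 User Guide suggests setting via `nvme_core.io_timeout` on the kernel command line):

```python
from pathlib import Path

TIMEOUT = Path("/sys/module/nvme_core/parameters/io_timeout")

def nvme_io_timeout_seconds():
    """Return the NVMe I/O timeout, or None if nvme_core isn't loaded."""
    if not TIMEOUT.exists():
        return None
    return int(TIMEOUT.read_text().strip())

t = nvme_io_timeout_seconds()
if t is None:
    print("nvme_core not loaded (probably not an NVMe-backed instance).")
elif t < 4294967295:
    print(f"io_timeout is {t}s; consider raising nvme_core.io_timeout at boot.")
else:
    print(f"io_timeout already at the maximum ({t}s).")
```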

Skylake should be more energy efficient and denser (cores/rack) than Broadwell, so it makes sense for them to be cheaper. Also, I wouldn't assume Amazon pays anything close to list price, or even necessarily runs publicly announced parts.

Interesting that Amazon is charging less for Skylake than Broadwell while the list prices for Skylake are higher. I guess Intel really wants you to move to the cloud.

Really happy to see even small versions of these instances having large NICs.

Pricing seems to be about 15% cheaper than C4 with slightly more RAM and the same ECU equivalence

No pricing info on c5 or whether c4 has changed in either the blog post or

The pricing is out: Note, that you will need to select Northern Virginia, Oregon or Ireland to see C5 prices. By default (at least for me) the pricing page loads Ohio, which doesn't have C5 yet.

This is based on a cursory examination of [1], so I may be off-base. But Xen appears to rely on the host for many things that can be, and often are, hardware accelerated in KVM. Searching for CONFIG_XEN_PVHVM, it doesn't appear to cover much. [1]

Is that true with the newer modes? [https://wiki.xen.org/wiki/Understanding_the_Virtualization_S...](https://wiki.xen.org/wiki/Understanding_the_Virtualization_Spectrum)

KVM can take better advantage of hardware acceleration than Xen. Xen requires a modified guest, which traps back to the host more frequently.

Intertwined with this announcement of a new instance type is that it uses a non-Xen hypervisor. Has anyone booted one yet? Is it KVM or something from scratch that AWS wrote?

What’s the benefit of KVM over Xen these days?

I am also curious. I found this article which might help a little bit: [https://www.theregister.co.uk/2017/01/30/google_cloud_kicked...](https://www.theregister.co.uk/2017/01/30/google_cloud_kicked_qemu_to_the_kerb_to_harden_kvm/)

It's difficult to make a simple comparison because neither hypervisor is used "out of the box" by EC2. For example, a lot of hypervisor benchmark comparisons focus on I/O performance, but the latest generation EC2 instances offload networking and storage processing to hardware. This is the case both for instances that use Xen and for C5, which uses the new KVM-based hypervisor. Since both hypervisors have very good support for hardware virtualization technology, it becomes more of an architecture decision. Practically speaking, it is a little bit easier to build a very small footprint hypervisor with the core KVM code than with Xen.

Very interesting. Has anyone noticed any difference in boot times? If they are announcing things like a brand new hypervisor ahead of re:Invent one wonders what they are saving for the main event... is it too much to hope for managed Kubernetes?

" Q. What is the underlying hypervisor on C5 instances? C5 instances use a new EC2 hypervisor that is based on core KVM technology. "

There must be something we can test
Maybe find a new regression
Should we try it on a db shard?
Or exercise some discretion...
Instance sizes aren't powers of two
The discount's just 15 percent
We can hand wave that away
It shouldn't cause a downtime event
Let's be adults here I say...
Everything xen, everything xen
I don't think so
Everything xen, everything xen
I don't think so
-- Bush, Everything Xen

It's based on KVM: (sorry for crappy link, can't figure out how to link directly to the appropriate question. Search for "What is the underlying hypervisor on C5 instances" question).
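
A simple way to see which hypervisor generation you landed on from inside the guest: Xen-based instance types expose /sys/hypervisor/type, while Nitro (KVM-based) types such as C5 generally do not and instead report their vendor through DMI. A rough sketch (the paths are standard Linux sysfs locations, but treat the heuristic as best-effort):

```python
from pathlib import Path

def guess_ec2_hypervisor() -> str:
    hv_type = Path("/sys/hypervisor/type")
    if hv_type.exists():
        return hv_type.read_text().strip()          # "xen" on Xen-based types
    vendor = Path("/sys/devices/virtual/dmi/id/sys_vendor")
    if vendor.exists():
        return f"nitro/kvm (sys_vendor: {vendor.read_text().strip()})"
    return "unknown"

print(guess_ec2_hypervisor())
```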

C5 2xl!

Just to add to this thread, I have been running two instances for hosting a lot of websites and email and they are using c5xlarge and c5large.

I am going to update them to c6g.xlarge and c6g.large respectively for cheaper prices and better performance.

Similar Instances to c5.large

Consider these:
