c5a.xlarge - AWS EC2 Instance

AMD-based compute-optimized instance with 4 vCPUs and 8 GiB of memory. Cost-effective compute performance for CPU-intensive workloads.

Coming Soon...

Pricing of c5a.xlarge

Pricing Model    Price (USD)    % Discount vs On Demand
On Demand        N/A            n/a
Spot             N/A            n/a
1 Yr Reserved    N/A            n/a
3 Yr Reserved    N/A            n/a
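Once prices are published, the "% Discount vs On Demand" column is simply the relative saving against the On Demand hourly rate. A minimal sketch of that calculation (the prices below are placeholders for illustration, not actual c5a.xlarge rates):

```python
def discount_vs_on_demand(on_demand_hourly: float, other_hourly: float) -> float:
    """Percent saved relative to the On Demand hourly rate."""
    if on_demand_hourly <= 0:
        raise ValueError("On Demand price must be positive")
    return (1 - other_hourly / on_demand_hourly) * 100

# Placeholder prices for illustration only (USD/hour):
on_demand = 0.154
spot = 0.060
print(f"{discount_vs_on_demand(on_demand, spot):.1f}% cheaper than On Demand")
# → 61.0% cheaper than On Demand
```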
Spot Pricing Details for c5a.xlarge

Here are the latest spot prices for this instance across availability zones in this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
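Per-AZ spot prices like these can also be retrieved programmatically; AWS exposes them through the EC2 DescribeSpotPriceHistory API (in boto3, `describe_spot_price_history`). A sketch of picking the cheapest zone from such a response; the dict below mimics the documented response shape (note `SpotPrice` comes back as a string), and the zones and prices are made-up placeholders:

```python
# Sketch: pick the cheapest AZ from a spot-price-history style response.
# Prices and zones are illustrative placeholders, not real c5a.xlarge data.
sample_response = {
    "SpotPriceHistory": [
        {"AvailabilityZone": "us-east-1a", "SpotPrice": "0.0671"},
        {"AvailabilityZone": "us-east-1b", "SpotPrice": "0.0613"},
        {"AvailabilityZone": "us-east-1c", "SpotPrice": "0.0702"},
    ]
}

def cheapest_zone(response: dict) -> tuple[str, float]:
    """Return (availability_zone, price) with the lowest current spot price."""
    best = min(response["SpotPriceHistory"], key=lambda e: float(e["SpotPrice"]))
    return best["AvailabilityZone"], float(best["SpotPrice"])

zone, price = cheapest_zone(sample_response)
print(f"{zone}: ${price:.4f}/hr")  # → us-east-1b: $0.0613/hr
```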
Compute features of c5a.xlarge

Feature    Specification

Storage features of c5a.xlarge

Feature    Specification

Networking features of c5a.xlarge

Feature    Specification

Operating Systems Supported by c5a.xlarge

Operating System    Supported

Security features of c5a.xlarge

Feature    Supported

General Information about c5a.xlarge

Feature    Specification
Benchmark Test Results for c5a.xlarge

CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024-byte block size, 3 threads)
AES-128 CBC             846.9 MB/s
AES-256 CBC             619.6 MB/s
MD5                     2.0 GB/s
SHA256                  5.4 GB/s
SHA512                  1.4 GB/s
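Numbers in this style can be reproduced with OpenSSL's built-in benchmark. A sketch of one such run; exact flag availability varies a little across OpenSSL versions, so treat the invocation as an assumption to verify against your installed version:

```shell
# Benchmark AES-128-CBC at a 1024-byte block size across 3 processes,
# roughly matching the "1024-byte block size, 3 threads" column above.
# -elapsed uses wall-clock time; -evp selects the EVP cipher interface.
openssl speed -elapsed -evp aes-128-cbc -bytes 1024 -multi 3
```

Throughput is reported in bytes processed per second for the requested block size.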
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results (IOPS):

             Read     Write
Max          3100     3099
Average      3096     3093
Deviation    3.62     5.27
Min          3085     3082

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, and no filesystem (except for write access with the root volume), with cache and buffer avoided.
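The parameters described translate naturally into a fio job file. A sketch under those assumptions; the device path, queue depth, and runtime are illustrative placeholders, not Cloud Mercato's exact configuration:

```ini
; rand-iops.fio -- approximate the methodology above:
; 4K blocks, random access, direct I/O (bypassing cache and buffer),
; raw block device with no filesystem. Run with: fio rand-iops.fio
[global]
bs=4k
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based=1
; placeholder device; adjust to the volume under test
filename=/dev/nvme1n1

[randread]
rw=randread

[randwrite]
rw=randwrite
```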

Community Insights for c5a.xlarge
AI-summarized insights

Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.

2017-07-11 00:00:00
benchmarking

Over the past few days, I’ve been analyzing the AWS EC2 nodes currently running across multiple Kubernetes clusters in different regions, with various instance types. Here are the memory capacity differences I’ve found:

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

I've just tried to benchmark, but I'm not sure how. I tried using dd to measure sequential speed and this was giving pretty good results on Google Cloud: ... But I'm perplexed by why it cuts off at 64 GB. On Amazon I couldn't figure out how to get decent speeds at all. Comparable to my Google Cloud box, I set up an i3.xlarge, which had 4 CPUs and 1 SSD. While the dd operation was working fine, it slowed down to a paltry 430 MB/s quickly and never recovered. I doubt this is the true sequential performance of the drive, so I gave up.

Ah, I'm having the same problem! Which C series did you pick?

Do metal instances launch at the same time with Intel or is it normal to launch a little later?

Disclosure: I work at AWS building cloud compute infrastructure. Historically, we have both launched .metal instances at the same time as virtualized instances and also launched them a little later. My goal is to launch them at the same time, but sometimes there are final details to work through before we can make them available.

Is there a timeline for _soon_ on the metal instances?

This benchmark is kind of useless as it is comparing c5a to m5 instead of c5. The c5 and m5 are both the same kind of Xeon, but the sustainable and turbo clock speed of the m5 is notably slower than the c5. If you were considering a Xeon for your use case, you would want to see the c5a compared with the c5.

That's a good point. They used to be the same across accounts, but then they had consistent capacity problems in us-east-1a, so they introduced the shuffle, which is determined at account creation. You can actually look in the "Resource Access Manager" to determine which allocation you've got; it maps the names you see in the rest of the console to e.g. use1-az1, use1-az2, use1-az3, so you can use that information to "colocate" things if you really need to.

Something to be aware of, sub regions (a/b/c etc) do not map between accounts. Your a might be my c, etc.

Even though the announcement says it’s available in US-East, it seems like you can only launch in us-east-1a and us-east-1b. I wasn’t able to launch c5a instances in other availability zones.

Disclosure: I work for AWS building cloud infrastructure. My experience working at Amazon has changed my thinking about how to deliver great customer experiences. When we make a new EC2 instance available, customers expect to be able to use it at a scale that is indistinguishable from limitless (a challenge in a world with physical constraints imposed by the laws of physics), with instantaneous availability and high quality.

Yeah, we'll have to see how quickly the various providers are able to get EPYC Milan GA'ed once it's available.

Mostly Intel hardware though if I'm not mistaken?

There's a catch though. We primarily use GCP. About a month and a half ago we were planning to run a large compute job, and since GCP already had these great performance-to-cost AMD EPYC 2nd-gen Rome VMs available, we decided to use them, only to find out later that there was a service quota of 24 vCPUs for these instances. The worst part was that the service quota page didn't make clear there was no point in requesting anything beyond 24: whatever number we put (512, 256, 128, 32, etc.), we would get automatically rejected.

Amazon defines one ARM vCPU as one core and one x86 vCPU as one SMT thread of one core, so the Graviton 2 instances have twice as many cores as the x86 ones they're compared against, and as such it would be kind of a failure if they weren't faster in some of the most heavily threaded workloads.

Interesting that the ARM processor came out fastest in at least three of the benchmarks. I knew ARM was good, but beating Xeons and EPYC is fantastic.

AWS is behind Azure, GCP, and even Oracle Cloud in this release; Azure had EPYC Rome available 6 months ago. Surprising, since AWS had been among the fastest to roll out new hardware in the past few years.

Our Account Manager from AWS suggested we use c5a for cPanel.

Similar Instances to c5a.xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c5a.xlarge instance information page, please let us know.
