AWS c5a.4xlarge (EC2 Instance)

AMD-based compute-optimized instance with 16 vCPUs and 32 GiB memory. High performance for compute-intensive applications.

Pricing of c5a.4xlarge

Pricing data for this instance is coming soon; current values are shown as N/A.

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A
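
Until the page is populated, the missing numbers can be fetched from the AWS Price List API; a minimal boto3 sketch, where the filter values are illustrative assumptions:

```python
# Hypothetical sketch: query On Demand pricing for c5a.4xlarge from the
# AWS Price List API. The 'pricing' endpoint only exists in a few regions
# (us-east-1 used here); the location and attribute filters are assumptions.
import boto3, json

pricing = boto3.client("pricing", region_name="us-east-1")
resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "c5a.4xlarge"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
)
for item in resp["PriceList"]:
    product = json.loads(item)  # each entry is a JSON string
    for term in product["terms"]["OnDemand"].values():
        for dim in term["priceDimensions"].values():
            print(dim["description"], dim["pricePerUnit"]["USD"])
```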
Spot Pricing Details for c5a.4xlarge

Here are the latest prices for this instance across this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
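
Since the table itself did not load here, a quick way to pull the same data is EC2's DescribeSpotPriceHistory API; a minimal boto3 sketch, with region and product description as assumptions:

```python
# Sketch: fetch recent Spot prices for c5a.4xlarge per availability zone.
# Region and product description are assumptions; adjust to your account.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_spot_price_history(
    InstanceTypes=["c5a.4xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)
for price in resp["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["SpotPrice"])
```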
Compute features of c5a.4xlarge

Feature    Specification

Storage features of c5a.4xlarge

Feature    Specification

Networking features of c5a.4xlarge

Feature    Specification

Operating Systems Supported by c5a.4xlarge

Operating System    Supported

Security features of c5a.4xlarge

Feature    Supported

General Information about c5a.4xlarge

Feature    Specification
Benchmark Test Results for c5a.4xlarge

CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024-byte blocks, 3 threads)
AES-128 CBC             668.2 MB/s
AES-256 CBC             503.0 MB/s
MD5                     1.8 GB/s
SHA256                  4.3 GB/s
SHA512                  1.8 GB/s
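
These figures read like the per-second throughput columns reported by the openssl speed utility; a sketch of how one might reproduce a comparable measurement (the exact tool and flags Cloud Mercato used are an assumption):

```python
# Hypothetical reproduction sketch using `openssl speed`.
# Assumes OpenSSL is installed; -multi 3 runs 3 parallel workers, and the
# 1024-byte block column of the output corresponds to the table above.
import subprocess

for algo in ["aes-128-cbc", "aes-256-cbc", "md5", "sha256", "sha512"]:
    subprocess.run(["openssl", "speed", "-multi", "3", "-evp", algo], check=True)
```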
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

             Read (IOPS)    Write (IOPS)
Max          3102           3102
Average      3099           3099
Deviation    0.64           1.02
Min          3098           3095

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K blocks, random access, no filesystem (except for write access to the root volume), and avoidance of cache and buffers.
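A minimal invocation matching those parameters might look like the sketch below (device path, queue depth, and runtime are assumptions, not Cloud Mercato's exact job file):

```python
# Hypothetical sketch of a fio run matching the stated parameters:
# 4K blocks, random access, direct I/O (bypasses cache and buffers),
# raw block device (no filesystem). /dev/nvme1n1 is an illustrative
# placeholder for the attached 100GB gp2 volume; a randwrite run is
# analogous but WILL DESTROY data on the target device.
import subprocess

subprocess.run([
    "fio",
    "--name=randread-4k",
    "--filename=/dev/nvme1n1",
    "--rw=randread",
    "--bs=4k",
    "--direct=1",
    "--ioengine=libaio",
    "--iodepth=32",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
], check=True)
```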

Community Insights for c5a.4xlarge

AI-summarized insights
Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.

2017-07-11 | benchmarking

In AWS, at first we were using c5a.2xlarge for both of the servers, but we had to switch to c5a.4xlarge for the bigger server due to frequent overload issues.

Do metal instances launch at the same time, as with Intel, or is it normal for them to launch a little later?

Disclosure: I work at AWS building cloud compute infrastructure. Historically, we have both launched .metal instances at the same time as virtualized instances and launched them a little later. My goal is to launch them at the same time, but sometimes there are some final details to work through before we can make them available.

Is there a timeline for _soon_ on the metal instances?

This benchmark is kind of useless as it is comparing c5a to m5 instead of c5. The c5 and m5 are both the same kind of Xeon, but the sustainable and turbo clock speed of the m5 is notably slower than the c5. If you were considering a Xeon for your use case, you would want to see the c5a compared with the c5.

Something to be aware of: sub-regions (a/b/c etc.) do not map between accounts. Your a might be my c, etc.

That's a good point. They used to be the same across accounts, but then they had consistent capacity problems in us-east-1a, so they introduced the shuffle, which is determined at account creation. You can actually look in the "Resource Access Manager" to determine which allocation you've got; it maps the names you see in the rest of the console to e.g. use1-az1, use1-az2, use1-az3, so you can use that information to "colocate" things if you really need to.
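
For reference, the zone-name-to-zone-ID mapping described above is also exposed programmatically through EC2's DescribeAvailabilityZones API (not only the Resource Access Manager console); a minimal boto3 sketch, with the region as an assumption:

```python
# Sketch: map account-specific AZ names (us-east-1a, ...) to the
# account-independent zone IDs (use1-az1, ...) they actually refer to.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```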

Even though the announcement says it’s available in US-East, it seems like you can only launch in us-east-1a and us-east-1b. I wasn’t able to launch c5a instances in other availability zones.

Disclosure: I work for AWS building cloud infrastructure. My experience working at Amazon has changed my thinking about how to deliver great customer experiences. When we make a new EC2 instance available, customers expect to be able to use it at a scale that is indistinguishable from limitless (which is a challenge in a world that has physical constraints in accordance with the laws of physics), with instantaneous availability and high quality.

Yeah, we'll have to see how quickly the various providers are able to get EPYC Milan GA'ed once it's available.

Mostly Intel hardware though if I'm not mistaken?

There's a catch though. We primarily use GCP. About a month and a half ago we were planning to run a large compute job, and since GCP already had these great performance-to-cost AMD EPYC 2nd gen Rome VMs available, we decided to use them, only to find out later that there was a service quota of 24 vCPUs for these instances. The worst part was that the service quota page didn't even make it clear there was no point in requesting anything beyond 24: whatever number we put in (512, 256, 128, 32, etc.), in all cases we would get automatically rejected.

Amazon defines one ARM vCPU as one core and one x86 vCPU as one SMT thread of one core, so the Graviton 2 instances have twice as many cores as the x86 ones they're compared against, and as such it would be kind of a failure if they weren't faster in some of the most heavily threaded workloads.

Interesting that the ARM processor came out fastest in at least three of the benchmarks. I knew ARM was good, but to beat Xeons and EPYC is fantastic.

AWS is behind Azure, GCP, and even Oracle Cloud in this release; Azure had EPYC Rome available 6 months ago. Surprising, since AWS had been among the fastest to roll out new hardware in the past few years.

I'm sure it's all active because I looked in EC2 Global View and saw 34,000 instances running, and my AWS bill reached $200k.

My account was hacked: I accidentally uploaded my access key to GitHub. The problem is solved now, but my question is how the hacker managed to create 34,000 EC2 instances of type c5a.4xlarge (16 vCPUs) across 17 regions using ami-0ee23bfc74a881de5, while my account only has a limit of 512 in 3 regions (Virginia, Ohio, Oregon). A few days later my friend experienced the same thing: his account was also hacked and his EC2 was used by the hackers in exactly the same way. Does ami-0ee23bfc74a881de5 belong to the hacker? Both hacked accounts were used with that same AMI.

It's very likely that they were just displayed on the screen and not actually activated; if launches go beyond the limit, the EC2 dashboard may still show them.

But he was able to create 34,000 c5a.4xlarge instances in one AWS account, while the limit is only 512 vCPUs; logically that's 512 / 16 = 32 instances. Is it possible he has a special trick that exploits an existing loophole?
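
For anyone wanting to check their own ceiling, the vCPU quotas the commenters are discussing can be read via the Service Quotas API; a minimal boto3 sketch (the name filter is an assumption about which quotas matter):

```python
# Sketch: list EC2 On-Demand service quotas for the current account/region.
# These quotas are counted in vCPUs, which is why a 512-vCPU limit allows
# 512 / 16 = 32 c5a.4xlarge instances.
import boto3

sq = boto3.client("service-quotas", region_name="us-east-1")
paginator = sq.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="ec2"):
    for quota in page["Quotas"]:
        if "On-Demand" in quota["QuotaName"]:
            print(quota["QuotaName"], "=", quota["Value"])
```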

Although, later we found the culprit was mysql tmpdir.

Our Account Manager from AWS suggested we use c5a for cPanel.

Similar Instances to c5a.4xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c5a.4xlarge instance information page, please let us know.
