c5a.8xlarge
AWS EC2 Instance

AMD-based compute-optimized instance with 32 vCPUs and 64 GiB of memory. Excellent for high-performance computing workloads.

Pricing of c5a.8xlarge

Pricing Model     Price (USD)     % Discount vs On Demand
On Demand         N/A             N/A
Spot              N/A             N/A
1 Yr Reserved     N/A             N/A
3 Yr Reserved     N/A             N/A
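
For readers who want to pull live numbers rather than rely on this table, here is a minimal, hypothetical sketch using boto3 and the AWS Price List API; the region, OS, and filter values are assumptions:

```python
# Hypothetical sketch: querying the AWS Price List API for c5a.8xlarge
# On Demand pricing. Filter values (OS, tenancy, location) are examples.
import json
import boto3

# The Price List API is only served from a few regions, e.g. us-east-1.
pricing = boto3.client("pricing", region_name="us-east-1")

resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "c5a.8xlarge"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
    ],
    MaxResults=1,
)

for price_item in resp["PriceList"]:
    product = json.loads(price_item)  # each entry is a JSON string
    # Walk the nested OnDemand terms to pull the hourly USD rate.
    for term in product["terms"]["OnDemand"].values():
        for dim in term["priceDimensions"].values():
            print(dim["description"], dim["pricePerUnit"]["USD"])
```
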
Spot Pricing Details for c5a.8xlarge

Here are the latest spot prices for this instance across this region:

Availability Zone     Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month, reported in the ranges <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
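
If the table above shows no data, the same information can be fetched directly from the EC2 API. A minimal boto3 sketch, assuming a Linux/UNIX product and the us-east-1 region:

```python
# Sketch: pulling the most recent Spot price for c5a.8xlarge per
# Availability Zone. Region and product description are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c5a.8xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)

# Keep only the most recent price per AZ.
latest = {}
for item in resp["SpotPriceHistory"]:
    az = item["AvailabilityZone"]
    if az not in latest or item["Timestamp"] > latest[az]["Timestamp"]:
        latest[az] = item

for az, item in sorted(latest.items()):
    print(f"{az}\t{item['SpotPrice']} USD")
```
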
Compute features of c5a.8xlarge

Feature     Specification

Storage features of c5a.8xlarge

Feature     Specification

Networking features of c5a.8xlarge

Feature     Specification

Operating Systems Supported by c5a.8xlarge

Operating System     Supported

Security features of c5a.8xlarge

Feature     Supported

General Information about c5a.8xlarge

Feature     Specification
Benchmark Test Results for c5a.8xlarge

CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm     Speed (1024-byte blocks, 3 threads)
AES-128 CBC              N/A
AES-256 CBC              N/A
MD5                      N/A
SHA256                   N/A
SHA512                   N/A
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

            Read    Write
Max         N/A     N/A
Average     N/A     N/A
Deviation   N/A     N/A
Min         N/A     N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, and no filesystem (except for write access with the root volume), with cache and buffers avoided.
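
As a sketch of what such a run might look like, the snippet below shells out to FIO with the parameters described (4K blocks, random access, direct I/O so cache and buffers are bypassed). The device path, queue depth, and runtime are assumptions:

```python
# Minimal sketch of an FIO run resembling the parameters described above
# (4K blocks, random access, direct I/O to a raw block device, i.e. no
# filesystem). Running write tests against a raw device destroys data.
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical attached 100 GB gp2 volume

cmd = [
    "fio",
    "--name=randread",
    f"--filename={DEVICE}",
    "--rw=randread",        # random access
    "--bs=4k",              # 4K block size
    "--direct=1",           # bypass page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)
```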

Community Insights for c5a.8xlarge

Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

Ah, I'm having the same problem! Which C series did you pick?

Do metal instances launch at the same time with Intel or is it normal to launch a little later?

Disclosure: I work at AWS building cloud compute infrastructure. Historically we have both launched .metal instances at the same time as virtualized instances, and also launched them a little later. My goal is to launch them at the same time, but sometimes there are some final details to work through before we can make them available.

Is there a timeline for _soon_ on the metal instances?

This benchmark is kind of useless as it is comparing c5a to m5 instead of c5. The c5 and m5 are both the same kind of Xeon, but the sustainable and turbo clock speed of the m5 is notably slower than the c5. If you were considering a Xeon for your use case, you would want to see the c5a compared with the c5.

Something to be aware of: sub-regions (a/b/c, etc.) do not map between accounts. Your a might be my c, etc.

That's a good point. They used to be the same across accounts, but then they had consistent capacity problems in us-east-1a, so they introduced the shuffle, which is determined at account creation. You can actually look in the "Resource Access Manager" to determine which allocation you've got; it maps the names you see in the rest of the console to e.g. use1-az1, use1-az2, use1-az3, so you can use that information to "colocate" things if you really need to.
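
The same name-to-ID mapping is also exposed directly by the EC2 DescribeAvailabilityZones API; a minimal boto3 sketch (the region is an assumption):

```python
# Sketch: printing the account-specific AZ-name -> zone-ID mapping the
# commenters describe (e.g. us-east-1a -> use1-az4 in one account).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(f"{az['ZoneName']} -> {az['ZoneId']}")
```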

Even though the announcement says it’s available in US-East, it seems like you can only launch in us-east-1a and us-east-1b. I wasn’t able to launch c5a instances in other availability zones.

Disclosure: I work for AWS building cloud infrastructure. My experience working at Amazon has changed my thinking about how to deliver great customer experiences. When we make a new EC2 instance available, customers expect to be able to use it at a scale that is indistinguishable from limitless (a challenge in a world with physical constraints imposed by the laws of physics), with instantaneous availability and high quality.

Yeah, we'll have to see how quickly the various providers are able to get EPYC Milan GA'ed once it's available.

Mostly Intel hardware though if I'm not mistaken?

There's a catch though. We primarily use GCP. About a month and a half ago we were planning to run a large compute job, and since GCP already had these great performance-to-cost AMD EPYC 2nd gen Rome VMs available, we decided to use them, only to find out later that there was a service quota of 24 vCPUs for these instances. The worst part was that the service quota page gave no indication that there was no point in requesting anything beyond 24: whatever number we put (512, 256, 128, 32, etc.), in all cases we would get automatically rejected.

Amazon defines one ARM vCPU as one core and one x86 vCPU as one SMT thread of a core, so the Graviton 2 instances have twice as many cores as the x86 ones they're compared against; as such, it would be a kind of failure if they weren't faster in some of the most heavily threaded workloads.
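
This core/thread layout can be confirmed per instance type via the EC2 API. A small boto3 sketch; the Graviton2 comparison type shown is just an example:

```python
# Sketch: confirming the vCPU/core/thread layout for c5a.8xlarge and a
# Graviton2 counterpart. Region and comparison type are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["c5a.8xlarge", "c6g.8xlarge"])
for it in resp["InstanceTypes"]:
    v = it["VCpuInfo"]
    print(it["InstanceType"], v["DefaultVCpus"], "vCPUs =",
          v["DefaultCores"], "cores x", v["DefaultThreadsPerCore"], "threads")
```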

Interesting that the ARM processor came out fastest in at least three of the benchmarks. I knew ARM was good, but to beat Xeons and EPYC is fantastic.

AWS is behind Azure, GCP, and even Oracle Cloud in this release; Azure had EPYC Rome available 6 months ago. Surprising, since AWS has been among the fastest to roll out new hardware in the past few years.

I encountered an Insufficient Capacity Error whilst launching 3x c5a.8xlarge in the Sydney region (AZ apse2-az2). After some retrying I managed to launch c5a.12xlarge in the same AZ. A month or two ago I encountered the same issue with c5a.8xlarge in the Mumbai region.
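
A common way to cope with this is to retry with backoff and fall back to another size, roughly as this commenter did. A hedged boto3 sketch; the AMI ID, AZ, and instance sizes are placeholders:

```python
# Sketch: retrying around InsufficientInstanceCapacity and falling back
# to a larger size. AMI ID, AZ, and sizes are placeholders.
import time
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

def launch(instance_type: str, count: int = 3):
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
        Placement={"AvailabilityZone": "ap-southeast-2b"},
    )

for itype in ("c5a.8xlarge", "c5a.12xlarge"):
    for attempt in range(3):
        try:
            launch(itype)
            print("launched", itype)
            break
        except ClientError as e:
            if e.response["Error"]["Code"] != "InsufficientInstanceCapacity":
                raise
            time.sleep(2 ** attempt)  # simple backoff before retrying
    else:
        continue  # all retries failed; try the next size
    break
```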

The full EC2 instance memory is not available to me: it shows that 2 GB is in use and that only 400 MB is available.

The Xeon M5 instances still had a minor advantage with the PostgreSQL database server but the Rome instances were quite close behind and much faster than the previous-generation EPYC instances even with the same core/thread counts.

With the simple 7-Zip compression test, the Amazon Graviton2 instances led the race but EPYC C5A was faster than the Xeon M5 and certainly much faster than the previous-generation M5A instances.

With the Apache Cassandra performance, Amazon Graviton2 led -- keeping in mind that with the M6G instances each vCPU is backed by a physical core, compared to the Intel/AMD instances being a combination of physical and HT/SMT logical cores. In any case, the C5A instance at 16xlarge was faster than Xeon and dramatically faster than the former M5A EPYC instance, while at 8xlarge the Xeon M5 came out slightly ahead of the similarly equipped C5A.

I am now using the c5a series (usually c5a.8xlarge) instead of c5 to save on cost. However, if another instance series is more reliable (i.e. not getting ICE'd), I would consider changing to a different series.

The EPYC C5A instances won in the respective comparisons for the sizes tested, though in the case of Graviton2's lower performance on the kernel compile test, keep in mind there are differing modules/options enabled when building the Linux kernel for x86_64 versus ARMv8.

Our Account Manager from AWS suggested we use c5a for cPanel.

Although there was just one instance running at the point in time I was starting up this instance, AWS saw multiple instances running for some reason. This meant vCPUs were being counted across all the running instances, and hence my total allocated vCPU limit was reached, according to AWS support. This has since been resolved.

When I choose any of c5a.8xlarge, c5a.12xlarge or c5a.16xlarge, the allocated memory matches the published tabulation, but the vCPU count remains at 16 in all three cases.
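
One thing worth checking in a situation like this is whether the instance was launched with custom CPU options, which can pin the core count regardless of instance size. A speculative boto3 sketch; the instance ID is a placeholder:

```python
# Sketch: inspecting a running instance's CPU options. A launch template
# can pin CoreCount/ThreadsPerCore below the type's default, which would
# explain a vCPU count stuck at 16. Instance ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
inst = resp["Reservations"][0]["Instances"][0]
print(inst["InstanceType"], inst.get("CpuOptions"))
```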
