AWS c6a.large (EC2 Instance)

AMD-based compute-optimized instance with 2 vCPUs and 4 GiB memory. Powered by 3rd Gen AMD EPYC processors with an all-core turbo frequency of 3.6 GHz.
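The headline specifications above can be confirmed programmatically. Below is a minimal sketch, assuming AWS credentials are already configured and boto3 is installed, that pulls the vCPU, memory, and clock figures for c6a.large from the EC2 DescribeInstanceTypes API; the region shown is only an example.

```python
import boto3

# Assumes AWS credentials are configured; the region is only illustrative.
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(InstanceTypes=["c6a.large"])
info = resp["InstanceTypes"][0]

print("vCPUs:        ", info["VCpuInfo"]["DefaultVCpus"])
print("Memory (MiB): ", info["MemoryInfo"]["SizeInMiB"])
print("Architectures:", info["ProcessorInfo"]["SupportedArchitectures"])
print("Clock (GHz):  ", info["ProcessorInfo"].get("SustainedClockSpeedInGhz"))
```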

Pricing of c6a.large

Detailed pricing for this instance is coming soon; figures are not yet available.

Pricing Model    Price (USD)    % Discount vs On Demand
On Demand        N/A            N/A
Spot             N/A            N/A
1 Yr Reserved    N/A            N/A
3 Yr Reserved    N/A            N/A
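Once published, on-demand and reserved list prices can also be retrieved from the AWS Price List API. Here is a hedged sketch using boto3's pricing client; the filter values for operating system, tenancy, and region are illustrative assumptions, not part of this page's data.

```python
import json
import boto3

# The Price List API is served from a limited set of regions; us-east-1 works.
pricing = boto3.client("pricing", region_name="us-east-1")

resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "c6a.large"},
        # The values below are assumptions for a plain shared-tenancy Linux box.
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        {"Type": "TERM_MATCH", "Field": "regionCode", "Value": "us-east-1"},
    ],
)

# Each PriceList entry is a JSON document; walk the OnDemand terms for the hourly rate.
for item in resp["PriceList"]:
    product = json.loads(item)
    for term in product["terms"].get("OnDemand", {}).values():
        for dim in term["priceDimensions"].values():
            print(dim["description"], dim["pricePerUnit"]["USD"])
```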
Spot Pricing Details for c6a.large

Here are the latest spot prices for this instance across this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: N/A

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is expressed in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
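Current spot prices per Availability Zone can also be queried directly from EC2. A minimal sketch using boto3's describe_spot_price_history follows; the region and product description are assumptions to adjust for your environment.

```python
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c6a.large"],
    ProductDescriptions=["Linux/UNIX"],  # OS filter is an assumption
    StartTime=datetime.datetime.now(datetime.timezone.utc),  # most recent price per AZ
)

for entry in resp["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"])
```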
Compute features of c6a.large
Feature    Specification

Storage features of c6a.large
Feature    Specification

Networking features of c6a.large
Feature    Specification

Operating Systems Supported by c6a.large
Operating System    Supported

Security features of c6a.large
Feature    Supported

General Information about c6a.large
Feature    Specification
Benchmark Test Results for c6a.large
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024-byte block, 3 threads)
AES-128 CBC             285.6 MB/s
AES-256 CBC             252.4 MB/s
MD5                     950.1 MB/s
SHA256                  2.7 GB/s
SHA512                  812.0 MB/s
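These figures read like per-second throughput from an OpenSSL-style speed benchmark. The sketch below shows how comparable numbers could be gathered on the instance with the openssl binary and three worker processes, matching the 3-thread setup quoted above; it is an assumption about the methodology, not Cloud Mercato's exact invocation.

```python
import subprocess

# "openssl speed" reports bytes processed per second at several block sizes,
# including the 1024-byte block quoted in the table above. "-multi 3" runs
# three worker processes, mirroring the 3-thread setup.
for algo in ["aes-128-cbc", "aes-256-cbc", "md5", "sha256", "sha512"]:
    result = subprocess.run(
        ["openssl", "speed", "-multi", "3", "-evp", algo],
        capture_output=True,
        text=True,
        check=True,
    )
    # The final line holds the combined throughput across block sizes.
    print(algo, "->", result.stdout.strip().splitlines()[-1])
```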
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

             Read    Write
Max          N/A     N/A
Average      N/A     N/A
Deviation    N/A     N/A
Min          N/A     N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
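For reference, the sketch below builds an fio run that matches those stated parameters (4K block size, random access, direct I/O against a raw block device with no filesystem). The device path, queue depth, and runtime are assumptions rather than Cloud Mercato's exact command.

```python
import json
import subprocess

# Random-read example; a write test would use --rw=randwrite instead.
# /dev/nvme1n1 is a placeholder for a scratch EBS volume with no filesystem.
cmd = [
    "fio",
    "--name=randread-4k",
    "--filename=/dev/nvme1n1",   # raw block device, no filesystem
    "--rw=randread",             # random access
    "--bs=4k",                   # 4K block size
    "--direct=1",                # bypass page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",              # queue depth is an assumption
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)
print("read IOPS:", round(report["jobs"][0]["read"]["iops"]))
```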

Community Insights for c6a.large
AI-summarized insights

Amazon C6a instances are powered by 3rd generation AMD EPYC processors and are designed for compute-intensive workloads.

28-10-2024
benchmarking

Ah, I'm having the same problem! Which C series did you pick?

Overall Score: 61

Over the past few days, I’ve been analyzing the AWS EC2 nodes currently running across multiple Kubernetes clusters in different regions, with various instance types. Here are the memory capacity differences I’ve found:

I've been using c6i instances in production for a couple months now, in the us-east-1 region. I replaced c5 instances. I'm currently looking at PHP performance on the c6a and we might switch to those if they perform comparably to the c6i. The c5a instances are not great for applications that are sensitive to memory latency, but do work well with more throughput oriented things like Kafka. Edit: looks like c6i is going to be the better option over c6a.

The grades below are those of the most recent trial run for this plan.

This plan was tested 1 time at vpsbenchmarks.com.

I cannot find AWS documentation or any information about AWS EC2 instance estimated power consumption (idle and at 100% load) for the following instances: c6a.large


The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

Similar Instances to c6a.large

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c6a.large instance information page, please let us know.
