Cloud Mercato tested CPU performance using a range of encryption speed tests.
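As a hedged illustration of what that kind of encryption speed run looks like (the cipher list and worker count below are assumptions for the sketch, not Cloud Mercato's published parameters), a minimal Python wrapper around OpenSSL's built-in benchmark might be:

```python
import subprocess

# Illustrative cipher list -- an assumption, not Cloud Mercato's exact set.
CIPHERS = ["aes-128-gcm", "aes-256-gcm"]

for cipher in CIPHERS:
    # `openssl speed -evp <cipher>` benchmarks the cipher through the EVP
    # API; `-multi 4` runs four parallel workers to load multiple vCPUs.
    result = subprocess.run(
        ["openssl", "speed", "-evp", cipher, "-multi", "4"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)
```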
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To measure IOPS, the following parameters are used: 4K blocks, random access, no filesystem (except for write access on the root volume), and avoidance of cache and buffers.
[Chart: I/O benchmark results]
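For reference, a minimal sketch of an FIO run matching those stated parameters (the device path, queue depth, and runtime here are illustrative assumptions; Cloud Mercato's exact job file isn't published):

```python
import subprocess

# 4K random-read IOPS run mirroring the parameters described above: 4K
# blocks, random access, no filesystem, direct I/O to avoid cache and
# buffers. Point DEVICE at a disposable volume, never one holding data.
DEVICE = "/dev/nvme1n1"  # hypothetical attached block volume

subprocess.run(
    [
        "fio",
        "--name=randread-4k",
        f"--filename={DEVICE}",
        "--rw=randread",       # random access
        "--bs=4k",             # 4K block size
        "--direct=1",          # O_DIRECT: bypass page cache and buffers
        "--ioengine=libaio",
        "--iodepth=32",        # assumed queue depth
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ],
    check=True,
)
```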


Whether C6g instances make sense largely comes down to the workload, though -- primarily how well optimized your in-use software is for AArch64 (or, in the case of proprietary software, whether AArch64 is even an option).
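As a trivial first check when evaluating such a migration, you can confirm at runtime which architecture a given instance actually exposes; a minimal Python snippet (the instance names in the messages are just for context):

```python
import platform

# On Graviton2 (C6g) Linux guests platform.machine() reports "aarch64";
# on the x86_64 C6a/C6i instances it reports "x86_64".
arch = platform.machine()
if arch == "aarch64":
    print("Arm (AArch64) instance -- e.g. Graviton2-based C6g")
elif arch == "x86_64":
    print("x86_64 instance -- e.g. C6a (AMD) or C6i (Intel)")
else:
    print(f"Unexpected architecture: {arch}")
```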

Depending upon your needs and deployment approach, keep in mind that the C6g instances only scale up to c6g.16xlarge at 64 vCPUs, since Graviton2 tops out at 64 cores per server, while the C6a Zen 3 instances go beyond that with 96, 128, and 192 vCPU options.

The C6g instances do carry the advantage of each vCPU being backed by a physical Neoverse N1 core, whereas on the AMD/Intel EC2 instances the vCPUs are a mix of physical cores and their sibling SMT (HT) threads.

The Graviton2 C6g instances can perform incredibly well if the workload is properly tuned for AArch64, while in other cases they deliver performance comparable to (or sometimes worse than) the C6a.

The C6g.8xlarge instance offers 32 vCPUs and 64GB of RAM, like the C6a.8xlarge. The C6g.8xlarge does enjoy significantly lower cloud costs, though, thanks to Graviton2 being Amazon's in-house processor.

I've run some benchmarks of the new EC2 C6a instances looking at how they improve over the prior 2nd Gen EPYC (C5a) based instances, how they fare against the Intel Ice Lake competition over in the M6i stack, and also how the C6a competes with Amazon's own Graviton2-based C6g type.

I've been using c6i instances in production for a couple months now, in the us-east-1 region. I replaced c5 instances. I'm currently looking at PHP performance on the c6a and we might switch to those if they perform comparably to the c6i. The c5a instances are not great for applications that are sensitive to memory latency, but do work well with more throughput oriented things like Kafka. Edit: looks like c6i is going to be the better option over c6a.
