c6g.2xlarge (AWS EC2 Instance)

ARM-based compute-optimized instance with 8 vCPUs and 16 GiB memory. Suitable for batch processing and application servers.
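As a quick orientation, the snippet below shows one way to launch a c6g.2xlarge with boto3. It is a minimal sketch: the region and AMI ID are placeholders, and the AMI must be built for the arm64 (Graviton2) architecture.

```python
import boto3

# Region and AMI are assumptions -- substitute your own values.
# The AMI must be an arm64 (Graviton2) image; x86_64 AMIs will not boot on c6g.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder arm64 AMI ID
    InstanceType="c6g.2xlarge",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```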

Pricing of c6g.2xlarge

Pricing details for this instance are coming soon.

Pricing Model    Price (USD)    % Discount vs On Demand
On Demand        N/A            N/A
Spot             N/A            N/A
1 Yr Reserved    N/A            N/A
3 Yr Reserved    N/A            N/A
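Until the table above is populated, current on-demand pricing can be looked up with the AWS Price List API. The sketch below is an illustration rather than this page's data source; the region, operating system, and tenancy filters are assumptions and can be adjusted.

```python
import json
import boto3

# The Price List API is served from only a few regions; us-east-1 is one of them.
pricing = boto3.client("pricing", region_name="us-east-1")

response = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "c6g.2xlarge"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
)

# Each PriceList entry is a JSON string describing one product and its terms.
for item in response["PriceList"]:
    product = json.loads(item)
    for term in product["terms"]["OnDemand"].values():
        for dimension in term["priceDimensions"].values():
            print(dimension["description"], dimension["pricePerUnit"]["USD"])
```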
Spot Pricing Details for c6g.2xlarge

Here are the latest spot prices for this instance across availability zones in this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. Values are reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
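The per-AZ spot prices referenced above can also be queried directly from the EC2 API. A minimal sketch with boto3, assuming a Linux/UNIX product description and the us-east-1 region:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Region and product description are assumptions -- adjust to match your workload.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_spot_price_history(
    InstanceTypes=["c6g.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)

# One entry per availability zone and timestamp, newest first.
for entry in response["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"], entry["Timestamp"])
```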
Compute features of c6g.2xlarge

Feature    Specification

Storage features of c6g.2xlarge

Feature    Specification

Networking features of c6g.2xlarge

Feature    Specification

Operating Systems Supported by c6g.2xlarge

Operating System    Supported

Security features of c6g.2xlarge

Feature    Supported

General Information about c6g.2xlarge

Feature    Specification
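The specification tables above are not populated in this snapshot of the page. The same compute, memory, networking, and architecture details can be pulled straight from the EC2 API; a minimal sketch with boto3 follows (the region is an assumption).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.describe_instance_types(InstanceTypes=["c6g.2xlarge"])
info = response["InstanceTypes"][0]

print("vCPUs:         ", info["VCpuInfo"]["DefaultVCpus"])
print("Memory (MiB):  ", info["MemoryInfo"]["SizeInMiB"])
print("Architectures: ", info["ProcessorInfo"]["SupportedArchitectures"])
print("Network:       ", info["NetworkInfo"]["NetworkPerformance"])
print("EBS optimized: ", info["EbsInfo"]["EbsOptimizedSupport"])
```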
Benchmark Test Results for c6g.2xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024-byte block size, 3 threads)
AES-128 CBC             614.1 MB
AES-256 CBC             456.2 MB
MD5                     1.1 GB
SHA256                  4.2 GB
SHA512                  1.1 GB
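Figures like these measure how many bytes the CPU can push through each algorithm per unit time. The sketch below reproduces the idea for SHA-256 in pure Python; it is single-threaded and uses the standard-library hashlib, so absolute numbers will not match the 3-thread results above.

```python
import hashlib
import time

BLOCK = b"\x00" * 1024   # 1024-byte blocks, matching the benchmark's block size
DURATION = 3.0           # seconds to run the measurement

digest = hashlib.sha256()
processed = 0
start = time.perf_counter()
while time.perf_counter() - start < DURATION:
    digest.update(BLOCK)
    processed += len(BLOCK)

elapsed = time.perf_counter() - start
print(f"SHA-256 throughput: {processed / elapsed / 1e6:.1f} MB/s")
```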
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

            Read    Write
Max         N/A     N/A
Average     N/A     N/A
Deviation   N/A     N/A
Min         N/A     N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
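A comparable run can be reproduced with FIO itself. The sketch below drives it from Python with the parameters described above (4K blocks, random access, direct I/O against a raw block device); the device path, queue depth, and runtime are illustrative choices, so point it at the attached test volume rather than a disk whose data you care about.

```python
import json
import subprocess

# Device path is an assumption -- use the attached test volume, not the root disk.
DEVICE = "/dev/nvme1n1"

command = [
    "fio",
    "--name=randread-4k",
    f"--filename={DEVICE}",
    "--rw=randread",          # random access, read-only in this example
    "--bs=4k",                # 4K block size
    "--direct=1",             # bypass the page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]

result = subprocess.run(command, capture_output=True, text=True, check=True)
stats = json.loads(result.stdout)
print("Read IOPS:", stats["jobs"][0]["read"]["iops"])
```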

Community Insights for c6g.2xlarge
AI-summarized insights

I am running a network-intensive task that sends several thousand pings/traceroute packets per second to external IP addresses. I've noticed that the network throughput is high soon after creating the instance, but drops off exponentially after the ping process has been running for more than an hour or so. I've tried this on a variety of instance sizes. Even on a c6.2xlarge (whose stated baseline network bandwidth is 2.5 Gbps), it drops down to around 200 packets a minute. Any ideas why this might be happening? Is there any way I can reserve more bandwidth for my instance?

The other difference is memory: your t2.micro instance has 1 GB of memory, whereas the c6g.medium has 2 GB allocated, which also increases the price. Then there is the CPU architecture, which is ARM; it won't run x86-compiled applications natively, and some applications will need to be recompiled specifically to run successfully.

Overall we are happy with the performance. Compared to our old stack we are at around 10% of the cost, and I think our savings were more than 2x compared to x86 after locking in some rates. R6gd.metal (16x) vs R5d.metal (24x).

For those of you encoding on c5 instances today, Graviton2 based C6g instances provide a very compelling 36% price/performance benefit.

If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

Ah, I'm having the same problem! Which C series did you pick?

Similar Instances to c6g.2xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c6g.2xlarge instance information page, please let us know.
