AWS

c3.8xlarge

EC2 Instance

Previous generation compute-optimized instance with 32 vCPUs and 60 GiB memory. Highest compute capacity in the C3 family for large-scale computation.


Pricing of c3.8xlarge

Pricing Model      Price (USD)    % Discount vs On Demand
On Demand          N/A            N/A
Spot               N/A            N/A
1 Yr Reserved      N/A            N/A
3 Yr Reserved      N/A            N/A
Spot Pricing Details for c3.8xlarge

Here are the latest spot prices for this instance across this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is expressed in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
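
The per-Availability-Zone spot prices shown here can also be pulled directly from the EC2 API. Below is a minimal boto3 sketch; the region and the Linux/UNIX product filter are assumptions rather than details from this page.

```python
# Minimal sketch: fetch the latest Spot price per Availability Zone for c3.8xlarge.
# The region and product description are assumptions.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.describe_spot_price_history(
    InstanceTypes=["c3.8xlarge"],
    ProductDescriptions=["Linux/UNIX"],     # assumed platform
    StartTime=datetime.now(timezone.utc),   # "now" returns the most recent price per AZ
    MaxResults=20,
)

for entry in resp["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"])
```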
Compute features of c3.8xlarge
Feature    Specification

Storage features of c3.8xlarge
Feature    Specification

Networking features of c3.8xlarge
Feature    Specification

Operating Systems Supported by c3.8xlarge
Operating System    Supported

Security features of c3.8xlarge
Feature    Supported

General Information about c3.8xlarge
Feature    Specification

Benchmark Test Results for c3.8xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024 Block Size, 3 threads)
AES-128 CBC             N/A
AES-256 CBC             N/A
MD5                     N/A
SHA256                  N/A
SHA512                  N/A
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

             Read    Write
Max          N/A     N/A
Average      N/A     N/A
Deviation    N/A     N/A
Min          N/A     N/A

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
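
As a rough illustration of those parameters, here is a hedged sketch of an FIO random-read run with 4K blocks, direct I/O (bypassing cache and buffers), and a raw block device. The device path, queue depth, and runtime are assumptions, not Cloud Mercato's published job file.

```python
# Sketch of an FIO random-read IOPS run matching the parameters described above:
# 4K blocks, random access, raw device (no filesystem), cache and buffer avoided.
# The device path, queue depth, and runtime are assumptions.
import subprocess

cmd = [
    "fio",
    "--name=randread-4k",
    "--filename=/dev/xvdf",   # raw block device, no filesystem (assumed path)
    "--rw=randread",          # random access
    "--bs=4k",                # 4K block size
    "--direct=1",             # bypass page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]
subprocess.run(cmd, check=True)
```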

Community Insights for c3.8xlarge
AI-summarized insights

While the chart above is a good start, there’s more than simply considering “Reserved vs. On Demand”. So let’s take a closer look at all the options…

19-03-2025
cost_savings

These huge servers might not be the best solution. An edge cluster can be many m1-small instances.

22-01-2014
cost_savings

Coming out of Phoronix today, to help you measure cloud performance, are benchmarks of all the new C3 instance types, compared against some bare-metal systems running locally.

I benchmarked all these new instances: c3.large, c3.xlarge, c3.2xlarge, c3.4xlarge, and c3.8xlarge.

When I test my own '10 Gigabit' instances (c3.8xlarge) with iperf, I don't see transfer rates exceeding 1.73 Gbps.

The spot prices for some of these instances can be rather important to note as well... I use them a fair bit when doing 3D rendering for non-time-critical tasks. For instance, the c3.8xlarge costs $0.53/hour at the current spot price, and I can 'bid' to use it and specify a max cost of $0.80 per hour.
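
That "bid with a max cost" workflow corresponds to a Spot request with a maximum price. A minimal boto3 sketch follows, assuming the $0.80/hour ceiling from the comment above; the AMI ID and region are placeholders.

```python
# Sketch of a one-time Spot request with a maximum price of $0.80/hour,
# as described in the comment above. The AMI ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

resp = ec2.request_spot_instances(
    SpotPrice="0.80",                  # maximum price we are willing to pay
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",     # placeholder AMI
        "InstanceType": "c3.8xlarge",
    },
)
print(resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```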

This limit is only seen when testing against an instance's public IP. When testing against an instance's private IP, which of course cannot be tested from outside Amazon's environment, you will not see the cap.

Because of the virtualization layer, the networking layer can't use DMA directly, and the CPU has to copy data back and forth, spending time in softirq.

AWS Support admits that 10 GbE speeds can only be achieved between instances on the private subnet network. This requires that the private IP is used as opposed to the public IP, which in my case always maxes out at 1.73 Gbps.
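
A sketch of the kind of comparison described in these comments, running iperf against the private and the public IP from another instance in the same region. The addresses are placeholders, and iperf3 is assumed here in place of the classic iperf the posters likely used; it assumes an iperf3 server is already running on the target instance.

```python
# Compare throughput against the private and public IP of the c3.8xlarge,
# assuming `iperf3 -s` is running on the target. IPs are placeholders.
import subprocess

PRIVATE_IP = "10.0.1.25"      # placeholder private IP (same VPC)
PUBLIC_IP = "203.0.113.10"    # placeholder public IP

for label, host in [("private", PRIVATE_IP), ("public", PUBLIC_IP)]:
    print(f"--- {label} IP ---")
    # -c client mode, -P 8 parallel streams, -t 30 second run
    subprocess.run(["iperf3", "-c", host, "-P", "8", "-t", "30"], check=True)
```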

Amazon has finally recognized that something caps the throughput at 1.73 Gbps.

I have now created an Amazon support ticket and will update this thread if Amazon finds a solution.

The RTMP Load Test Tool is a very good solution for estimating server capacity.

The maximum bandwidth for a c3.8xlarge instance seems to be 1.7 Gbps.

I will simulate 4494 simultaneous connections to 1166.6 Kbps streams, to see if a c3.8xlarge can handle a load of 5242880 Kbps == 5120 Mbps == 5 Gbps.

If one high performance instance like the c3.8xlarge can handle 4494 concurrent connections (see the calculations in #6), that is actually enough for us. But can it?
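
As a quick sanity check on the arithmetic referenced above, the stated figures do work out to roughly 5 Gbps, using the 1024-based conversions from the original post:

```python
# Check the load calculation quoted above (all figures from the discussion).
connections = 4494
stream_kbps = 1166.6

total_kbps = connections * stream_kbps   # 5242700.4 Kbps, i.e. ~5242880 Kbps
print(total_kbps)
print(total_kbps / 1024)                 # ~5120 Mbps
print(total_kbps / 1024 / 1024)          # ~5 Gbps
```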

Why is that? Is it because the Java limitation prevents saturation of 10 GbE?

Java has a limitation of 5 Gbps

Java has a limitation of 5 Gbps, so this constrains how many concurrent connections you can have, which will vary based on the bitrate of the streams being viewed.

What can the interface 10 Gigabit*4 handle?

Many smaller edges, if managed well, might be more cost-effective, and are certainly more flexible. It is best to have a uniform edge cluster, with all edges the same size and approximately the same throughput.


Wowza does not have a hard limit on the number of connections it can handle; this is based on the hardware and bandwidth available to the server.

Can you help me estimate how many connected users Amazon's most powerful instance can handle?

I have a recurring EMR job that I recently switched from using cc2.8xlarge servers to c3.8xlarge servers. On my first run with the new configuration, one of my map-reduce jobs that normally takes 2-3 minutes was stuck spending over 9 hours copying the data from the mappers to the sole reducer.

EC2 instances are priced according to instance type, regardless of the number of CPUs enabled. Disabling vCPUs does not change the cost of the instance type.
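
For context, the vCPU-disabling mentioned above refers to EC2 CPU options, which let you enable fewer cores or threads at launch while still paying the instance type's full price. A hedged boto3 sketch is shown below; the AMI ID and region are placeholders, and whether the C3 family itself supports CPU options is not confirmed by this page.

```python
# Sketch: launch with fewer vCPUs enabled via CPU options. The hourly price is
# still that of the instance type, regardless of how many vCPUs are enabled.
# AMI ID and region are placeholders; CPU-options support varies by instance family.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder AMI
    InstanceType="c3.8xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 8,                # enable only 8 cores
        "ThreadsPerCore": 2,           # with hyperthreading, i.e. 16 vCPUs
    },
)
```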

Similar Instances to c3.8xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this c3.8xlarge instance information page, please let us know.
