Cloud Mercato tested CPU performance using a range of encryption speed tests:
Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K blocks, random access, no filesystem (except for write access on the root volume), and avoidance of cache and buffer.
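As a concrete illustration, a fio invocation matching those parameters might look like the following; the device path and job name are assumptions, not Cloud Mercato's actual job file:

```
# Random-read IOPS: 4K blocks, direct I/O (bypasses cache/buffer),
# run against the raw block device (no filesystem).
fio --name=randread-iops --filename=/dev/nvme1n1 \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```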

Just did a quick test. It booted up in about 11s vs around 19s for Xen. I did notice that it took a while for the status check to go green though. There was a warning message saying that it couldn't connect to the instance. I was able to SSH just fine though.
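If you want to reproduce a boot-time comparison like this, systemd-based AMIs make it easy. A rough sketch; exact numbers will vary by AMI and instance type:

```
# Total boot time as seen by systemd (kernel + userspace):
systemd-analyze
# The slowest units, to see what drags the status check out:
systemd-analyze blame | head
```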

Ah, I'm having the same problem! Which C series did you pick?

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application — maybe scientific modelling, intensive machine learning, or multiplayer gaming — these instances are a good choice.

Even though compiling Linux on a C5D is a very disk-intensive operation, the CPU was ultimately the bottleneck of the task in our test case.
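One way to check where the bottleneck sits during such a build is to watch vmstat alongside it (a generic sketch; the source doesn't say which tool was used):

```
# Sample system-wide stats once per second during the compile:
vmstat 1
# High 'us'+'sy' with near-zero 'wa' means CPU-bound;
# a consistently large 'wa' (I/O wait) means disk-bound.
```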

The network measurements are essentially the same for the C5D and C5 instances, as expected, but significantly better than on the T2 instance.
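To reproduce a throughput comparison like this between two instances, iperf3 is a common choice (an assumption; the source doesn't name its network tool):

```
# On the first instance (server side):
iperf3 -s
# On the second (client side), a 30-second TCP throughput test
# against the server's private IP (placeholder address below):
iperf3 -c 10.0.0.12 -t 30
```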

The C5D vastly outperforms the C5 and T2 in file I/O thanks to its local NVMe disk, with nearly 4 times the read and write throughput of a C5 instance.

I find it unlikely that every VM on that host is consistently in resource contention (including Randy's). It is not impossible; just unlikely to consistently be the issue.

The disk latency measurement (ioping) shows that the NVMe disk has half the latency of the C5. This is a dramatic difference, with significant implications for latency-sensitive workloads such as real-time analytics.
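ioping itself is straightforward to run if you want to reproduce the comparison (the mount point below is a placeholder):

```
# Take 10 latency samples against the NVMe instance-store mount:
ioping -c 10 /mnt/nvme
```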

Also note that if you upgraded from _t3.micro_ to _c5d.large_ you're now running a much more powerful instance. No wonder you see lower latency!

I've been paranoid that the websocket feed I was listening to on a `t3.micro` instance was being inhibited by cpu steal time from other instances under the same hypervisor. So I switched over to a `c5d.large` instance and definitely noticed less latency.
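For what it's worth, steal time is directly measurable, so you don't have to guess; anything persistently above a few percent in the steal column would back up the theory:

```
# %steal column, sampled once per second for 5 seconds (sysstat package):
mpstat 1 5
# Or check the 'st' field in top's %Cpu(s) line:
top -b -n 1 | head -n 5
```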

It reads and writes files using the instance storage. In synthetic tests it looked fishy:

```
dd if=/dev/zero of=/mnt/testfile bs=1G count=1
```

- Day 1: 500 MB/s
- Day 2: 120 MB/s
- Days 3-4: 40 MB/s
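One caveat with that invocation: dd without a flush partly measures the page cache rather than the disk, which can exaggerate day-to-day swings. A fairer sketch:

```
# Force data to hit the disk before dd reports a throughput figure:
dd if=/dev/zero of=/mnt/testfile bs=1G count=1 conv=fdatasync
# Or bypass the page cache entirely with direct I/O:
dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct
```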

For the NYC Local Zone, the available instance types are limited, and the cheapest among them is t3.medium at an on-demand hourly rate of $0.052. `c5d.2xlarge` comes in at $0.48 per hour, and only two other instance types are cheaper than the C5D.
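You can confirm exactly which types a Local Zone offers from the CLI; the zone name below (us-east-1-nyc-1a) is an assumption, so substitute your own:

```
aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=location,Values=us-east-1-nyc-1a \
    --region us-east-1 \
    --query 'InstanceTypeOfferings[].InstanceType'
```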

Our application is highly dependent on low latency for a successful user experience. To achieve this, we have decided to use AWS Local Zones. However, the NYC Local Zone has a small subset of EC2 instance types available relative to other Local Zones. The smallest type available in the NYC zone for the c5d class is c5d.2xlarge, which is a significant cost increase compared to our other Local Zones, where we use the c5.large instance type. Is there a way to reduce this cost or choose a smaller version of the c5 or c5d instance type?

You may find the new EC2 instance family equipped with local NVMe storage useful: **C5d**. See the announcement blog post:

CPU credits only apply to T2/T3 instances. Each T2/T3 instance accrues CPU credits continuously at a rate determined by its size, and spends them whenever it's actually in use (i.e. not idle). When it runs out of credits it either slows down to the baseline performance (T2 default) or keeps running at full speed, with you paying for the extra credits it needs (T3 default and T2 "unlimited" mode).
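You can watch the balance itself in CloudWatch; a sketch with a placeholder instance ID, region, and time window:

```
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2024-01-01T00:00:00Z --end-time 2024-01-01T06:00:00Z \
    --period 300 --statistics Average \
    --region us-east-1
```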

t3.micro instances run in unlimited mode by default, meaning no throttling takes place: if you exceed the CPU credits allocated to your instance, you simply wind up paying for more credits automatically (if this happens all the time, it will actually be more expensive than running a higher-class `m` instance). It is unlikely that "CPU steal" is the cause of your performance problem; it is much more likely that the bigger (and costlier) `c5d.large` can simply run your code faster.
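If you'd rather be throttled than silently billed, you can flip a T3 back to standard mode (the instance ID below is a placeholder):

```
# Check the current credit mode (standard vs. unlimited):
aws ec2 describe-instance-credit-specifications \
    --instance-ids i-0123456789abcdef0
# Switch to standard: baseline throttling instead of surprise charges.
aws ec2 modify-instance-credit-specification \
    --instance-credit-specifications 'InstanceId=i-0123456789abcdef0,CpuCredits=standard'
```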