Cloud Mercato tested CPU performance using a range of encryption speed tests. It also tested the I/O performance of this instance using a 100 GB General Purpose SSD.
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To measure IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access on the root volume), and avoidance of cache and buffering.
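
As a sketch, those parameters map onto a fio job file roughly like this; the device path, queue depth, and runtime are illustrative assumptions, not Cloud Mercato's exact configuration:

```ini
; Hypothetical fio job matching the described setup: 4K blocks,
; random access, direct I/O (bypassing cache and buffers), run
; against a raw block device (no filesystem).
[randread-iops]
filename=/dev/xvdf
rw=randread
bs=4k
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based=1
```
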


The T series is more suitable for test environments where performance does not need to be verified.

I think the discrepancies can be attributed to the choice of the t-style instances. They are generally overcommitted.

Aren't 't' instances burst instances? They need to be under constant load for a long time before their burst credits for CPU, memory, network and EBS run out, after which they fall back on their baseline performance.
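
A minimal sketch of that credit mechanism, using the published t2.micro figures (6 credits earned per hour, a 144-credit cap, 1 credit = one vCPU at 100% for one minute); the `simulate` helper is hypothetical:

```python
# Sketch of the T2 CPU-credit bucket using t2.micro figures:
# 6 credits earned per hour, 144-credit cap, and 1 credit = one
# vCPU at 100% for one minute (so an hour at 100% burns 60 credits).

EARN_PER_HOUR = 6    # credits accrued per hour (t2.micro)
MAX_BALANCE = 144    # credit balance cap (t2.micro)

def simulate(utilization, hours, balance=MAX_BALANCE):
    """Credit balance after running one vCPU at a fixed utilization
    (0.0-1.0) for `hours` hours; 0.0 means the credits ran out."""
    for _ in range(hours):
        spent = utilization * 60              # credits burned this hour
        balance = min(balance + EARN_PER_HOUR - spent, MAX_BALANCE)
        if balance <= 0:
            return 0.0                        # exhausted: throttled to baseline
    return balance

# 10% utilization is the break-even baseline: the balance never drains.
print(simulate(0.10, 1000))   # 144.0
# A sustained 100% load drains a full bucket in under three hours.
print(simulate(1.0, 3))       # 0.0
```
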

Thank you for this article. We have T instances for EC2 and RDS and we are seeing some very strange performance behavior. Do you have plans to test RDS?

This is super well documented by AWS themselves, and if you understood how they work before creating the article then you probably would not have written it. Please do research before writing scare articles just for clicks. That’s just lame, brother.

- Can be used as a dev/test server

A t2.2xlarge instance with 8 vCPUs stuck in a 100% CPU grind will cost you $67.20 a week. At that point, an m5.2xlarge may be a better choice.
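
The arithmetic behind that figure, with hourly rates as rough assumptions (us-east-1 on-demand pricing; the ~$0.40/hour effective rate for a pegged t2.2xlarge is taken to include Unlimited surplus-credit charges):

```python
# Rough weekly-cost comparison. Rates are illustrative on-demand
# prices and may be outdated; treat them as assumptions.

HOURS_PER_WEEK = 24 * 7   # 168

t2_pegged_hourly = 0.40   # assumed t2.2xlarge at 100% CPU incl. surplus charges
m5_hourly = 0.384         # assumed m5.2xlarge on-demand rate

t2_weekly = t2_pegged_hourly * HOURS_PER_WEEK
m5_weekly = m5_hourly * HOURS_PER_WEEK

print(f"t2.2xlarge at 100% CPU: ${t2_weekly:.2f}/week")  # $67.20
print(f"m5.2xlarge:             ${m5_weekly:.2f}/week")  # $64.51
```
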

T2 instances do not have Unlimited mode turned on by default. Without Unlimited mode, once the CPU credits have been exhausted, the server is throttled: its CPU and network performance are reduced considerably until CPU credits have accumulated again. We've seen this first hand on quite a few occasions now, even causing production outages.

You can count t2 as an upgrade of t1. In general, t2 offers faster access to memory and disk compared to t1.

Why use t2 instances at t2.large or larger? The price difference to go to m5 instances is negligible[1], and with m5 you don't have to worry about CPU credits and you get better networking.

We pre-purchase the instance, and we also have one large instance, which is shared amongst about 9 different projects to save costs.

The main issue here is COST. It is incredibly difficult to predict the COST of all these services on AWS and this can lead to some pretty insane bills as the little things add up quickly.

DO has most of that I think - DNS, Object Storage, reuse snapshots, API, floating IPs, load balancers, reliability on par with AWS, and simpler pricing, but no auto-scaling that I'm aware of.

I would add Route 53, S3, and the ease of setting up security groups as killer features. Plus the ability to take a snapshot and boot an instance from it, awesome API, elastic IPs and reliability. I’ve had 8 instances across 3 data centres, for 5 years, with a single hardware failure where I’ve had to take action. Their shit just works.

I agree that AWS pricing and documentation can be incredibly obtuse in some cases and that it's often overkill for small personal projects.

If you are not looking for an exotic instance type or instance type flexibility, Vultr provides superior servers at every comparable EC2 price point.

Maybe I'm just an oddball, but I've found significant use in setting up shared infrastructure including a mailserver, gitlab, VPN server and similar, with things like CodeDeploy replaced by a simple & reliable git hook.

I do use Digital Ocean for some projects, and have been looking at Vultr recently as well.

The post raises some good points, but in reality, I've been using T2 instances on my projects for a long time with good results.

If your load is still consistently high enough, it is possible to be caught in an ASG cycling situation with t2s that can affect your application.

Proper architecture and planning up front will save all kinds of headaches.

If I remember correctly, just setting autoscaling based on CPU usage already works fine and does the right thing with T2 instances as well - if you don't have enough credits to drive your load, your CPU is going to be stuck at 100%, irrespective of credit balance or instance size.

Great points on the T2 Unlimited and an interesting solution for enabling T2 Unlimited and alerting.

And CloudFormation users can specify it here:
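
As a sketch, the `CreditSpecification` property on an `AWS::EC2::Instance` resource looks like this (resource name and AMI are placeholders):

```yaml
Resources:
  BurstableInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.large
      ImageId: ami-12345678        # placeholder AMI
      CreditSpecification:
        CPUCredits: unlimited      # enable T2/T3 Unlimited mode
```
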

Terraform just released support for the unlimited flag via the 'credit_specification' field
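
For illustration, a minimal `aws_instance` resource using that field (AMI and instance type are placeholders):

```hcl
resource "aws_instance" "burstable" {
  ami           = "ami-12345678"   # placeholder AMI
  instance_type = "t2.large"

  # Enable Unlimited mode so the instance can burst past
  # its credit balance (surplus credits are billed).
  credit_specification {
    cpu_credits = "unlimited"
  }
}
```
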

You will likely find it is difficult or impossible to access the server to take any measures to solve the issue until the CPU credits have accrued.

T2 instances are a low-cost, general-purpose instance type that provides a baseline level of CPU performance with the ability to burst above the baseline.

I’ve found them really great for a memory-bound Rails app that needs very little CPU.

I do turn on t2 unlimited mode in case some horrible bug pegs the CPU.

In situations where cost is more of a concern and the application has "bursty" performance requirements, I still believe the T2 instance has its place.

I definitely agree: for high-traffic applications, compute- or memory-optimized instances are better choices due to their consistent performance, especially when price isn't as much of a concern.

As I myself learned the hard way, T2 are awful for production websites with sustained high traffic.

This is false information. An instance retirement event is just a host maintenance event, NOT a replacement process.

So there is no specific date for ending support for older instance types; they are retired gradually, and we are only notified through scheduled events?

For detailed information on the process and implications of instance retirement, see [Understanding Instance Retirement on AWS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html). See also [Scheduled events for your instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html).

That's great news! Thank you for the link.

I have a very simple web app that, when it is being used (rarely), is fairly computationally demanding (in a relative sense). That kind of load suits the T2 instance really well in my experience.

What would be the purpose (for the layman)? Handling spikey event ingress?

They're still good for:
- non-production dev servers
- executing recurring scheduled events with a known estimate of CPU usage (cron jobs)
- potentially for not-overly-busy build servers
I wouldn't use them for anything that needs production reliability though.

I would say the same about t2s for the most part.

As sktan mentioned, T2, T3a and T3 are burstable, which means you don't get to use 100% of the CPU all the time. If your job is CPU-bound, I doubt a burstable instance will be cost effective.

The difference between t2 and c6g instances is the burstable nature of the t2 instance. A t2.micro is cheap because of the way the credit system works, where you can't always use 100% of your CPU and can only burst there periodically.
