Cloud Mercato tested CPU performance using a range of encryption speed tests:
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storages attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS the following parameters are used: 4K block, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
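For reference, the setup described above maps onto a fio job file along these lines (the job name, device path, queue depth, and runtime are illustrative assumptions, not Cloud Mercato's actual configuration):

```ini
; Sketch of a 4K random-access IOPS job per the description above:
; direct I/O to bypass cache and buffers, raw block device (no filesystem).
[global]
ioengine=libaio
direct=1            ; avoid OS cache and buffers
bs=4k               ; 4K block size
iodepth=32          ; assumed queue depth
runtime=60
time_based=1

[randread]
rw=randread
filename=/dev/xvdf  ; assumed raw block device path
```

Run with `fio jobfile.fio`; fio reports IOPS per job in its summary output.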

the T series is more suitable for non-performance-verified test environments

T2 is a burstable instance type. If you run out of CPU credits, the CPU is throttled and performance degrades.

I know that t2.x instances are burstable, which is why I tried the other machine types. And I know that this behavior shouldn't happen on any of them, while in Germany it works with all of the same types. To clarify: I still have CPU credits available, more than enough. The performance is different between the two regions.

We are experiencing serious issues with the server performance in India, Mumbai (ap-south-1). We set up a dev environment in Germany and managed to get 50 server FPS capped (a good server tick rate), which delivers a smooth multiplayer experience. We used EC2 t2.medium On-Demand instances to run the game server. Now when we transferred the game server setup to India, likewise using t2.medium On-Demand instances, we got **much** worse performance than in Germany: on average 30 server FPS, a huge difference that creates CPU lag when playing.

I have a PostgreSQL RDS v11.22 on a db.t2.medium. I have to upgrade the database version, the instance type, and the certificate. I'm trying to do this but I'm getting the following error.

I think the discrepancies can be attributed to the choice of the t-style instances. They are generally overcommitted.

Aren't 't' instances burst instances? They need to be under constant load for a long time before their burst credits for CPU, memory, network, and EBS run out, after which they fall back on their baseline performance.

Thank you for this article. We have T instances for EC2 and RDS and we are experiencing some very strange performance behavior. Do you have plans to test RDS?

This is super well documented by AWS themselves, and if you had understood how they work before creating the article then you probably would not have written it. Please do research before writing scare articles just for clicks. That's just lame, brother.

T2 instances do not have Unlimited mode turned on by default. Without Unlimited mode turned on, once the CPU credits have been exhausted, the server goes into a shallow resource usage state. Its CPU performance and network performance are lessened considerably until the CPU credits have accumulated again. We've seen this first hand on quite a few occasions now, even causing production outages.
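The drain described above can be sketched numerically. A minimal model, assuming AWS's published t2.micro figures (roughly 6 credits earned per hour; 1 credit = 1 vCPU at 100% for 1 minute, so full burst spends 60 credits per hour; a maximum balance of 144 credits):

```python
# Rough sketch of T2 CPU-credit drain; not an official AWS model.
# Assumed t2.micro figures: ~6 credits earned/hour, full burst on one
# vCPU spends 60 credits/hour.

EARN_PER_HOUR = 6.0   # assumed t2.micro accrual rate
SPEND_FULL = 60.0     # credits/hour at 100% CPU on one vCPU

def hours_until_throttled(balance, utilisation):
    """Hours until the credit balance hits zero at constant utilisation.

    Returns None if credits accrue at least as fast as they're spent
    (i.e. the instance is at or below baseline and never throttles).
    """
    net_drain = SPEND_FULL * utilisation - EARN_PER_HOUR
    if net_drain <= 0:
        return None  # at or below baseline: the balance never drains
    return balance / net_drain

# Starting from a full balance (assumed max of 144 credits):
print(hours_until_throttled(144, 1.0))   # 100% CPU -> ~2.7 hours of burst
print(hours_until_throttled(144, 0.10))  # 10% (assumed baseline) -> None
```

Without Unlimited mode, hitting zero means dropping to the baseline rate until the balance recovers, which matches the throttling behavior described above.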

As other posts flagged, this is hard to predict. It depends on the load the requests place on the instances.

Realistically, there's no way to tell because it depends on what your code/page/environment is doing on that server. The best way to figure it out is to test it: bring the server up, throw requests at it, and measure.

It's hard to answer this question without knowing what the requests are doing exactly. Generally speaking, 150 requests per second (rps) is quite a lot, and unless they are really easy to serve (or cached), such a small instance might struggle.

You can count t2 as an upgrade of t1. In general, t2 offers faster access to memory and disk compared to t1.

Similarly, if an application or database needs lots of CPU to serve individual requests with low latency, but idles between requests, then a T2 is advantageous too.

They're perfectly adequate for many cases where you just need _a_ server (or HA pair) continuously up. The 2 hours of full burst that a t2.micro can sustain (or ~5 for small/medium) is plenty of time to react automatically to sustained load. They are also perfect where workloads are moderate and disk- or network-I/O bound. t2.micro/medium hosts especially make the best Jenkins servers.

Same concept applies to t2.medium. Also, using smaller instances allows for tighter auto (or manual) scaling, which allows one to save money.

Your server will be able to do more work in parallel. If it's just serving static web pages I wouldn't expect much difference. If it's executing php on a loaded server then I would expect some improvement.

My application receives 1 request per minute from each device. For instance, if it is used by over 10,000 devices, then 10,000 requests per minute are made to the server. My question is whether an EC2 t2.medium can handle this load, and if not, which other instance type you would suggest.
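Purely on arithmetic (using only the numbers stated in the question; the per-request cost is the unknown), the load converts to requests per second like this:

```python
# Back-of-envelope conversion of the stated load into requests per second.
devices = 10_000        # each device makes 1 request per minute
rps = devices * 1 / 60  # requests per second
print(round(rps, 1))    # -> 166.7
```

That lands slightly above the ~150 rps figure discussed in the replies, so the same caveat applies: whether the instance copes depends entirely on how cheap each request is to serve.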

I can't seem to find info on what's considered "idle" for a t2 instance. Is it when it really does nothing - except running the OS as usual - or when it is below its allocated CPU usage %? And is it counted hourly, by the minute, or by the second? Like, if I have one visitor per minute on my website that needs 2-3 seconds of burst performance, will I get credits at all?
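For what it's worth, the one-visitor-per-minute case can be estimated with simple arithmetic. What matters is average utilisation versus the baseline; the 10% figure below is an assumed t2.micro baseline, so check AWS's credit table for your instance size:

```python
# Will a t2 instance earn or burn credits with one visitor per minute,
# each needing ~2.5 s of full-CPU work?
busy_seconds_per_minute = 2.5
avg_utilisation = busy_seconds_per_minute / 60.0  # ~4.2% average CPU
t2_micro_baseline = 0.10                          # assumed baseline figure

# Below baseline on average -> the credit balance grows rather than shrinks.
print(avg_utilisation < t2_micro_baseline)  # -> True
```

So under this assumption, short per-visitor bursts at that traffic level would leave the instance accruing credits overall.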

Great article, thanks for writing this up. Would be curious to see your benchmark against the instance types it supposedly replaces as well, specifically the m1.medium and m3.medium.

(Of course, with a t2.medium you have to pay extra for EBS and/or bandwidth charges.) Anyway, what I'd like to see is a network throughput comparison between t2 micros and DO/Linode...

Sorry for commenting on this rather old post, but it has a high pagerank when you google ec2 t2.medium benchmarks, and hence I think you should correct an important detail: your "baseline" designation is somewhat misleading - the t2 CPU credit principle does not operate with distinct burst and baseline performance modes; you just consume or earn credits at different paces.

you'll only get that instance's peak performance if (on average) it's less than 40% utilised

Benchmark your application on both and determine the right fit for you. That's the only way to know for sure.

One benchmark shows that t2.medium in bursting mode actually beats a c3.large (let alone the m3.medium, which is less than half as powerful, at 3 ECU vs 7).

For longer sessions, more data traffic, or higher numbers of concurrent users, I'd consider the M3.

If your T2 instance has a zero CPU Credit balance, performance will remain at baseline CPU performance.

If your application requires sustained high CPU performance, we recommend our Fixed Performance Instances, such as M3, C3, and R3.

Although the "hardware" specs look similar for the T2.medium instance and the M3.medium instance, the difference is when you consider Burstable vs. Fixed Performance.

Basically: "don't worry about it, use the cheapest and easiest thing that will run it."

Make sure you host all static assets in S3 and use caching well, and even the smaller AWS instances can handle hundreds of requests per second.

If you don't need the power to completely configure your server, shared hosting or a platform-as-a-service solution will be easier.

30000 hits per month is on average a visitor every 90 seconds. Unless your site is highly atypical, load on the server is likely to be invisibly small. Bursting will handle spikes up to hundreds (or thousands, with some optimizations) of visitors.
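The "visitor every 90 seconds" figure above is easy to sanity-check:

```python
# Sanity check: average gap between visitors at 30,000 hits/month.
hits_per_month = 30_000
seconds_per_month = 30 * 24 * 3600   # assuming a 30-day month
seconds_per_visitor = seconds_per_month / hits_per_month
print(round(seconds_per_visitor))    # -> 86, i.e. roughly one every 90 s
```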

If they can generate 20% CPU load continuously with a single visitor per 90 seconds, then they need to look at their code seriously.

t2.medium allows for burstable performance whereas m3.medium doesn't. t2.medium even has more vCPUs (2 vs 1) and memory (4 GB vs 3.75) than the m3.medium. The only performance gain is the SSD with the m3.medium, which I recognize could be significant if I'm doing heavy I/O.

For example, here are some use cases that we've found work really well on T2 instances: * Our email-sending service uses a **lot** of CPU on-the-hour, every hour. We send out daily traffic reports in different timezones, but it sits mostly idle in between those times.

* Most of the instances serving our front-end applications (everything you see on [gosquared.com](https://gosquared.com)) – these instances typically spend most of their time either idle between requests or waiting on network activity to other services.

For example, consider running one service that maxes out the whole CPU but only does so 10% of the time; in that situation a t2.micro instance is perfect.

As long as it hasn't run out of credits, a t2 instance's single-thread performance is the same; I have tested t2.micro & t2.medium. And as the screenshot above shows, t2.medium is faster than c4.large when bursting while also being half the price of c4.large, so the "pricing logic" doesn't fit here.

A t2.medium is half the price of an m4.large. Could you try the tests again on a t2.large and m4.large (as per the diagram)?

This is false information. An instance retirement event is just a host maintenance event, NOT the replacement process.

So there is no specific date ending support for older instance types; they are retired gradually, and we are only notified through scheduled events?

For detailed information on the process and implications of instance retirement, please refer to the following resource: [Understanding Instance Retirement on AWS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html). See also [Scheduled events for your instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html).

That's great news! Thank you for the link.
