Cloud Mercato tested CPU performance using a range of encryption speed tests.
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To measure IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
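
For readers who want to approximate that methodology on their own instance, here is a minimal sketch (not Cloud Mercato's exact command) that drives fio from Python with the parameters described above; the device path, queue depth and runtime are placeholder assumptions.

```python
import json
import subprocess

# Placeholder target: a raw block device (no filesystem), e.g. a secondary EBS volume.
DEVICE = "/dev/xvdf"  # assumption -- replace with your own volume

cmd = [
    "fio",
    "--name=rand-read-iops",
    f"--filename={DEVICE}",
    "--rw=randread",          # random access
    "--bs=4k",                # 4K block size
    "--direct=1",             # bypass page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",           # assumed queue depth
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]
print("read IOPS:", round(job["read"]["iops"]))
```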


The T series is more suitable for test environments where performance does not need to be verified.

T2 is a burstable instance type. If you run out of CPU credits, the CPU is throttled and performance degrades.

This blog post explains AWS EC2 instance types and categories and provides some recommendations to help you make the right decision when you need to run an instance in the Amazon cloud.

I think the discrepancies can be attributed to the choice of the t-style instances. They are generally overcommitted.

Aren't 't' instances burst instances? They need to be under constant load for a long time before their burst credits for CPU, memory, network and EBS run out, after which they fall back on their baseline performance.

Thank you for this article. We have T instances for EC2 and RDS and we are experiencing some very strange performance behavior. Do you have plans to test RDS?

This is super well documented by AWS themselves, and if you understood how they work before creating the article then you probably would not have written it. Please do research before writing scare articles just for clicks. That's just lame, brother.

Suitable for lightweight monitoring agents or logging services for applications running on other instances.

Can host APIs for applications with low traffic demands, allowing for cost-effective backend services.

Can run scheduled tasks or cron jobs that require minimal processing power, such as data backups or simple automation scripts.

Great for students or individuals learning cloud computing, Linux, or specific programming languages and frameworks.

Useful for quickly prototyping applications or services without incurring significant costs.

Can be used as a gateway or processing node in small IoT applications that do not require heavy computation.

T2.nano instances are the best option to run microservices.

If you want to run your application on Elastic Beanstalk, you can use t2.nano instances to host your application.

T2.nano instances can be used for applications that need little memory and do not require consistently high CPU load on average.

T2 instances do not have Unlimited mode turned on by default. Without Unlimited mode, once the CPU credits have been exhausted the instance goes into a throttled state: its CPU performance and network performance are reduced considerably until the CPU credits have accumulated again. We've seen this first hand on quite a few occasions now, even causing production outages.
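
CPU credits are exposed as CloudWatch metrics, so you can watch for this before it bites. Below is a minimal boto3 sketch (the instance ID and time window are placeholders) that pulls the recent CPUCreditBalance for a T2 instance.

```python
from datetime import datetime, timedelta, timezone

import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder -- your T2 instance

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",   # see also CPUCreditUsage
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=now - timedelta(hours=3),
    EndTime=now,
    Period=300,                      # 5-minute datapoints
    Statistics=["Average"],
)

# Print the credit balance over the last few hours, oldest first.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```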

Pretty much anything I currently use a t2.micro for. I'm looking forward to creating "nanoservices" - just like microservices, but more edgy.

I use them for scraping. I designed my nano servers in a master-slave architecture. Now you want to scrape 50 million entries? Sure, let's do it.

I run a few personal servers which don't get much traffic or use much RAM but still cost me the same as a t2.micro. It makes me choose between consolidating multiple apps on one server or paying for full t2.micros which I don't use.

I just use them for jumpboxes (a box to SSH into; the other servers then have that IP whitelisted for incoming access).

For starters, you should consider T2 any time that you would previously have used T1 (micro) or M1 (small or medium) but want better performance/cost. According to AWS: "Many applications such as web servers, developer environments and small databases don't need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when they need them. T2 instances are engineered specifically for these use cases."

- Can be used as a Dev/Test server
- Can be used to host blogs or web applications with a small user base
- Can be used for learning purposes, such as installing Kali Linux and doing all sorts of security testing
- Can be used as a Web DMZ
- Can be used as a miscellaneous server (e.g. for executing all your custom scripts from this box, such as pinging to check whether various daemons are running)

When you have distributed system software, it is best to test it in as granular and distributed a manner as possible, i.e. by using many small nodes, to identify when communication becomes a bottleneck, how the failure modes happen, and what the speed and loss of each communication style (like gossip) are. So you can simulate a big system by down-scaling the nodes while keeping a realistic node count. This is how you test your distributed algorithms and software and see how the rigidities (e.g. outages, errors, bottlenecks and thread starvation) propagate through the system, thus confirming or invalidating…
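
As a toy illustration of that point (not tied to any particular AWS service or real cluster), the sketch below simulates naive push gossip and shows how the number of rounds to full propagation grows with node count; the fan-out and loss rate are made-up parameters.

```python
import random

def gossip_rounds(n_nodes, fanout=3, loss_rate=0.1, seed=42):
    """Naive push gossip: each informed node pushes to `fanout` random peers
    per round; each push is lost with probability `loss_rate`.
    Returns the number of rounds until every node is informed."""
    rng = random.Random(seed)
    informed = {0}                      # node 0 originates the message
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        newly = set()
        for node in informed:
            for peer in rng.sample(range(n_nodes), fanout):
                if peer != node and rng.random() > loss_rate:
                    newly.add(peer)
        informed |= newly
    return rounds

# Same algorithm, different behaviour at different cluster sizes:
for n in (4, 16, 64):
    print(n, "nodes ->", gossip_rounds(n), "rounds to full propagation")
```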

The AWS EC2 t2.nano instance is a low-cost option designed for applications that require minimal computing power. Here are some good uses for the t2.nano instance:
1. Development and Testing: ideal for lightweight development environments or testing applications before deploying on more powerful instances.
2. Web Hosting: suitable for hosting small static websites or simple dynamic web applications that do not require extensive resources.
3. Microservices: can serve as a backend for lightweight microservices, particularly in development or low-traffic scenarios.
4. Small Databases: appropriate for small databases, such as SQLite or lightweight instances of MySQL or PostgreSQL, with minimal concurrent users.

T2.nano instances are useful in so many cases. Some of those are:
* If you are running a non-critical website with low user interaction (light load) and want to decrease cost while maintaining minimal performance, then T2.nano instances are the best option.
* You can use T2.nano instances for testing purposes. If you want to test an application under distributed load, you can save cost by using t2.nano instances.
* Bastion servers are used for secure communication with instances launched in private subnets. You can use a T2.nano instance as a bastion server that sits in the public subnet.

You can consider T2 an upgrade of T1. In general, T2 offers faster access to memory and disk compared to T1.

Can you share a screenshot of the recommendation? Normally it will recommend enough of the smallest size in an instance family to cover being amalgamated into whatever you are running. So 54 t2.nanos may be to cover a t2.xlarge (32 nanos), a t2.large (16 nanos) and 3 t2.micros (2 nanos each so 6 nanos) or some other combination. It shouldn't be recommending t2.nano reserved instances for m5 instance types because they can't be used that way.

For a performance test that uses CloudFront, high volume and a small instance type, assign sufficient ramp-up time before reaching the full number of concurrent users.

I wouldn’t recommend a t2.nano for running an important blog that serves hundreds of concurrent users, but I would consider using one to power a low-traffic or non-critical internal site. In this test, the t2.nano alone could reliably handle 10 concurrent users with a think time of 30-45 seconds between requests.

I don’t recommend using a t2.nano to power a critical, high-traffic WordPress site. In such a case, I recommend using a combination of large instance types, ELB, Autoscaling and RDS with read replicas.

It is possible to serve thousands of visitors a month using a t2.nano EC2 instance and a CloudFront distribution.

The t2.nano is a very useful instance capable of running workloads such as a low-traffic WordPress blog or a JMeter load generator.

By introducing CloudFront, I increased the number of concurrent users 10x to 100 with a sustainable CPU consumption of less than 5% (which will not deplete my instance’s CPU credits). CloudFront also delivers a much better experience to visitors by reducing response times significantly (in this experiment, from ~160ms to ~20ms)

The intention of this test is to see what type of traffic a t2.nano instance can handle reliably without using CloudFront.

I set up a WordPress blog on a t2.nano instance with an 8 GB EBS volume (SSD).

Probably so AWS free tier customers pay more once they’re off the free tier in 12 months (I used to use t2.nano a lot thinking it was covered by the free tier and it wasn’t).

The t2.nano instance is not included in the Free Tier for a few reasons:
1. Resource Allocation: The t2.nano instance has very low resource allocation (1 vCPU and 0.5 GiB of memory). AWS likely opted for the t2.micro instance, which offers 1 vCPU and 1 GiB of memory, as it provides a more meaningful experience for users who are testing or learning about EC2.

Good point - the sum of a lot of little instances isn't the same as a single larger instance. The OP needs to figure out their application's memory/CPU minimums, which will indicate a minimum instance size.

The general idea behind this is that instead of provisioning one large server for peak load, you have a larger number of smaller servers that scale up and down automatically to meet load. You put your servers behind an Application Load Balancer. This also gives you redundancy, in case something goes wrong with one server.

54 t2.nano is an odd recommendation. Maybe it's optimal, but it's not intuitive. It also means each server has very little RAM, which might not work for the application. T instances can also run out of CPU credits, so I wouldn't use them behind a load balancer. If you turn on the option on T instances to buy extra credits, it costs more than using a non-T instance.

The m5.xlarge isn't a particularly large server, so it's more difficult to split it up. I would stay with the m series; the smallest size is m5.large, so you would probably scale between 1 and 3 of them. If it's a fairly steady-state application and the cost isn't a problem, the easiest option is to stay with your m5.xlarge.
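
As a concrete (hypothetical) sketch of "scale up and down automatically to meet load", this boto3 snippet attaches a target-tracking scaling policy to an existing Auto Scaling group that sits behind a load balancer; the group name and target CPU value are assumptions, not something from the thread.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group already registered with an ALB target group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",          # placeholder name
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                     # assumed target; tune for your workload
    },
)
```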

I think this is much closer to being the correct answer than the one above (which currently has more votes), but I've never seen it recommend buying reserved instances in another family type, particularly not a previous generation. It suggests to me that the OP is receiving a recommendation to buy reserved instances for other t2 instances that sum to 54 nanos, and this is unrelated to the m5 instance. I've asked the OP to share a screenshot of the recommendation to check this.

There are a few things to understand:

1. Reserved Instances are just a _billing construct_. AWS will try to match the purchased Reserved Instances against your running instances at billing time, i.e. you don't assign RIs to your actual EC2 instances; you get the discount automatically.

2. Reserved Instances capacity doesn't have to match the running instances. The price for `t2.medium` is the same as for 2x `t2.small` or 8x `t2.nano`. So if you purchase **32x t2.nano** it would fully cover the price of **1x t2.xlarge**. From the billing perspective it's the same. On the other hand, **t2._anything_** won't be applied against **m5._anything_** - they are a different instance class. You can buy **2x m5.large** instead of **1x m5.xlarge** reserved instances - same thing from a billing perspective.

3. Now why does it recommend _54x t2.nano_? Probably it found that your actual needs are somewhere between `t2.xlarge` and `t2.2xlarge` - and that's best expressed as _54x t2.nano_. Depending on your application, you may or may not be able to spread the load over a **number of smaller instances**. I wouldn't go to _54x t2.nano_, but perhaps **3x t2.large** could be a good option? You can then set up auto-scaling to remove some of the nodes during quiet times and save. You could even use [**Spot Instances**](https://aws.amazon.com/ec2/spot/) and save even more. However, for both ASG and Spot you'll need some automation in place.

4. For much greater flexibility, look at **[AWS Savings Plans](https://aws.amazon.com/savingsplans/)** - with that you'll be able to migrate your application to newer instance types, mix and match instance types, etc. With _Reserved Instances_ you're locked to a particular instance class in a particular region. With _Savings Plans_ you only commit to a certain spend per month, and it's up to you how you use it.
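
To make the size-flexibility arithmetic concrete, here is a small sketch using the normalization factors AWS publishes for size-flexible Reserved Instances within one family (nano = 0.25, micro = 0.5, small = 1, doubling from there); it reproduces both the 32-nano = 1 xlarge equivalence and the 54-nano breakdown mentioned earlier.

```python
# Normalization factors used for size-flexible Reserved Instances
# within a single instance family (e.g. t2).
FACTORS = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16,
}

def units(count, size):
    """Total normalized units for `count` instances of a given size."""
    return count * FACTORS[size]

# 32x t2.nano covers exactly 1x t2.xlarge:
assert units(32, "nano") == units(1, "xlarge")  # 8.0 == 8.0

# 54x t2.nano (13.5 units) could cover, e.g., 1x xlarge + 1x large + 3x micro:
combo = units(1, "xlarge") + units(1, "large") + units(3, "micro")
print(units(54, "nano"), combo)  # 13.5 13.5
```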

We have deployed a web application on an m5.xlarge EC2 instance, and when we try to buy an annual or 3-year reserved license, AWS recommends, based on our current usage, purchasing 54 t2.nano instances instead of the m5.xlarge we have now. It calculates and shows a difference in the overall cost and shows that going with that option is more profitable for us. The thing I can't understand is: what does it mean to buy 54 t2.nano instead of one m5.xlarge? Does it mean we need to host the application on all 54 nano EC2 servers and then put them behind an ELB? I am a bit confused here about what to do.

10 concurrent users with a random think time of 30-45 seconds resulted in ~0.3 requests per second, or ~18 requests per minute.

The average response time is < 160 ms.

CPU did not exceed 6% during this test
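
The arithmetic behind those numbers is simple closed-loop load math: with N users each waiting a think time between requests, the offered rate is roughly N divided by the average cycle time. A quick sketch using the figures above:

```python
# Closed-loop request rate: N users, each cycling through think time + response time.
users = 10
think_low, think_high = 30.0, 45.0   # seconds, from the test setup
response = 0.16                      # ~160 ms average response time

low_rate = users / (think_high + response)   # slowest case
high_rate = users / (think_low + response)   # fastest case
print(f"{low_rate:.2f} - {high_rate:.2f} requests/second")
print(f"{low_rate * 60:.0f} - {high_rate * 60:.0f} requests/minute")
# Roughly 0.22 - 0.33 req/s (13 - 20 req/min), consistent with the observed ~0.3 req/s.
```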

One thing I like about the t2.nano is that its modest 512 MB of RAM is sufficient to sustain a WordPress blog with low traffic.

Thanks to AWS CloudFront, the answer is yes!

I was curious about what type of load a t2.nano EC2 instance can handle. I also wanted to demonstrate how CloudFront reduces response times for a website and how the capacity of a modest instance can be extended by using CloudFront.
