AWS

t3a.2xlarge

EC2 Instance

AMD-based burstable performance instance with 8 vCPUs and 32 GiB memory. Highest capacity in T3a family for larger applications with variable CPU requirements.
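As a quick, illustrative sketch (not part of the original page data), these published specifications can be confirmed programmatically with boto3, assuming AWS credentials and a default region are already configured:

```python
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instance_types(InstanceTypes=["t3a.2xlarge"])
info = resp["InstanceTypes"][0]

print("vCPUs:", info["VCpuInfo"]["DefaultVCpus"])            # expected: 8
print("Memory (MiB):", info["MemoryInfo"]["SizeInMiB"])      # expected: 32768
print("Burstable:", info["BurstablePerformanceSupported"])   # expected: True
```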

Pricing of
t3a.2xlarge

Pricing Model | Price (USD) | % Discount vs On Demand
On Demand | N/A | N/A
Spot | N/A | N/A
1 Yr Reserved | N/A | N/A
3 Yr Reserved | N/A | N/A
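Since the table above has no data yet, here is a hedged sketch of how list prices for t3a.2xlarge can be pulled from the AWS Price List API with boto3; the region, location name, and filter values are example assumptions, not figures from this page:

```python
import json
import boto3

# The Price List API is served from a small set of regions; us-east-1 is used here.
pricing = boto3.client("pricing", region_name="us-east-1")
resp = pricing.get_products(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "t3a.2xlarge"},
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
    MaxResults=1,
)

# Each PriceList entry is a JSON document; the On Demand hourly rate sits in
# terms -> OnDemand -> priceDimensions -> pricePerUnit.
product = json.loads(resp["PriceList"][0])
for term in product["terms"]["OnDemand"].values():
    for dim in term["priceDimensions"].values():
        print(dim["description"], dim["pricePerUnit"]["USD"])
```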
Spot Pricing Details for
t3a.2xlarge

Here are the latest Spot prices for this instance across the region's Availability Zones:

Availability Zone | Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month. It is reported in the ranges <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
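Current per-AZ Spot prices can also be retrieved directly. Below is a minimal sketch using boto3's describe_spot_price_history; the region and product description are example assumptions:

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_spot_price_history(
    InstanceTypes=["t3a.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)

# Keep only the most recent price reported for each Availability Zone.
latest = {}
for item in resp["SpotPriceHistory"]:
    az = item["AvailabilityZone"]
    if az not in latest or item["Timestamp"] > latest[az]["Timestamp"]:
        latest[az] = item

for az, item in sorted(latest.items()):
    print(az, item["SpotPrice"])
```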
Compute features of
t3a.2xlarge
Feature | Specification
Storage features of
t3a.2xlarge
Feature | Specification
Networking features of
t3a.2xlarge
Feature | Specification
Operating Systems Supported by
t3a.2xlarge
Operating System | Supported
Security features of
t3a.2xlarge
Feature | Supported
General Information about
t3a.2xlarge
Feature | Specification
Benchmark Test Results for
t3a.2xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm | Speed (1024-byte block size, 3 threads)
AES-128 CBC | 493.0 MB/s
AES-256 CBC | 366.4 MB/s
MD5 | 1.4 GB/s
SHA256 | 3.3 GB/s
SHA512 | 1.1 GB/s
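Cloud Mercato's exact harness is not reproduced here, but comparable numbers can be gathered on the instance itself with OpenSSL's built-in benchmark. A minimal sketch, assuming OpenSSL is installed, with -multi 3 matching the 3-thread setup (the 1024-byte column of the output corresponds to the block size quoted above):

```python
import subprocess

# Run each algorithm from the table above through `openssl speed`.
for algo in (["-evp", "aes-128-cbc"],
             ["-evp", "aes-256-cbc"],
             ["md5"], ["sha256"], ["sha512"]):
    subprocess.run(["openssl", "speed", "-multi", "3", *algo], check=True)
```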
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results (IOPS):

Statistic | Read IOPS | Write IOPS
Max | 3102 | 3101
Average | 3099 | 3098
Deviation | 0.51 | 1.6
Min | 3098 | 3093

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access on the root volume), and avoidance of cache and buffers.
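For reference, a fio invocation in the spirit of that methodology is sketched below; the device path is a placeholder and should point at a scratch volume, never at a disk holding data you care about:

```python
import subprocess

DEVICE = "/dev/nvme1n1"  # hypothetical attached 100 GB gp2/gp3 scratch volume

subprocess.run([
    "fio",
    "--name=randread",
    f"--filename={DEVICE}",
    "--rw=randread",        # use --rw=randwrite for the write-side test
    "--bs=4k",              # 4K blocks, as described above
    "--direct=1",           # bypass the page cache and buffers
    "--ioengine=libaio",
    "--iodepth=32",
    "--runtime=60",
    "--time_based",
    "--output-format=json",
], check=True)
```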

Community Insights for
t3a.2xlarge
AI-summarized insights

the T series is more suitable for non-performance-verified test environments

19-03-2025
benchmarking

Cost Explorer takes [up to] 24 hours to set up, so it's not a good answer to support questions about billing.

29-11-2017
cost_savings

AWS is incredibly complex. Are you complaining that their billing can get complex?

29-11-2017
cost_savings

The local NVMe storage for i3.metal is the same as i3.16xlarge. There are 8 NVMe PCI devices. For i3.16xlarge those PCI devices are assigned to the instance running under the Xen hypervisor. When running i3.metal, there simply isn't a hypervisor and the PCI devices are accessed directly.
- There is no hot swap for the NVMe storage.
- The 8 NVMe devices are discrete; there is no hardware RAID controller.
- Anyone can get I/O performance stats on i3.16xlarge as a baseline. Intel VT-d can introduce some overhead from the handling (and caching) of DMA remapping requests in the IOMMU and interrupt delivery, so I/O performance may be a bit higher on i3.metal, with a few microseconds lower latency.

29-11-2017
benchmarking

For all this progress the billing on AWS is so damn confusing to figure out if some machine is left on unused that I won’t use AWS again. GCE and Azure miles ahead here.

29-11-2017
cost_savings

Is this what khuey is referring to?:

29-11-2017

Thanks. I guess some other open questions:
- If one of those drives fails, will Amazon hotswap them out, or do you need to migrate to a new instance (moving TBs of data to a new box without causing outages can be painful)?
- Is there a hardware RAID controller for those drives, or is it software only?
- Can anyone with access to one of these boxes produce some IO performance stats on them? Bonus points for stats on single drive vs concurrent across all drives (i.e. is there any throttling). More points for RAID10 performance across the whole 8.

29-11-2017
benchmarking

It's exactly the same as with the i3.16xlarge instance type. There are eight 1900 GB drives. In an i3.16xlarge, those eight drives are passed through to the instance with PCIe passthrough but for the i3.metal instance, you avoid going through a hypervisor and IOMMU and have direct access.

29-11-2017

Storage – 15.2 terabytes of local, SSD-based NVMe storage. That's probably the most interesting aspect for me. Does anyone know how that's provisioned? i.e. 8x just under 2 TB volumes, or something else?

29-11-2017

I'm assuming rr is only unavailable for multithreaded apps? How frequently is rr available for your use?

29-11-2017

rr works fine on multithreaded (and multiprocess) applications. It does emulate a single core machine though, so depending on your workload and how much parallelism your application actually has it might be painful.

29-11-2017

I have two use cases:
- General performance analysis. For this, more counters is generally incrementally better.
- Running . This requires the retired-branch counter to be available (and accurate - sometimes virtualization messes that up).
The second one I actually care more about, because I've pretty much stopped trying to debug software when rr is not available, too painful ;). Feel free to email me (email is in my profile) for gory details.

29-11-2017
benchmarking

Seconding paulie_a, We're running a Xen stack right now and I haven't heard of this. We've worked around a few nasty bugs with Xen and linux doms already, but I'm wondering if we have this problem you're referring to and don't even know it.

29-11-2017

For the benefit of anyone reading this, KVM and VMWare virtualization generally work. Xen has problems because of a stupid Xen workaround for a stupid Intel hardware bug from a decade ago. I can provide more details about that via email (in my profile) if desired.

29-11-2017

Can you please just post the info. Intel deserves to be shamed

29-11-2017

One of the things the performance monitoring unit (PMU) is capable of doing is triggering an interrupt (the PMI) when a counter overflows. When combined with the ability to write to the counters, this lets you program the PMU to interrupt after a certain number of counted events. Nehalem supposedly had a bug where the PMI fires not on overflow but instead whenever the counter is zero. Xen added a workaround to set the value to 1 whenever it would instead be 0. Later this was observed on microarchitectures other than Nehalem and Xen broadened the workaround to run on every x86 CPU. Intel never provided any help in narrowing it down and there don't seem to be official errata for this behavior either. This behavior is ok for statistically profiling frequent events but if you depend on _exact_ counts (as rr does) or are profiling infrequent events it can mess up your day. [https://lists.xen.org/archives/html/xen-devel/2017-07/msg022...](https://lists.xen.org/archives/html/xen-devel/2017-07/msg02242.html) goes a little deeper and has citations.

29-11-2017
memory_usage, benchmarking

The t3 family is a burstable instance type. If you have an application that needs to run with some basic CPU and memory usage, you can choose t3. It also works well if you have an application that gets used sometimes but not others.

ParkMyCloud
25-02-2020
memory_usage

General purpose workloads with moderate CPU, memory, and network utilization. Save 10% over T3 instance prices.

2025-10-03 00:00:00
memory_usage, cost_savings

Amazon T3a instances share similar features with T3 instances, except that they run on AMD EPYC 7000 series processors, clocked at 2.5 GHz (all core Turbo).

2025-10-03 00:00:00
cost_savings

AWS re:Invent 2020: Reduce cost with Amazon EC2’s next-generation T4g and T3 instance types

2021-05-02 00:00:00
cost_savings

I think the discrepancies can be attributed to the choice of the t-style instances. They are generally over committed.

2023-09-10 00:00:00
benchmarking

Aren't 't' instances burst instances? They need to be under constant load for a long time before their burst credits for CPU, memory, network and EBS run out, after which they fall back on their baseline performance.

2023-09-10 00:00:00
memory_usage, benchmarking

Hi Tibor, I forgot to mention, another useful thing to test out would be to stop and start the EC2 instance, this should move the instance to a less busy host, just a trick that usually works.

2025-10-03 00:00:00
benchmarking
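As an illustrative aside (not from the quoted thread), the stop/start trick can be scripted with boto3; stopping and then starting (not rebooting) an instance lets EC2 place it on a different host. The instance ID below is a placeholder, and note that the public IP changes unless an Elastic IP is attached:

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical
ec2 = boto3.client("ec2")

# Stop the instance and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Start it again; EC2 may schedule it onto a different (less busy) host.
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```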

Additionally, your point about increased costs due to the larger instance size is noted, and I’ll factor that into my planning.

2024-10-12 00:00:00
cost_savings

Your explanation of the potential impacts of switching from Intel to AMD processors was especially helpful, and it's great to know that most applications should work seamlessly without needing significant configuration changes.

2024-10-12 00:00:00

I think many people, but not you obviously, have a naive view of AWS hardware provisioning. They think that if they suddenly deploy 50 EC2 instances, that Amazon will rush out and buy more servers and have them installed in order to meet instant demand. Of course, when you think about it, Amazon rely on having fairly high levels of usage in order to maximise profits, and having idle hardware lying around just in case someone wants it is a way to lose money, hence the spot pricing scheme to make at least some money from idle hardware, they can get their hardware back for exclusive use at any time by simply bidding the price up!

2025-10-03 00:00:00
cost_savings

Thank you for this article. We have T instances for EC2 and RDS and we are expecting some very strange performance behavior. Do you have plan to test RDS?

2025-10-03 00:00:00
benchmarking

t3a instances run on AMD EPYC CPUs, depending on your workload (threads, spikes, etc.), you will probably have similar performance if the right pieces match.

2020-10-01 00:00:00
benchmarking

t3a instances run on AMD EPYC CPUs, depending on your workload (threads, spikes, etc.), you will probably have similar performance if the right pieces match. In general, it seems that the t3 instances running Intel CPUs are usually faster. You can also get a 10% discount on your EC2 computing bill:

2020-10-01 00:00:00
benchmarking

T3a instances have AMD EPYC 7000 series processors with an all-core turbo CPU clock speed of up to 2.5 GHz. T3a instances offer an additional 10% cost savings over T3 instances.

2023-05-24 00:00:00
cost_savings

We cannot possibly know how these will perform for your workload. Rent both and test. Optimize according to your requirements: cost, performance, whatever.

2020-08-11 00:00:00
benchmarking, cost_savings

For T3 and T3a instance types Unlimited mode is turned on by default. This is excellent, as we remove the risk of a production outage, but gain the risk of increased costs.

2025-10-03 00:00:00
cost_savings
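As a related sketch (editorial, not from the comment above), the credit mode of a running T3/T3a instance can be inspected and changed with boto3; the instance ID is a placeholder:

```python
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical
ec2 = boto3.client("ec2")

# Check whether the instance is in "unlimited" (the T3/T3a default) or "standard" mode.
spec = ec2.describe_instance_credit_specifications(InstanceIds=[INSTANCE_ID])
print(spec["InstanceCreditSpecifications"][0]["CpuCredits"])

# Cap burst spend by switching to standard mode (accepting baseline throttling instead).
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[{"InstanceId": INSTANCE_ID, "CpuCredits": "standard"}]
)
```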

t3a.xlarge instance: with 4 vCPUs, I had much more CPU capacity and did not hit 100% CPU. I was using about 30 CPU credits per hour above the baseline, so I ended up with...

2020-02-10 00:00:00
cpu_credits, benchmarking
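Credit consumption like that described above can be tracked from CloudWatch. A minimal sketch, with a placeholder instance ID, reading the CPUCreditUsage and CPUCreditBalance metrics that EC2 publishes for burstable instances:

```python
from datetime import datetime, timedelta, timezone
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical
cw = boto3.client("cloudwatch")

for metric in ("CPUCreditUsage", "CPUCreditBalance"):
    resp = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=3),
        EndTime=datetime.now(timezone.utc),
        Period=300,                 # 5-minute granularity
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
    print(metric, [round(p["Average"], 2) for p in points])
```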

Thank you so much for your detailed and informative response! I really appreciate you taking the time to explain the considerations involved in changing from a t3.xlarge to a t3a.2xlarge instance.

2025-10-03 00:00:00
benchmarking

You can count t2 as an upgrade of t1. In general, t2 offers faster access to memory and disk compared to t1.

2022-07-23 00:00:00
memory_usage, benchmarking

Over the past few days, I’ve been analyzing the AWS EC2 nodes currently running across multiple Kubernetes clusters in different regions, with various instance types. Here are the memory capacity differences I’ve found:

Similar Instances to
t3a.2xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this t3a.2xlarge instance information page, please let us know.
