AWS

a1.2xlarge

EC2 Instance

Arm-based instances powered by AWS Graviton processors, offering significant cost savings for scale-out workloads, with 4x the resources of an a1.large instance. This instance type is designed to provide a balanced combination of compute, memory, and network resources.

Coming Soon...

Pricing of
a1.2xlarge

On Demand: N/A

Spot: N/A

1 Yr Reserved: N/A

3 Yr Reserved: N/A

Pricing Model
Price (USD)
% Discount vs On Demand
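The pricing rows for this instance are currently N/A, but the "% Discount vs On Demand" column is simple arithmetic once prices are available. A minimal sketch, using hypothetical hourly rates (not real a1.2xlarge prices):

```python
def discount_vs_on_demand(on_demand: float, price: float) -> float:
    """Percent discount of a pricing model relative to the On Demand rate."""
    return round((1 - price / on_demand) * 100, 1)

# Hypothetical hourly prices, for illustration only.
on_demand = 0.2040
print(discount_vs_on_demand(on_demand, 0.0612))   # e.g. a Spot-style price
print(discount_vs_on_demand(on_demand, 0.1285))   # e.g. a Reserved-style price
```

The same formula applies to Spot, 1 Yr Reserved, and 3 Yr Reserved rows alike.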
Spot Pricing Details for
a1.2xlarge

Here are the latest prices for this instance across this region:

Availability Zone Current Spot Price (USD)
Frequency of Interruptions: n/a

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month, reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
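Given per-AZ spot prices like the table above, picking the cheapest Availability Zone is a one-liner. The prices below are hypothetical; live data for this instance can be pulled with the AWS CLI's `aws ec2 describe-spot-price-history --instance-types a1.2xlarge`:

```python
# Hypothetical per-AZ spot prices in USD/hr (illustration only).
spot_prices = {
    "us-east-1a": 0.0701,
    "us-east-1b": 0.0648,
    "us-east-1c": 0.0663,
}

# The AZ with the lowest current spot price.
cheapest_az = min(spot_prices, key=spot_prices.get)
print(cheapest_az, spot_prices[cheapest_az])
```

Note that the cheapest AZ is not always the best choice: interruption frequency (above) also varies by pool.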
Compute features of
a1.2xlarge
Feature Specification
Storage features of
a1.2xlarge
Feature Specification
Networking features of
a1.2xlarge
Feature Specification
Operating Systems Supported by
a1.2xlarge
Operating System Supported
Security features of
a1.2xlarge
Feature Supported
General Information about
a1.2xlarge
Feature Specification
Benchmark Test Results for
a1.2xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm  Speed (1024-byte blocks, 3 threads)
AES-128 CBC  352.3 MB/s
AES-256 CBC  261.1 MB/s
MD5  1.0 GB/s
SHA256  3.0 GB/s
SHA512  818.9 MB/s
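Figures like the hash rows above typically come from `openssl speed`-style tests. A rough single-threaded Python analogue using the standard library (numbers will not match the multi-threaded results above; this only illustrates the methodology):

```python
import hashlib
import time

def hash_throughput_mb_s(algo: str, block_size: int = 1024,
                         duration: float = 0.5) -> float:
    """Feed fixed-size blocks to a hash for `duration` seconds, return MB/s."""
    block = b"\x00" * block_size
    h = hashlib.new(algo)
    processed = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        h.update(block)
        processed += block_size
    return processed / duration / 1e6  # bytes/s -> MB/s

for algo in ("md5", "sha256", "sha512"):
    print(f"{algo}: {hash_throughput_mb_s(algo):.0f} MB/s")
```

Absolute results depend heavily on block size, thread count, and whether the CPU has hardware crypto extensions, which is why benchmark conditions are stated alongside the numbers.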
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

          Read (IOPS)  Write (IOPS)
Max       3101         3099
Average   3099         3098
Deviation 0.57         0.57
Min       3099         3097

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access on the root volume), and avoidance of cache and buffer.
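The Max/Average/Deviation/Min rows in the table above are plain descriptive statistics over repeated FIO runs. A small sketch with hypothetical per-run read IOPS samples showing how such a summary is derived:

```python
import statistics

# Hypothetical per-run read IOPS samples (illustration only).
read_iops = [3099, 3100, 3101, 3099, 3098, 3099]

summary = {
    "Max": max(read_iops),
    "Average": round(statistics.mean(read_iops), 2),
    "Deviation": round(statistics.stdev(read_iops), 2),  # sample std dev
    "Min": min(read_iops),
}
print(summary)
```

A low deviation relative to the average, as in the table above, indicates the volume delivers very consistent IOPS.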

Community Insights for
a1.2xlarge

While the chart above is a good start, there’s more than simply considering “Reserved vs. On Demand”. So let’s take a closer look at all the options…

19-03-2025
cost_savings

EC2 instances are themselves virtual machines (VMs) running on Amazon's infrastructure. They don't have a physical BIOS that you can directly access.

19-03-2025

An EC2 instance is already virtualized, so you generally cannot run another hypervisor in one. If your applications need access to Intel VT or AMD-V extensions, then you have the option in AWS to use a bare-metal instance type.

19-03-2025

Since you're already in a virtualized environment, enabling hardware virtualization within the EC2 instance's BIOS wouldn't be applicable. This functionality is managed by the underlying hardware on which AWS runs EC2 instances.

19-03-2025

If you are looking to run a hypervisor on an EC2 instance, and then run VM guests inside the EC2 instance then generally that's not possible.

19-03-2025

How come the "ECU" ratings aren't listed for the A1 types in the pricing page? In particular I'm wondering how the vCPU's compare to the equivalent m5 instances. I.e. a1.xlarge and m5.xlarge both have 4 vCPU's, though the a1 has half the memory. So for tasks that aren't memory bound, would these have similar performance?

27-11-2018
memory_usage, benchmarking

Performance : Love the CPU Outputs and the virtualization templates! 8/10

21-12-2022
benchmarking

Just created a new instance. Processor speed is 2297 MHz.

27-11-2018
benchmarking

EC2 instances are priced according to instance type, regardless of the number of CPUs enabled. Disabling vCPUs does not change the cost of the instance type.

19-03-2021
cost_savings

It’s true. Also note that you get a physical core for each vCPU with t2. I’ve benchmarked c5 and t3 against t2 and have found single thread perf to be better on c5 compared to t2. When loading all vCPUs the performance suffered on the newer instances (each HT performed worse than a single t2), so you would get more bang for your buck on t2. YMMV of course.

27-11-2018
benchmarking

GHz doesn't matter anyways. In general nothing matters since threads can be individually throttled without you noticing (apart from declining performance).

27-11-2018
benchmarking

Just to clarify, if you read below, you'll find that vCPUs on the A1 instances are each physical ARM cores.

27-11-2018
graviton

vCPUs are not at all equivalent across instance types! One vCPU is a single hardware thread on an actual core (ie, two vCPUs is a full core of whatever the underlying machine is if it is an SMT2 core). So the power of a vCPU is quite different across different hardware types. You might be thinking of ECU (EC2 compute unit), which are intended to be comparable across hardware and are normalized to some old 1.7 GHz CPU that I guess was common in the early days of EC2. Amazon doesn't promote the ECU rating for instance types much, but it's still available if you look.

27-11-2018
graviton

Right, I didn't mean to imply otherwise - but I can see how what I wrote could come off that way. One vCPU is a single hardware thread, which means that on cores without SMT, one vCPU == one core. Since the A72 doesn't have SMT, you get one core per vCPU. So in that sense, you are getting more bang for your buck if you just count physical cores: twice as many cores per vCPU... FWIW, I tried to look up the ECU rating of the new ARM instances but they are listed as "NA" in the EC2 console.

27-11-2018

But clock speed doesn't really mean anything these days either. 1 GHz is just as arbitrary - the instruction rate through two CPUs at the same clock rate can vary massively. So a vCPU is an arbitrary unit but as each is equivalent then they can be benchmarked for the relevant workload.

27-11-2018
graviton

Very cool. As others point out, it does seem a bit expensive, but I do find it impressive that AWS has added AMD and ARM offerings in such a short period. Forgetting the idea of using ARM servers purely for the potential cost savings, I feel this could be a boon to those wanting to run ARM CI builds on actual hardware instead of Qemu.

27-11-2018
graviton

Do hosting companies no longer give you the speed of a processing core? I couldn't find anywhere that explains how fast one of the ARM cores goes. Is it 1 GHz or 100 MHz? Seems like that would be quite important. 1 vCPU seems like a bit of an arbitrary figure - that could be 1 vCPU that goes at 4 GHz, or 2 vCPUs that only go at 1 GHz. I feel like there's information missing.

27-11-2018
graviton


The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instance types. They are suited for scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments.

Amazon EC2 is introducing instances that are powered by CPUs custom built by Amazon on the Arm architecture.

Are you running Lambda functions on those?

ECU ratings are based on an ancient x86 benchmark, and don't really compare well with each other anymore, let alone with totally different architectures. I think they've been trying to retire it for years. I'm disappointed that they aren't retiring the vCPU designation, too. ARM doesn't have anything like hyperthreading, so an ARM vCPU seems to equal a complete core. The a1.xl has 4 cores, 4 threads. The m5.xl has 2 cores, 4 threads.

The docker image they release for the on-host agent [0] doesn't seem to have an arm tag. That seems to point towards it being unlikely. I'll also point out that most docker images you build/run today are x86 and so won't run on arm machines anyway.

A vCPU on an A1 instance is a physical Arm core. There is no SMT (multi-threading) on A1 instances. In my experience on the platform, the performance is quite good for traditional cloud applications that are built on open source software, especially given the price. Since the Arm architecture is quite different than x86, we always recommend testing the performance with your own applications. There's really no substitute for that.

Yes T2 performs quite admirably! If your CPU-bound workload doesn't use much memory, smaller sizes of T2 Unlimited can be a much better deal than C4. T3 gives you both threads of your physical core and commensurably more CPU credits per hour to utilize them. You may notice that even T3.nano offers 2 vCPUs where T2.small has 1 vCPU.

It is in the AWS documentation: "Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances."

With AWS, I found this info from Tableau recently: An AWS vCPU is a single hyperthread of a two-thread Intel Xeon core for M5, M4, C5, C4, and R4 instances. A simple way to think about this is that an AWS vCPU is equal to half a physical core. Therefore, when choosing an Amazon EC2 instance size, you should double the number of cores you have purchased or wish to deploy with.

That depends on what you're paying for. If you're using Lightsail or one of the t-series EC2 instances you'll definitely get throttled as you don't get dedicated hardware. All other EC2 instance types give you dedicated hardware, on the Intel instances you get one hardware thread per vCPU and on the ARM instances you get a physical core. Those instance types don't throttle.
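As the comments above note, whether a vCPU is a full core (as on A1/Graviton) or an SMT hardware thread (as on most x86 instance types) can be checked from inside the instance. A best-effort sketch reading the topology that the Linux kernel exposes in /proc/cpuinfo (on Arm instances that file has no "siblings" field, so the function falls back to 1, i.e. one full core per vCPU):

```python
import os

def smt_threads_per_core() -> int:
    """Best-effort threads-per-core on Linux; 1 means each vCPU is a full core."""
    try:
        with open("/proc/cpuinfo") as f:
            info = f.read()
    except OSError:
        return 1  # non-Linux host; no topology info available this way
    siblings = cores = None
    for line in info.splitlines():
        if line.startswith("siblings"):
            siblings = int(line.split(":")[1])
        elif line.startswith("cpu cores"):
            cores = int(line.split(":")[1])
        if siblings and cores:
            break
    return siblings // cores if siblings and cores else 1

print(f"logical CPUs: {os.cpu_count()}, threads per core: {smt_threads_per_core()}")
```

On an SMT2 x86 instance this typically reports 2 threads per core; on A1 it reports 1, matching the claim that each A1 vCPU is a physical Arm core.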

Details are there in appropriate pages:

> C5 instances feature the Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo CPU clock speed of up to 3.4GHz, and single core turbo up to 3.5 GHz using Intel Turbo Boost Technology. C5 instances offer higher memory to vCPU ratio and deliver 25% improvement in price/performance compared to C4 instances, with certain applications delivering greater than 50% improvement. C5 instances provide support for the new Intel Advanced Vector Extensions 512 (AVX-512) instruction set, offering up to 2x the FLOPS per core per cycle compared to the previous generation C4 instances.

Saving 40% of your AWS bill by simply switching instances types would be a major win.

Cons : low compatibility with open-source projects, no windows instances 4/10

Pricing : Heck Yeah! I love it 10/10

In my opinion, using an AWS A1 host is super efficient for CPU-based workloads with attention to resource allocation and the ARM64 architecture, so if you are going to run function-based infrastructure and host your NoSQL databases elsewhere - yup! This is a perfect host for you to use.

Coming to the #1 requirement in one of the projects, which was a MongoDB & ElasticSearch deployment - and here is where things got *** 😫

My infrastructure consisted of the following: A1 - host for Docker-based microservices, guarded by Kong gateway and a Traefik load balancer, running 740 microservices. To say the least, I am running on a 290% CPU allocation and 150% CPU usage with bursting in some periods, and the host is enjoying it! Memory allocation is amazing for this deployment and hasn't failed once in months!

A1 - host for databases based on Docker containers 😧, yeah, I know 😥. MariaDB and PostgreSQL were AMAZING! Some maintenance from time to time on PG, but other than that, all is fine and great!
