AWS p3.2xlarge EC2 Instance

GPU instance with 8 vCPUs, 61 GiB memory, and 1 NVIDIA V100 GPU with 16 GB memory. Designed for machine learning, high-performance computing, and computational finance.
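These specifications can be confirmed programmatically; a minimal sketch using boto3 (the region is an assumption; any region offering P3 instances works):

```python
# Minimal sketch: fetch p3.2xlarge hardware specs from the EC2 API with boto3.
# The region is an assumption; any region that offers P3 instances works.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
resp = ec2.describe_instance_types(InstanceTypes=["p3.2xlarge"])
info = resp["InstanceTypes"][0]

print(info["VCpuInfo"]["DefaultVCpus"])        # 8 vCPUs
print(info["MemoryInfo"]["SizeInMiB"])         # 62464 MiB (61 GiB)
gpu = info["GpuInfo"]["Gpus"][0]
print(gpu["Name"], gpu["MemoryInfo"]["SizeInMiB"])  # V100, 16384 MiB
```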

Pricing of p3.2xlarge

Pricing data for this instance: coming soon.

| Pricing Model | Price (USD) | % Discount vs On Demand |
|---------------|-------------|-------------------------|
| On Demand     | N/A         | N/A                     |
| Spot          | N/A         | N/A                     |
| 1 Yr Reserved | N/A         | N/A                     |
| 3 Yr Reserved | N/A         | N/A                     |
Spot Pricing Details for p3.2xlarge

Here are the latest spot prices for this instance across the region:

| Availability Zone | Current Spot Price (USD) |
|-------------------|--------------------------|

Frequency of Interruptions: N/A

Frequency of interruption represents the rate at which Spot reclaimed capacity during the trailing month, reported in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
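Current spot prices can also be pulled directly from the EC2 API; a minimal sketch using boto3 (the region and product description are assumptions):

```python
# Minimal sketch: query current p3.2xlarge spot prices per availability zone.
# Region and product description are assumptions.
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
resp = ec2.describe_spot_price_history(
    InstanceTypes=["p3.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc),  # returns the most recent price per AZ
)
for p in resp["SpotPriceHistory"]:
    print(p["AvailabilityZone"], p["SpotPrice"])
```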
Compute features of p3.2xlarge

| Feature | Specification |
|---------|---------------|

Storage features of p3.2xlarge

| Feature | Specification |
|---------|---------------|

Networking features of p3.2xlarge

| Feature | Specification |
|---------|---------------|

Operating Systems Supported by p3.2xlarge

| Operating System | Supported |
|------------------|-----------|

Security features of p3.2xlarge

| Feature | Supported |
|---------|-----------|

General Information about p3.2xlarge

| Feature | Specification |
|---------|---------------|
Benchmark Test Results for p3.2xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

| Encryption Algorithm | Speed (1024-byte blocks, 3 threads) |
|----------------------|-------------------------------------|
| AES-128 CBC          | 462.2 MB/s                          |
| AES-256 CBC          | 336.8 MB/s                          |
| MD5                  | 2.0 GB/s                            |
| SHA256               | 1.3 GB/s                            |
| SHA512               | 1.7 GB/s                            |
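Benchmarks in this style can be reproduced with OpenSSL's built-in speed test; a minimal sketch invoking it from Python (the exact flags Cloud Mercato used are not stated here, so "-multi 3" and the EVP cipher selection are assumptions matching the table header):

```python
# Minimal sketch: OpenSSL's speed test for AES-128-CBC across 3 parallel
# jobs, loosely matching the "1024-byte blocks, 3 threads" column above.
# Flags are assumptions; OpenSSL reports throughput for several block
# sizes, including 1024 bytes.
import subprocess

result = subprocess.run(
    ["openssl", "speed", "-evp", "aes-128-cbc", "-multi", "3"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # look for the 1024-byte-block column
```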
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD. Below are the results:

| Metric    | Read IOPS | Write IOPS |
|-----------|-----------|------------|
| Max       | 3162      | 3160       |
| Average   | 3160      | 3160       |
| Deviation | 0.56      | 0.3        |
| Min       | 3159      | 3159       |

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access to the root volume), and avoidance of cache and buffer.
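A minimal sketch of an fio invocation matching the parameters described above (4K blocks, random access, direct I/O to bypass cache and buffer); the device path, queue depth, and runtime are assumptions:

```python
# Minimal sketch: run fio with parameters matching the description above.
# Device path, queue depth, and runtime are assumptions, not Cloud Mercato's
# published job file. Requires fio to be installed on the instance.
import subprocess

subprocess.run(
    [
        "fio",
        "--name=randread",
        "--filename=/dev/nvme1n1",  # raw block device, no filesystem (assumed path)
        "--rw=randread",            # random access
        "--bs=4k",                  # 4K block size
        "--direct=1",               # bypass page cache and buffers
        "--ioengine=libaio",
        "--iodepth=32",             # assumption
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ],
    check=True,
)
```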

Community Insights for p3.2xlarge

AI-summarized insights

For pretraining a small model like nanoGPT on a p3.2xlarge instance, the AWS Deep Learning AMI is a suitable choice. It comes pre-configured with popular deep learning frameworks, NVIDIA drivers, and libraries, making it easy to get started.

19-03-2025
memory_usage
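Before launching a training run on such an AMI, it is worth verifying that the GPU stack is visible; a minimal sketch, assuming PyTorch (preinstalled on the Deep Learning AMI):

```python
# Minimal sketch: verify the GPU stack on a fresh instance before training.
# Assumes PyTorch is installed (it is preinstalled on the Deep Learning AMI).
import torch

assert torch.cuda.is_available(), "CUDA driver or toolkit not visible to PyTorch"
print(torch.cuda.get_device_name(0))                     # e.g. "Tesla V100-SXM2-16GB"
print(torch.cuda.get_device_properties(0).total_memory)  # ~16 GB, in bytes
```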

One thing to note - although the p3 isn’t much better than a 1080ti you can buy yourself, it’s _much_ better than the p2. So if you need to use AWS, and need to run big models quickly, a p3 is a good option.

17-10-2025
benchmarking

Don’t get suckered by AWS and Volta. The Amazon P3 instances (available Oregon) feature the latest DL GPU tech at $3.06 an hour (2xlarge) but PyTorch, TF, et al can’t utilize it fully yet.

17-10-2025
benchmarking

Testing new Tesla V100 on AWS. Fine-tuning VGG on DeepSent dataset for 10 epochs.

26-10-2017
benchmarking

If you really want to get into the black magic of speed-ups, these cards also feature full FP16 support, which means you can double your TFLOPS by dropping to FP16 from FP32.

26-10-2017
benchmarking
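The FP16 drop described above can be exercised directly; a minimal sketch using PyTorch half precision (the layer and batch sizes are placeholders, not from the original post):

```python
# Minimal sketch: run a forward pass in FP16 instead of FP32 on the V100.
# Layer and batch sizes are placeholders, not from the original post.
import torch

model = torch.nn.Linear(1024, 1024).cuda().half()  # cast weights to FP16
x = torch.randn(64, 1024, device="cuda", dtype=torch.float16)
with torch.no_grad():
    y = model(x)  # the matmul executes in FP16
print(y.dtype)    # torch.float16
```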

For anyone using the standard set of frameworks (Tensorflow, Keras, PyTorch, Chainer, MXNet, DyNet, DeepLearning4j, ...) this type of speed-up will likely require you to do nothing - except throw more money at the P3 instance :)

26-10-2017
memory_usage, benchmarking

Oh, and the V100 comes with 16GB of (faster) RAM compared to the K80's 12GB of RAM, so you win there too.

26-10-2017
memory_usage, benchmarking

P3 (V100) with single GPU: ~20 seconds per epoch

26-10-2017
benchmarking


I asked AWS to let me use a p3 instance. Their answer: no.

04-01-2018

After I built PyTorch from source, there's no initialization delay in conv_learner. It works smoothly.

29-11-2017
benchmarking

p3 instances were not showing up because of the AWS region I was assigned. I changed it to Oregon and that fixed the problem.

19-03-2018

The P3 is 800% faster than P2 for training with fastai!

29-11-2017
benchmarking

In my app, I repurposed a pre-trained vgg19 model. Inference time of one 256x256 color jpeg on p3.2xlarge with the Volta AMI was like 100 milliseconds or less.

30-10-2017
benchmarking
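A minimal sketch of how such an inference latency might be measured, assuming torchvision's pretrained vgg19 and a synthetic 256x256 input in place of the poster's app:

```python
# Minimal sketch: timing single-image VGG19 inference on the GPU.
# The pretrained torchvision model and synthetic input are stand-ins for
# the poster's app; timings vary with driver and framework versions.
import time

import torch
from torchvision.models import vgg19

model = vgg19(weights="IMAGENET1K_V1").cuda().eval()
x = torch.randn(1, 3, 256, 256, device="cuda")

with torch.no_grad():
    for _ in range(5):           # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    model(x)
    torch.cuda.synchronize()     # wait for the GPU before stopping the clock
print(f"{(time.perf_counter() - t0) * 1e3:.1f} ms")
```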

How about "NVIDIA Volta Deep Learning AMI" with p3.2xlarge (Tesla V100 GPU) instance?

28-10-2017
benchmarking

I'm having the same issue on p3.2xlarge instances.

19-03-2019

Similar Instances to p3.2xlarge

Feedback

We value your input! If you have any feedback or suggestions about this p3.2xlarge instance information page, please let us know.