AWS

p2.xlarge

EC2 Instance

Previous generation GPU compute instance with 4 vCPUs, 61 GiB memory, and 1 NVIDIA K80 GPU. Designed for machine learning, high-performance computing, and computational finance.
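These specifications can also be confirmed from the AWS API itself; a minimal AWS CLI sketch (the query expression below is only illustrative):

  aws ec2 describe-instance-types \
    --instance-types p2.xlarge \
    --query 'InstanceTypes[0].{vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB,GPUs:GpuInfo.Gpus}' \
    --output table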


Pricing of p2.xlarge

Pricing Model     Price (USD)     % Discount vs On Demand
On Demand         N/A             N/A
Spot              N/A             N/A
1 Yr Reserved     N/A             N/A
3 Yr Reserved     N/A             N/A

Spot Pricing Details for p2.xlarge

Here are the latest prices for this instance across this region:

Availability Zone    Current Spot Price (USD)

Frequency of Interruptions: N/A

Frequency of interruption represents the rate at which Spot has reclaimed capacity during the trailing month. It is expressed in ranges of <5%, 5-10%, 10-15%, 15-20%, and >20%.

Last Updated On: December 17, 2024
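Spot prices per Availability Zone can also be pulled directly with the AWS CLI; a minimal sketch (the region and product description below are assumptions, adjust them to your environment):

  aws ec2 describe-spot-price-history \
    --instance-types p2.xlarge \
    --product-descriptions "Linux/UNIX" \
    --start-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
    --region us-east-1 \
    --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' \
    --output table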
Compute features of p2.xlarge
Feature    Specification
Storage features of p2.xlarge
Feature    Specification
Networking features of p2.xlarge
Feature    Specification
Operating Systems Supported by p2.xlarge
Operating System    Supported
Security features of p2.xlarge
Feature    Supported
General Information about p2.xlarge
Feature    Specification
Benchmark Test Results for p2.xlarge
CPU Encryption Speed Benchmarks

Cloud Mercato tested CPU performance using a range of encryption speed tests:

Encryption Algorithm    Speed (1024 Block Size, 3 threads)
AES-128 CBC             N/A
AES-256 CBC             N/A
MD5                     N/A
SHA256                  N/A
SHA512                  N/A
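Cloud Mercato does not publish the exact commands used here; a comparable run with OpenSSL's built-in benchmark (3 threads; the 1024-byte block size appears in its standard output) might look like this:

  # Cipher throughput via the EVP interface, 3 parallel threads
  openssl speed -multi 3 -evp aes-128-cbc
  openssl speed -multi 3 -evp aes-256-cbc
  # Digest throughput for the hash results above
  openssl speed -multi 3 md5 sha256 sha512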
I/O Performance

Cloud Mercato tested the I/O performance of this instance using a 100 GB General Purpose SSD. Below are the results:

            Read (IOPS)    Write (IOPS)
Max         3109           3099
Average     3098           3095
Deviation   4.54           1.79
Min         3093           3091

I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
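Based on that description, the FIO invocation would look roughly like the following sketch (the device name, I/O depth, and runtime are assumptions, not Cloud Mercato's published parameters):

  # 4K random read against a raw block device, bypassing page cache and filesystem
  fio --name=randread --filename=/dev/xvdf --rw=randread --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based --group_reporting
  # Repeat with --rw=randwrite for the write figures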

Community Insights for p2.xlarge
AI-summarized insights

When I am using the network, image processing takes about 30 seconds, which is way too long compared to the same network on a TITAN X, where it took 1-2 seconds.

26-10-2017
benchmarking

Same issue here:
nvidia-smi: NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
lsmod | grep nvidia: (no output)
dmesg | grep "Linux version": [0.000000] Linux version 4.4.0-1077-aws (buildd@lcy01-amd64-021) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10)) #87-Ubuntu SMP Wed Mar 6 00:03:05 UTC 2019 (Ubuntu 4.4.0-1077.87-aws 4.4.170)

19-03-2019

The Deep Learning Ubuntu AMI family you are using autonomously performs updates in the background. In the case you and others in this thread described, the unattended upgrade must have included a newer kernel which became active when the instance was rebooted.

19-03-2019

I'm having the same issue: "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver".

19-03-2019

Basically, after shutdown and reboot, the instance no longer has the nvidia module loaded in the kernel. Furthermore, according to dmesg, there seems to be a different kernel loaded. All of this happens without me actively causing it.

19-03-2019
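The thread above points to unattended kernel upgrades as the cause. One possible remediation, not taken from the thread itself, is to rebuild the NVIDIA module for the running kernel and stop the kernel from changing underneath it; a hedged sketch for Ubuntu-based Deep Learning AMIs (package names can differ between AMIs):

  # Confirm the module is missing and see what DKMS knows about the driver
  lsmod | grep nvidia
  dkms status
  # Rebuild any registered DKMS modules (including the NVIDIA driver) for the current kernel
  sudo dkms autoinstall
  # Optionally hold the kernel package and turn off unattended upgrades to avoid a repeat
  sudo apt-mark hold linux-aws
  sudo dpkg-reconfigure -plow unattended-upgrades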


AWS has launched a new family of Elastic Compute Cloud (EC2) instance types called P2. Backed by the Tesla K80 GPU line from Nvidia, the new P2 instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

Converge360
2016-04-10

Is that a good deal for $1/hour? (I'm not sure if a p2.xlarge instance corresponds to use of one K80 or half of it.) How much would it cost to "train" ImageNet using such instances? Or perhaps another standard DNN task for which the data is openly available?

30-09-2016
cost_savings

This is great - we'll try to get our Tensorflow and Caffe AMI repo updated soon:

30-09-2016

My first thought: "I wonder what the economics are like, re: cryptocurrency mining?" My second thought: "I wonder if Amazon uses their 'idle' capacity to mine cryptocurrency?"

30-09-2016

I can start g2 but not p2 instances (limit 0). My AWS usage is very low, but Amazon has charged my credit card a few times, so I guess I am a paying customer (not using the free tier).

30-09-2016

Fuck bitcoins. I want to know how many electric sheep [1] this beast can breed per second!

30-09-2016

You will need time to buy that GPU, upgrade your PC, realise that it doesn't fit into your existing case and that you need a better power supply. Then some time to find a better place to be than sitting next to that really loud and hot PC. It all takes days, optimistically, while you can start computing on EC2 within an hour.

30-09-2016

I question the number of people who would be buying these for research AND gaming.

30-09-2016

By the way, I have owned a 1080 for two weeks now and I can't overstate how powerful this thing is. Even if you are not into gaming, getting a 1080 is still a considerable option if you want to experiment with deep learning.

30-09-2016

While you're right that buying your own GPU makes a lot of sense for personal use or small tasks, if you factor in the cost of keeping a ML team waiting for the network to finish training, then it might be cheaper to invest $7000 and have it run in a few hours instead of weeks.

30-09-2016
cost_savings

Using spot pricing you tend to get it far cheaper than that. I was using the previous GPU instances a few months ago, which are meant to be $0.65 an hour. But using spot pricing, setting the maximum to about $50 so I wouldn't get shut down in the middle of training, I seemed to spend an average of about $0.20 an hour.

30-09-2016
cost_savings
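For reference, capping the spot price the way this commenter describes maps onto a standard run-instances call; a minimal sketch (the AMI ID and maximum price are placeholders):

  aws ec2 run-instances \
    --instance-type p2.xlarge \
    --image-id ami-xxxxxxxx \
    --count 1 \
    --instance-market-options 'MarketType=spot,SpotOptions={MaxPrice=0.30,SpotInstanceType=one-time}'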

It still seems much better to buy your own GTX 1080 for $700, which is what you would have spent in a month playing with parameters on these instances.

30-09-2016
memory_usage, cost_savings

You can set up the volume on a CPU-only machine (even on a free instance), and then just launch that volume with a machine with these big, expensive GPUs on it.

30-09-2016
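The volume-swapping approach described above corresponds to ordinary EBS operations; a hedged sketch (the volume ID, instance ID, and device name are placeholders):

  # Detach the prepared data volume from the cheap CPU-only instance
  aws ec2 detach-volume --volume-id vol-0123456789abcdef0
  # Attach it to the GPU instance once you are ready to train
  aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0123456789abcdef0 --device /dev/sdf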

$0.9 per K80 GPU per hour, while expensive, opens up so many opportunities - especially when you can get a properly connected machine.

30-09-2016
Similar Instances to p2.xlarge

Consider these:

Feedback

We value your input! If you have any feedback or suggestions about this p2.xlarge instance information page, please let us know.
