Cloud Mercato tested CPU performance using a range of encryption speed tests.
Cloud Mercato tested the I/O performance of this instance using a 100GB General Purpose SSD volume.
I/O rate testing is conducted with local and block storage attached to the instance. Cloud Mercato uses the well-known open-source tool FIO. To express IOPS, the following parameters are used: 4K block size, random access, no filesystem (except for write access with the root volume), and avoidance of cache and buffer.
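For reference, an IOPS measurement matching those parameters can be reproduced with an fio command along these lines (a sketch only: the device path, queue depth and runtime are assumptions, since the exact job file is not published here):

# Random-read IOPS on the raw block device, 4K blocks, bypassing cache and buffers.
# /dev/xvdf is a placeholder for the attached 100GB General Purpose SSD volume.
sudo fio --name=randread-4k --filename=/dev/xvdf --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based \
    --group_reporting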


When I run the network, image processing takes about 30 seconds, which is way too long compared to the 1-2 seconds the same network takes on a TITAN X.

Same issue here:

$ nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

$ lsmod | grep nvidia
(no output)

$ dmesg | grep "Linux version"
[    0.000000] Linux version 4.4.0-1077-aws (buildd@lcy01-amd64-021) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10) ) #87-Ubuntu SMP Wed Mar 6 00:03:05 UTC 2019 (Ubuntu 4.4.0-1077.87-aws 4.4.170)

The Deep Learning Ubuntu AMI family you are using autonomously performs updates in the background. In the case you and others in this thread described, the unattended upgrade must have included a newer kernel which became active when the instance was rebooted.
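If you want to keep using that AMI, one workaround (a sketch, not an official AWS fix; it assumes the driver is packaged via DKMS and the Ubuntu 16.04 -aws kernel shown in the dmesg output above) is to rebuild the module against the new kernel, or to pin the kernel packages so unattended upgrades can't swap them out:

# Rebuild and load the NVIDIA module against the currently running kernel
sudo apt-get install -y linux-headers-$(uname -r)
sudo dkms autoinstall
sudo modprobe nvidia

# Or stop unattended upgrades from installing new kernels in the first place
sudo apt-mark hold linux-aws linux-image-aws linux-headers-aws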

I'm having the same issue: "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver"

Basically, after shutdown and reboot, the instance no longer has the nvidia module loaded in the kernel. Furthermore, according to dmesg, there seems to be a different kernel loaded. All of this happens without me actively causing it.

AWS has launched a new family of Elastic Compute Cloud (EC2) instance types called P2. Backed by the Tesla K80 GPU line from Nvidia, the new P2 instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads.

Is that a good deal for $1/hour? (I'm not sure if a p2.large instance corresponds to use of one K80 or half of it.) How much would it cost to "train" ImageNet using such instances? Or perhaps another standard DNN task for which the data is openly available?

This is great - we'll try to get our Tensorflow and Caffe AMI repo updated soon:

My first thought: "I wonder what the economics are like, re: cryptocurrency mining?"
My second thought: "I wonder if Amazon use their 'idle' capacity to mine cryptocurrency?"

I can start g2 but not p2 instances (limit 0). My AWS usage is very low, but Amazon has charged my credit card a few times, so I guess I am a paying customer (not using the free tier).

Fuck bitcoins. I want to know how many electric sheep [1] this beast can breed per second!

You will need time to buy that GPU, upgrade your PC, realise that it doesn't fit into your existing case and that you need a better power supply. Then some time to find a better place to be than sitting next to that really loud and hot PC. And it all optimistically takes days, while you can start computing on EC2 within an hour.

I question the number of people that would be buying these for both research AND gaming.

By the way, I've owned a 1080 for two weeks now and I can't overstate how powerful this thing is. Even if you are not into gaming, getting a 1080 is still well worth considering if you want to experiment with deep learning.

While you're right that buying your own GPU makes a lot of sense for personal use or small tasks, if you factor in the cost of keeping an ML team waiting for the network to finish training, then it might be cheaper to invest $7000 and have it run in a few hours instead of weeks.

Using spot pricing you tend to get it far cheaper than that. I was using the previous GPU instances a few months ago, which are meant to be $0.65 an hour. But using spot pricing, setting the maximum to about $50 so I wouldn't get shut down in the middle of training, I seemed to spend an average of about $0.20 an hour.
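For anyone who wants to try the same thing, a spot request with a high price ceiling can be placed from the AWS CLI roughly like this (the AMI ID and key name are placeholders; you still pay the much lower market spot price, the ceiling only protects against mid-training termination):

# One-time spot request for a single p2.xlarge with a $50/hour ceiling
aws ec2 request-spot-instances \
    --spot-price "50.00" \
    --instance-count 1 \
    --type "one-time" \
    --launch-specification '{"ImageId": "ami-xxxxxxxx", "InstanceType": "p2.xlarge", "KeyName": "my-key"}'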

Still seems much better to buy your own GTX 1080 for the $700 you would have spent in a month of playing with parameters on these instances.

You can set up the volume on a CPU-only machine (even on a free-tier instance), and then just launch that volume with a machine that has these big, expensive GPUs on it.
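A rough sketch of that workflow with the AWS CLI (instance and volume IDs are placeholders, and the detach/attach route is just one way to interpret "launch that volume"): prepare the EBS volume on the cheap CPU instance, then move it to a GPU instance only when you need the horsepower.

# Move the prepared data/code volume from the CPU instance to the GPU instance
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb \
    --instance-id i-0cccccccccccccccc --device /dev/sdf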

$0.9 per K80 GPU per hour, while expensive, opens up so many opportunities - especially when you can get a properly connected machine.