I was working on the Udacity Image Classification project (following the instructions from this topic) and kept running into Resource Exhaustion errors while using Jupyter Notebook in a TensorFlow 0.12 + GPU environment. Whenever I tried more than 3 layers in the TensorFlow graph, I would get this error.
The largest architecture I was able to use was two convolution layers (32x32x32 and 32x32x128) and one fully connected layer (32x32x300).
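For context, here is a minimal sketch of roughly what that graph looks like in the TF 0.12-style API. Only the depths 32/128 and the 300 come from the architecture above; the kernel sizes, strides, padding, and reading the FC layer as 300 units are assumptions for illustration:

```python
import tensorflow as tf

# Sketch of the described graph (TF 0.12-style API). Kernel sizes,
# strides, and padding are assumptions; only the depths 32/128 and
# the 300-unit fully connected layer come from the post.
x = tf.placeholder(tf.float32, [None, 32, 32, 3])

w1 = tf.Variable(tf.truncated_normal([3, 3, 3, 32], stddev=0.1))
conv1 = tf.nn.relu(tf.nn.conv2d(x, w1, strides=[1, 1, 1, 1], padding='SAME'))

w2 = tf.Variable(tf.truncated_normal([3, 3, 32, 128], stddev=0.1))
conv2 = tf.nn.relu(tf.nn.conv2d(conv1, w2, strides=[1, 1, 1, 1], padding='SAME'))

flat = tf.reshape(conv2, [-1, 32 * 32 * 128])
w3 = tf.Variable(tf.truncated_normal([32 * 32 * 128, 300], stddev=0.1))
fc = tf.nn.relu(tf.matmul(flat, w3))
```

If my arithmetic is right, the FC weight matrix alone is 32 * 32 * 128 * 300 ≈ 39M floats, about 150 MB in float32, and that multiplies once gradients, optimizer state, and per-batch activations are added.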
On my laptop, in a VMware Ubuntu machine with 12 GB of RAM allocated to it, I was able to run bigger, more complex architectures with more and larger layers without running out of memory. However, VMware does not give me access to the GPU, so FloydHub was far faster.
The GPU environment is supposed to have 61 GB of RAM and 12 GB of GPU RAM, so I'm not sure why I would hit this error in such a powerful environment.
The questions I have are these:
1. When using TensorFlow in a FloydHub GPU environment, do the whole graph and solution get loaded into GPU RAM, so that all I effectively have available is the 12 GB of GPU RAM? (See the session sketch after these questions.)
2. If not, and TensorFlow is also using the 61 GB of system RAM to hold the graph, what are the possible reasons I got the Resource Exhaustion error in Jupyter Notebook?
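For what it's worth, one thing I haven't tried yet is configuring the session's GPU options. As far as I know, TensorFlow 0.12 preallocates nearly all GPU memory by default, so a minimal sketch like the following (standard ConfigProto options, nothing project-specific) should at least show where ops are placed and cap the allocation:

```python
import tensorflow as tf

# Standard TF 0.12 session options (assumption: by default TF
# preallocates nearly all GPU memory up front).
config = tf.ConfigProto(log_device_placement=True)  # print op placement
config.gpu_options.allow_growth = True              # allocate lazily
# Alternatively, hard-cap the GPU allocation:
# config.gpu_options.per_process_gpu_memory_fraction = 0.9

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
```

With log_device_placement on, it should at least be clear whether the graph lives entirely in GPU RAM (question 1) or some ops fall back to the CPU and host RAM.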
Unfortunately, I deleted the experiment, but I can recreate it if needed to troubleshoot this.
Please help!
I didn't find the right solution on the Internet.
References
https://forum.floydhub.com/t/resource-exhaustion-error/44