Any updates on GPU-accelerated TensorFlow support?

I’ve seen quite a few posts in the past just from searching ‘tensorflow’ on this forum, and the replies unanimously suggest that it is not yet fully working.

In particular, I set up Datalore Enterprise with Docker and a GPU-enabled agent just today, and I noticed that with PyTorch my local GPU works fine (at least it is detected; I haven’t done much testing yet), whereas TensorFlow could not detect the GPU at all.
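For reference, the detection check I ran was roughly along these lines (both calls are standard public APIs; nothing here is Datalore-specific):

import torch
import tensorflow as tf

print(torch.cuda.is_available())               # PyTorch: True if a CUDA device is visible
print(tf.config.list_physical_devices('GPU'))  # TensorFlow: empty list means no GPU detected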

As a result, I’m wondering whether proper TensorFlow GPU support is still in active development?

Cheers!


I have the same issue with Datalore and TensorFlow.
At the moment I’m using JupyterHub, which supports GPU + TF.

Is there any hint on how to get this working?

I got TF working a while ago, although I don’t think I made any tweaks; it just magically started using the GPU.

Very strange, I’ll investigate more when I have the time.

I’m very interested in how you got this working.

I really don’t know. This environment.yml makes PyTorch, TensorFlow, and CuPy all work for me:

datalore-env-format-version: "0.2"
datalore-package-manager: "conda"
datalore-base-env: "minimal"
dependencies:
- cupy
# cudatoolkit supplies the CUDA runtime libraries the conda packages link against
- cudatoolkit
- seaborn
- tensorflow
- pytorch
- scikit-learn
- pip:
  # torchvision is installed from pip rather than conda
  - torchvision
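For what it’s worth, a quick sanity check that all three frameworks can see the GPU in this environment (again, only standard public APIs) would be something like:

import cupy as cp
import tensorflow as tf
import torch

print(torch.cuda.is_available())               # PyTorch: True if CUDA is usable
print(tf.config.list_physical_devices('GPU'))  # TensorFlow: non-empty list if a GPU is visible
print(cp.cuda.runtime.getDeviceCount())        # CuPy: number of visible CUDA devices

My guess is that the explicit cudatoolkit entry is what matters here, since all three frameworks depend on the same CUDA runtime, but I haven’t verified that.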