CUDA link, shared GPU
Would it be possible to get an API (or sources) to create a wrapper for CUDA, so that AI tasks (TensorFlow, PyTorch) could be distributed over your cable?
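To make the idea concrete: what I have in mind is something like an interposer library that intercepts CUDA runtime calls and forwards them over the link instead of executing them locally. The sketch below is only an assumption of how such a wrapper could start out, using LD_PRELOAD to intercept cudaMalloc(); the logging and the remoting comment are placeholders, not anything from your SDK.

```c
/* Hypothetical CUDA runtime interposer, built as a shared library and
 * loaded with LD_PRELOAD. It wraps cudaMalloc(), logs the call, and then
 * forwards it to the real implementation; a remoting layer would instead
 * serialize the request and send it across the cable.
 * All names here are illustrative, not a vendor API. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <cuda_runtime.h>

typedef cudaError_t (*cudaMalloc_t)(void **, size_t);

cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    /* Look up the real cudaMalloc from the CUDA runtime library. */
    static cudaMalloc_t real_cudaMalloc = NULL;
    if (!real_cudaMalloc) {
        real_cudaMalloc = (cudaMalloc_t)dlsym(RTLD_NEXT, "cudaMalloc");
        if (!real_cudaMalloc)
            return cudaErrorUnknown;
    }

    fprintf(stderr, "[wrapper] cudaMalloc(%zu bytes)\n", size);

    /* A remoting wrapper would marshal the call here and dispatch it to
     * the GPU on the far end of the link instead of calling locally. */
    return real_cudaMalloc(devPtr, size);
}
```

Built with something like `gcc -shared -fPIC -I/usr/local/cuda/include cuda_wrapper.c -o libcuda_wrapper.so -ldl` and injected via `LD_PRELOAD=./libcuda_wrapper.so`, this only works when the application links the CUDA runtime dynamically, which is why an official API or sources from your side would be much more robust.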