IIRC Apache Arrow [1] promised a similar goal, and it seems to cover CUDA as well [2]. I wonder how these relate in the big picture. This one seems much simpler than Arrow, which is probably a good thing in terms of differentiation?
This one seems to be a very thin interface over raw buffers, with the CUDA runtime managing the hard parts of migrating data to the device(s) and back. It's pretty neat to offer natural interfaces to that sort of managed memory, but it's necessarily low-level: anything more expressive than (automagically-migrated) arrays of primitives is up to the programmer.
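For anyone who hasn't touched this part of CUDA: the "hard parts" being handled for you are basically unified (managed) memory. You allocate once with cudaMallocManaged and the runtime pages data to whichever side touches it next. A minimal CUDA C++ sketch, just to illustrate the idea (the kernel and sizes are made up for the example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: double each element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // Single allocation visible to both host and device; the CUDA runtime
    // migrates pages to wherever they are accessed next.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // touched on the host

    scale<<<(n + 255) / 256, 256>>>(data, n);     // migrated to the device
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);            // migrated back on access
    cudaFree(data);
    return 0;
}
```

Everything above the raw array, e.g. struct layout, ownership, lifetimes across streams, is still on you, which I think is the point the parent is making.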
I have this irrational reaction to any Apache-managed project: I instantly reject it. I've just been burned too many times by software that isn't ready for prime time.
The Apache Arrow project is led by the creator of pandas, one of the most important packages in the Python ecosystem. I agree Apache has some half-baked projects, but I think Arrow has the track record and backing to achieve its goals.
- [1] https://arrow.apache.org/
- [2] https://arrow.apache.org/docs/python/cuda.html