This is pretty awesome work! Just a quick read so far, but their approach shows great results.
As recently announced at the HPEC 2020 GraphBLAS BOF, NVIDIA is contributing CUDA support to the SuiteSparse GraphBLAS library for sparse matrices. It would be interesting to see how the TCU work from this paper could influence that contribution!
Multiplying a sparse adjacency matrix by a frontier vector (over a suitable semiring) is the same operation as one step of a breadth-first search across a graph. This is why sparse matrix multiplication is so important to optimize.
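To make that concrete, here's a minimal sketch in plain Python of one BFS step as a matrix-vector product over the boolean (OR, AND) semiring; the 4-node graph is just a made-up example:

    # One BFS step: next frontier = frontier * A over the boolean
    # (OR, AND) semiring. Hypothetical 4-node directed graph.
    A = [[0, 1, 1, 0],   # edges 0->1, 0->2
         [0, 0, 0, 1],   # edge  1->3
         [0, 0, 0, 1],   # edge  2->3
         [0, 0, 0, 0]]
    frontier = [True, False, False, False]  # BFS starts at vertex 0

    def bfs_step(A, frontier):
        n = len(A)
        # next[j] is True if any current frontier vertex i has an edge i->j
        return [any(frontier[i] and A[i][j] for i in range(n))
                for j in range(n)]

    print(bfs_step(A, frontier))  # [False, True, True, False]

A real GraphBLAS BFS does exactly this, but with the multiply dispatched to an optimized sparse kernel and a mask to exclude already-visited vertices.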
Dense matrices, with their N^2 space requirements, simply can't hold or work with very large graphs, whereas sparse (order N) and hypersparse (<< order N) techniques in GraphBLAS mean much larger graphs can be stored.
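As a rough illustration of the gap, here's a sketch using scipy.sparse (the vertex and edge counts are arbitrary assumptions; real numbers depend on the graph):

    import numpy as np
    from scipy.sparse import csr_matrix

    n = 100_000              # vertices
    dense_bytes = n * n * 8  # dense float64 adjacency: ~80 GB, untenable

    # CSR storage grows with the number of edges, not n^2.
    # Hypothetical graph averaging ~10 edges per vertex:
    rows = np.random.randint(0, n, size=10 * n)
    cols = np.random.randint(0, n, size=10 * n)
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
    print(f"dense: {dense_bytes / 1e9:.0f} GB, "
          f"sparse: {sparse_bytes / 1e6:.0f} MB")

For this toy case the sparse representation is on the order of tens of megabytes versus tens of gigabytes dense.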
A great video on the mathematical foundations of the GraphBLAS can be seen here:
https://www.youtube.com/watch?v=gZSNp6XbOK8
And to plug my own work, here's an introductory video on using the GraphBLAS with Python:
https://www.youtube.com/watch?v=JUbXW_f03W0