We had a similar adventure implementing the next thing, which was Lorentz embeddings. Some equations were wrong in the paper and, AFAIK, have not been fixed in subsequent revisions.
@dchatterjee172 found a lot of the inconsistencies in the work.
We never implemented the Poincaré paper since by then Facebook had published its own code, but for the Lorentz paper there were a few equations which were wrong. I'll update the repo to point to those in a bit.
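For context, a minimal numpy sketch of the standard Lorentz-model formulas as I understand them from the Nickel & Kiela paper (the Minkowski inner product, the geodesic distance, and the lift onto the hyperboloid). Function names are my own and don't come from the repo, and this doesn't capture the specific errata mentioned above:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski (Lorentzian) inner product: <x, y>_L = -x0*y0 + sum_i xi*yi
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lorentz_distance(x, y):
    # Geodesic distance on the hyperboloid: d(x, y) = arccosh(-<x, y>_L)
    return np.arccosh(-lorentz_inner(x, y))

def lift_to_hyperboloid(v):
    # Lift a Euclidean vector v onto the hyperboloid by solving
    # <x, x>_L = -1 for the time-like coordinate x0 = sqrt(1 + ||v||^2).
    x0 = np.sqrt(1.0 + np.dot(v, v))
    return np.concatenate(([x0], v))

# Example: distance between two points lifted from 2-D Euclidean vectors
a = lift_to_hyperboloid(np.array([0.1, 0.2]))
b = lift_to_hyperboloid(np.array([-0.3, 0.05]))
print(lorentz_distance(a, b))
```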
Why isn't this being used heavily in other NLP frameworks? All the newest language models seem to be throwing more compute at transformer models instead of moving away from Euclidean space.
https://github.com/theSage21/lorentz-embeddings