A Geometric Algebra Implementation Using Binary Tree (archives-ouvertes.fr)
70 points by grondilu on April 28, 2017 | 6 comments



Yes! I've been looking for something like this for months now! A few weeks back, around the time this paper was published, I remember walking down the street one night wondering whether such a thing was possible: using some kind of tree structure to do efficient Clifford algebra products in higher dimensions. It must have been in the air :)

Consider that the Clifford algebra of a 15-dimensional vector space has dimension 2^15 = 32768, i.e. a basis of 32768 blades. Being able to do efficient calculations in arbitrary subspaces of such a huge algebra is amazing.
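To make the idea concrete, here's a toy sketch of my own (entirely hypothetical, not the paper's code; the paper's recursive scheme is more elaborate and handles the full geometric product): a multivector of Cl(n) stored as a depth-n binary tree, with the outer product computed by recursive descent. At depth 0 the tree is a scalar; at depth n it is a pair (a0, a1) standing for a0 + a1^e_n, with a0, a1 in Cl(n-1).

```python
def add(A, B, n):
    """Sum of two multivectors of Cl(n)."""
    if n == 0:
        return A + B
    return (add(A[0], B[0], n - 1), add(A[1], B[1], n - 1))

def neg(A, n):
    """Negation of a multivector of Cl(n)."""
    if n == 0:
        return -A
    return (neg(A[0], n - 1), neg(A[1], n - 1))

def ghat(A, n):
    """Grade involution: negate the odd-grade parts."""
    if n == 0:
        return A
    # a1^e_n has grade one higher than a1, hence the extra sign flip.
    return (ghat(A[0], n - 1), neg(ghat(A[1], n - 1), n - 1))

def wedge(A, B, n):
    """Outer product, via the split
       (a0 + a1^e)^(b0 + b1^e) = a0^b0 + (a0^b1 + a1^ghat(b0))^e."""
    if n == 0:
        return A * B
    a0, a1 = A
    b0, b1 = B
    return (wedge(a0, b0, n - 1),
            add(wedge(a0, b1, n - 1),
                wedge(a1, ghat(b0, n - 1), n - 1), n - 1))

# In Cl(2): trees of depth 2, split first on e2, then on e1.
e1 = ((0, 1), (0, 0))
e2 = ((0, 0), (1, 0))
print(wedge(e1, e2, 2))  # coefficient 1 on the e1^e2 leaf
print(wedge(e2, e1, 2))  # anticommutes: -(e1^e2)
print(wedge(e1, e1, 2))  # vanishes
```

The nice property is that a k-vector living in a low-dimensional subspace produces a mostly-zero tree, which a real implementation can prune instead of storing all 2^n coefficients.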

That said, higher dimensional Clifford algebras are related to lower ones by very well defined rules: https://en.wikipedia.org/wiki/Classification_of_Clifford_alg...


Strange to see an academic paper, without any context, make it onto the front page of HN. I didn't know that many people would be interested in geometric algebra.

The abstract was promising:

> This paper presents an efficient implementation of geometric algebra, based on a recursive representation of the algebra elements using binary trees. The proposed approach consists in restructuring a state of the art recursive algorithm to handle parallel optimizations.

Unfortunately I found the results and conclusion disappointing.

The method sounds interesting for the problem, but unfortunately there are no figures or tables presenting the results at all, and no mention of parallel optimizations (unless the SIMD optimization is what the authors refer to as parallel optimizations; I was expecting multi-core optimizations, or at least shared-memory multi-threading or GPU computing).

They focus heavily on theory, but only ~2 pages out of 21 are about results. I'm used to reading papers where the experimental section describes the setup (what hardware, etc.).

They give only small hints about the results, such as:

> The results show here an overall 25% relative gain over Gaigen mainly due to the SIMD implementation

But it's hard to wrap your head around this. Is it SIMD on a 4-core machine? 8 cores? 2 cores? Knowing the hardware setup would change my perception of the results. Especially since the prior art mentions "Gaalop", a CUDA/OpenCL implementation. Why is that not what they compare their results with?

The authors describe Gaalop as "the most recent efficient implementation of the geometric algebra", yet they seem to contradict themselves in the conclusion:

> We show that the proposed method is at least as fast as the main state of the art specialized geometric algebra implementations.

Is Gaigen or Gaalop the state of the art? I found this confusing.


There are lots of mathematically inclined people on HN, so it's not too surprising. It also wouldn't surprise me if there were more people like me: I have a romanticized idea of geometric algebra as a better way of understanding things I already know and things I'll someday learn, and I'll upvote/bookmark/save-for-later a lot of what I come across about it, on the chance I get the combination of time and motivation to overhaul my linear algebra knowledge and go really deep into geometric algebra, on both the analytical and computational sides.

Thanks for the summary.


Geometric algebra is one of my favorite-est things. For more info and history, check this out:

http://geometry.mrao.cam.ac.uk/


One of the drawbacks is that it's fairly slow computationally. I wrote a Perl 6 module[1], and I'll try to update it with this binary-tree method. Hopefully I'll see an improvement.

1. https://github.com/grondilu/clifford/


Why Perl? Just curious.



