Hacker News
The Irish Logarithm (plover.com)
202 points by 082349872349872 12 months ago | 31 comments



A base 8 or 16 version of this might be useful for calculating products on old CPUs without a multiply instruction, without having unreasonably large tables.

One table-driven way of accelerating 8×8-bit multiplication with a table of modest size (e.g., 256 words) is https://dercuano.github.io/notes/multiplication-with-squares... -- however, since there are 17578 distinct products of two 8-bit numbers, even a full Irish Logarithm table would probably be infeasible to store in most systems where the technique would be relevant (you'd have used over 50% of a 16-bit address space on the table). And 6 table look-ups for 4 bits at a time is likely to be slower than 2 table look-ups for the 8-bit square table, even with the extra bit shifting required.
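The squares trick behind that note can be sketched in a few lines (a hypothetical illustration of the technique, not code from the linked page): since (a+b)² − (a−b)² = 4ab, a single table of ⌊n²/4⌋ turns any 8×8-bit multiply into two look-ups and a subtraction.

```python
# Quarter-square multiplication: a*b == (a+b)^2//4 - (a-b)^2//4 exactly,
# because a+b and a-b always have the same parity, so the two floors
# discard identical fractional parts. For 8-bit operands, a+b <= 510.
QUARTER_SQ = [n * n // 4 for n in range(511)]

def mul8(a, b):
    # Two table look-ups and one subtraction replace the multiply.
    return QUARTER_SQ[a + b] - QUARTER_SQ[abs(a - b)]
```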


Would it, though? The tables have 45 non-zero elements. The zeros aren't all regularly distributed, so you're probably looking at storing way more than that.

A full table with the answers stored directly is only 55 elements. Maybe I missed something but what's the point?

Does it scale better than N^2?


Yeah, I'd kinda talked myself out of the idea by the end of my own post.


There's a much simpler way to derive Ludgate's logarithms.

See this article:

https://treasures.scss.tcd.ie/miscellany/TCD-SCSS-X.20121208...

The Python code is old-fashioned, but the description of how the indices are chosen is much much more economical and straightforward than this blog post.


On the other hand, I find this blog post well-motivated: it explains the thought process leading up to an explanation of the tables, while that article just presents a block of Python code that I don't find straightforward at all. I imagine that if someone tried to explain from scratch how to come up with that code, it would actually be longer than this blog post.


The article doesn't just present the Python code. There is a figure on the previous page which spells out very clearly how the indices are derived. I'm going to stick to my guns and say that this blog post obfuscates matters.


I appreciate your comments! but could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


See the line “The tables look a little uncouth at first but it is not hard to figure out what is going on”: this blog post is about the author's thought process in figuring out what is going on, carrying the reader along. Such a figuring-out will often be messier, with occasional wrong turns, than presenting the end of the process as a fait accompli, but it can be more illuminating (like Euler's writing rather than Gauss's; Gauss was accused of being “like the fox, who effaces his tracks in the sand with his tail”). I wouldn't call it obfuscating.

But ah, +1, I see what you mean. I must confess I had only skimmed both articles earlier, but having thought about it now: while the blog post treats Table 1 as a trivial consequence and reverse-engineers Table 2, it is cleaner to do it the other way around (treating T2 as a consequence of T1 and reverse-engineering T1). I have done so, and ended up actually writing the “explain from scratch how to come up with that code” blog post: https://shreevatsa.net/post/irish-logarithm/ — and only after writing it out myself do I understand what Coghlan was getting at :) This is also basically what the posted blog post has, BTW — it's not really any harder; it's just that it's hard to follow someone else's reasoning in text, while it would be easier at a blackboard with pictures and gestures.
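For what it's worth, the forward construction can be sketched compactly (my paraphrase of the idea, not code from either article — and treat the specific index values, taken from secondary sources on Ludgate, as an assumption): every digit 2..9 factors into the primes 2, 3, 5, 7, so assigning an index to each prime and extending additively over factorizations makes the indices of two digits add up to a unique index for their product.

```python
# Indices for the primes; 1 maps to 0, and 0 gets a value (50) large
# enough that any sum involving it lands in the all-zeros tail of Table 2.
PRIME_INDEX = {2: 1, 3: 7, 5: 23, 7: 33}

def table1(n):
    # Index of a digit = sum of the indices of its prime factors.
    if n == 0:
        return 50
    idx = 0
    for p, ip in PRIME_INDEX.items():
        while n % p == 0:
            n //= p
            idx += ip
    return idx
```

Applied to the digits 0..9 this reproduces Ludgate's Table 1 as usually reported: [50, 0, 1, 7, 2, 23, 8, 33, 3, 14].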


Thanks very much for the followup article! I suspected that sksksnrpny was correct, and that the direct forward construction given in the paper they referred to would be simpler than the ad-hoc backtracking search I did. But I needed to work through an example to be sure, and now I don't have to because you've done it for me.

Also I'm delighted that you remembered that Matthew P. Wiener thing from 30 years ago.


Recent and related:

Percy Ludgate - https://news.ycombinator.com/item?id=37533487 - Sept 2023 (17 comments)


As an Irish person from Cork, I'm a bit embarrassed I'd never heard of him until now.

That said we're probably not great at acknowledging historical figures in the field. George Boole's house is three minutes' walk from my house and it's basically derelict (and was literally falling down at one point). It would be a wonderful building for a museum of his life and achievements.


To be fair, figures like Boole and Hamilton are very much celebrated within Irish academia. And it's not a new thing. I attended a conference on the legacy of Boole in 1995: https://www.maths.tcd.ie/pub/ims/bull33/bull33_3-3.pdf

Ludgate is a more marginal case because his invention didn't come to fruition. There is a bit of a sense in which he is being celebrated/hyped in a silly superficial ("collectively narcissistic") way intended to bolster self-esteem for Irish technical people. https://ingeniousireland.ie/2012/10/1909-a-novel-irish-compu...

Meanwhile significant people like Carew Meredith aren't on the public radar at all. https://en.wikipedia.org/wiki/Carew_Arthur_Meredith


I heard of him as Computer Science in TCD awards - or used to award - a prize for the best 4th year project called the Ludgate prize.

The problem is that he left no written body of work. I recall reading of a researcher who contacted his surviving relatives to enquire about any notes or materials he had left after his death but nothing had been preserved - I guess as his work was viewed as completely obscure at the time and he had achieved little recognition for it during his life. As far as I know what little we know of his proposed machine involves guesswork from very little material.


I'd never heard of him until I took a trip to Dublin and happened to stroll around random neighborhoods until I saw a plaque on a house celebrating his achievement!


There was a Zoom lecture on his contributions in 2021;

the slides may be of interest: https://treasures.scss.tcd.ie/miscellany/TCD-SCSS-X.20121208...

and a hub of related content: https://treasures.scss.tcd.ie/ludgate/


> Looking things up in a multiplication table is harder because it is two-dimensional.

But T2 is pretty large anyway, so why not just make T2 have 100 elements and index it by a*10+b (i.e., just concatenate the digits together)? This is faster because you don't have to add anything or look anything up in T1.

If the numbers are represented by the length of rods then this concatenation is not so easy, but neither is it in this algorithm:

> you have some thingy that slides up by T1(a) units, and then by T1(b) more, and then wherever it ends up is used to index into T2 to get the answer

So I would replace this with:

you have some thingy that slides up by a large units and then by b small units (where one large unit is 10 times longer than one small unit), and then wherever it ends up is used to index into T2 to get the answer
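To make the proposed alternative concrete, here is a sketch (hypothetical code, just to show how small the direct scheme is): the concatenated-digits table needs 100 entries and a single look-up, with no T1 stage at all.

```python
# Direct scheme: a flat 10x10 product table indexed by concatenating
# the two digits, i.e. index = a*10 + b.
PRODUCTS = [a * b for a in range(10) for b in range(10)]

def mul_direct(a, b):
    # One look-up; no additions beyond forming the index itself.
    return PRODUCTS[a * 10 + b]
```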


This is very cool, as are a lot of older algorithms, and some newer ones. Probably mechanical computers had lots of under-the-hood optimizations that would be interesting to dig into.

It reminds me of e.g. the CORDIC algorithm, used to calculate trig functions (needed by rocket guidance systems) with a combination of fixed-point arithmetic and table look-up. Doubtless there are many others.

CORDIC: https://en.wikipedia.org/wiki/CORDIC
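For reference, the core CORDIC iteration is only a few lines. Here is a floating-point sketch (real implementations use fixed-point integers, where the multiplications by 2^-i become bit shifts):

```python
import math

N = 32
# Precomputed rotation angles atan(2^-i) and the cumulative scale factor
# of the pseudo-rotations, which is applied up front.
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sincos(theta):
    # Rotation mode: drive the residual angle z toward zero by rotating
    # through the fixed angles; each step needs only shifts and adds.
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x  # (sin(theta), cos(theta)), valid for |theta| within ~1.74 rad
```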


One of my favorite optimizations was the https://en.wikipedia.org/wiki/Stern%E2%80%93Brocot_tree. One of the discoverers was a clockmaker who wanted to be able to produce simple gear ratios that would be very close to a precise real number.


Typo: ♣ uses T_1(a) twice, one should be T_1(b)


Fixed, thanks.


unrelated -- I stumbled across your blog years ago, before I started visiting HN, and have been quite a fan. Thanks for the great blog! I didn't realize you're active on HN, it's fun seeing you on here :)


Thanks for your kind words and for reading my blog!


I want to thank you for Higher Order Perl. It was the book that finally led me to understanding Lisp. :-)


Thank you for your kind words!


No problem, enjoyed your work.


Weren't logarithms originally developed by Napier and his successors specifically for the purpose of making multiplication easier? Ludgate's algorithm as written here looks a lot like that.


Logarithms allow approximately multiplying numbers with several significant figures. This method compresses exact single-digit multiplication into three exact lookups (with log tables you would instead look for nearby entries) and one addition. Also unlike logarithms, the intermediate integers are quite meaningless. They are a bit like logarithms but severely fiddled with.
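The "three exact look-ups and one addition" can be checked in a few lines, using the index values usually reported for Ludgate's Table 1 (treat those specific numbers as an assumption from secondary sources):

```python
# Ludgate's Table 1 ("Irish logarithms") for digits 0..9, as usually reported.
T1 = [50, 0, 1, 7, 2, 23, 8, 33, 3, 14]

# Build Table 2 so that T2[T1[a] + T1[b]] == a * b; indices run 0..100,
# and the same index is only ever assigned equal products.
T2 = [0] * 101
for a in range(10):
    for b in range(10):
        T2[T1[a] + T1[b]] = a * b

def mul_digits(a, b):
    # Two T1 look-ups, one addition, one T2 look-up.
    return T2[T1[a] + T1[b]]
```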


Seems a bit like a hand trained neural net overfitted on multiplication of single digit numbers.


Ludgate's logarithms allow multiplication by addition of indices, but they are not derived from a base, and the indices are all whole numbers. It is an essentially empirical system, unlike the comparable Zech/Jacobi logarithms which are based on number theory.


Seems like a sort of hashed lookup table, rather than a straight algorithm - but pretty cool, nonetheless.


A hashing algorithm before random-access memory! What a surprise. (This algorithm dates back to 1909.)



