Newton-Raphson doesn't buy you much for logarithms, because the iteration itself involves either exp(x) or log(x)[1].
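To make that concrete: applying N-R to f(y) = exp(y) - x gives the update y -> y - 1 + x*exp(-y), so every step pays for a full exp evaluation. A minimal sketch (the function name, initial guess, and iteration count are mine):

```c
#include <math.h>

/* Newton-Raphson for log: find the root of f(y) = exp(y) - x.
 * The update y - f(y)/f'(y) simplifies to y - 1 + x*exp(-y),
 * so each iteration costs an exp evaluation anyway. */
double log_nr(double x, double y0)
{
    double y = y0;               /* initial guess, e.g. from a crude approximate log */
    for (int i = 0; i < 4; i++)  /* quadratic convergence from a decent guess */
        y = y - 1.0 + x * exp(-y);
    return y;
}
```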
If you don't have a real multiplier[2], then the algorithm described is somewhat attractive, because multiplying by 1/(1+2^-k) has a nice series expansion. You can do it using only shifts and adds (and the number of terms you need is roughly p/k for p bits of precision, so it falls off reasonably quickly as k grows), and you get the part you need to compute the next `k` early, so there's no serial dependency. Actually finding k is just a count-leading-zeros operation, which is cheap enough.
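Here's a sketch of that shift-and-add scheme, under some assumptions of mine: Q2.30 fixed point, input already reduced to [1, 2), natural log, a table of log(1+2^-k) values (filled in with libm here purely for brevity; in the no-multiplier setting it would be a precomputed ROM table), and GCC/Clang's `__builtin_clz` for the count-leading-zeros step:

```c
#include <stdint.h>
#include <math.h>

#define FRAC 30
#define ONE  (1u << FRAC)              /* 1.0 in Q2.30 */

static uint32_t LOG_TBL[FRAC + 1];     /* LOG_TBL[k] = log(1 + 2^-k) in Q2.30 */

static void init_tbl(void)             /* would be a precomputed table in practice */
{
    for (int k = 1; k <= FRAC; k++)
        LOG_TBL[k] = (uint32_t)lrint(log1p(ldexp(1.0, -k)) * ONE);
}

/* log(x) for x in [1, 2), Q2.30 in and out, using only shifts and adds.
 * Each round divides out a factor (1 + 2^-k) via the series
 * 1/(1+2^-k) = 1 - 2^-k + 2^-2k - ... and accumulates log(1+2^-k). */
uint32_t fixlog(uint32_t x)
{
    uint32_t acc = 0;
    while (x > ONE) {
        uint32_t m = x - ONE;                    /* fractional part, 0 < m < 1 */
        int k = __builtin_clz(m) - (31 - FRAC);  /* 2^-k <= m < 2^-(k-1) */
        if (k > FRAC)
            break;                               /* remainder below precision */
        uint32_t y = 0;                          /* x * (1 - 2^-k + 2^-2k - ...) */
        for (int s = 0, neg = 0; s <= FRAC; s += k, neg ^= 1)
            y = neg ? y - (x >> s) : y + (x >> s);
        x = y;                                   /* k strictly increases each round */
        acc += LOG_TBL[k];
    }
    return acc + (x - ONE);                      /* log(1+m) ~ m for the tiny tail */
}
```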
Computers today pretty much all have real multipliers, so no one does this. Instead we reduce to a range like [1/sqrt(2),sqrt(2)], and then either use a minimax approximation on that interval (if the accuracy requirement is reasonably low) or further reduce by looking up a value r close to x for which we have a precomputed 1/r and log(r) and take advantage of:
log(x) = log(r * 1/r * x) = log(r) + log(1/r * x)
This allows one to achieve high-accuracy results if r is chosen so that one of 1/r and log(r) is exact and the other is unusually close to the exact value[3], or stored in extended precision.
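Here's a sketch of the table-driven variant, with details that are mine rather than from any particular libm: reduction to [1, 2) instead of [1/sqrt(2), sqrt(2)] to keep the exponent handling trivial, a 64-entry table built at runtime (a real implementation precomputes it, choosing each r per [3]), and a crude truncated series standing in for a real minimax polynomial:

```c
#include <stdint.h>
#include <string.h>
#include <math.h>

#define N 64
static double INVC[N], LOGC[N];   /* ~1/r and log(r) for r near 1 + (i+0.5)/N */

static void init_tables(void)     /* precomputed in a real implementation */
{
    for (int i = 0; i < N; i++) {
        double r = 1.0 + (i + 0.5) / N;   /* midpoint of the i-th mantissa bin */
        INVC[i] = 1.0 / r;
        LOGC[i] = log(r);
    }
}

double log_sketch(double x)       /* assumes x positive, finite, normal */
{
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    int e = (int)(u >> 52) - 1023;        /* x = m * 2^e with m in [1, 2) */
    int i = (int)(u >> 46) & (N - 1);     /* top 6 mantissa bits pick the bin */
    u = (u & 0x000fffffffffffffULL) | 0x3ff0000000000000ULL;
    double m;
    memcpy(&m, &u, sizeof m);             /* mantissa rescaled into [1, 2) */
    double t = m * INVC[i] - 1.0;         /* |t| < 2^-7; log(m) = log(r) + log(1+t) */
    double p = t * (1.0 + t * (-0.5 + t * (1.0 / 3.0)));  /* crude log1p poly */
    return e * 0x1.62e42fefa39efp-1 + LOGC[i] + p;        /* e*ln2 + log(r) + log(m/r) */
}
```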
[1] If you have a fast approximate exp and log you can make use of them in an approximate N-R iteration, but it still doesn't buy you much. In practice no one does this; we compute exp and log directly, and they're among the fastest functions in the math library.
[2] Meaning either you don't have one at all or that it's much, much slower than addition, as was common in the 1980s and earlier.
[3] This trick is called "Gal's Accurate Tables" (https://en.wikipedia.org/wiki/Gal's_accurate_tables), but as with most things named for someone, it was independently invented multiple times, and likely dates back decades before the credited inventor.
IIUC you're speculating that the relevant constraint they faced was that multiplication was very burdensome. I agree; it seems plausible this was the reason they found this an attractive approach.
Also agreed about Newton-Raphson; that was a thinko on my part. It would be an unusual situation if NR were the right tool for estimating log.