Not always, though it might depend on which platform you mean by retro. Kaze Emanuar on YT does a lot of development for the N64, and it feels like half the time he talks about how the memory bus affects all kinds of optimizations. In Mario 64 he replaced the original lookup table for the sine function with an approximation because it was faster and accurate enough (or rather, two approximations for two different purposes).
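For anyone curious what "replace the LUT with math" can look like, here's a minimal sketch of the general idea only, not Kaze's actual routine: the classic "parabolic" sine approximation that shows up a lot in demo/game code.

    #include <math.h>

    /* Cheap sine approximation, valid for x in [-pi, pi].
     * Max error is roughly 0.001 after the refinement pass. */
    static float fast_sin(float x)
    {
        const float PI = 3.14159265f;
        float y = (4.0f / PI) * x - (4.0f / (PI * PI)) * x * fabsf(x);
        const float P = 0.225f;                /* empirical refinement weight */
        return P * (y * fabsf(y) - y) + y;     /* second pass sharpens the peaks */
    }

A few multiplies and an fabsf, no memory traffic at all, which is exactly the trade-off in question.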
Back when I started I thought I would make games. I used a lookup table for cos/sin stored as integers; I only needed enough precision for rotation at 320x240. It was something like 20-30 cycles faster per pixel, even more if you didn't have an FP co-processor.
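Something in this spirit (a rough sketch of the idea, not my original code; table size and fixed-point format are made up for illustration):

    #include <stdint.h>
    #include <math.h>

    /* Sine table in 8.8 fixed point, 256 angle steps per full turn,
     * built once at startup so there's no floating point at runtime. */
    static int16_t sin_tab[256];

    static void init_sin_tab(void)
    {
        for (int i = 0; i < 256; i++)
            sin_tab[i] = (int16_t)lround(sin(i * 2.0 * 3.14159265358979 / 256.0) * 256.0);
    }

    /* Rotate a point around the origin using only integer math;
     * cos is just sin shifted by a quarter turn (64 of 256 steps). */
    static void rotate(int x, int y, uint8_t angle, int *rx, int *ry)
    {
        int s = sin_tab[angle];
        int c = sin_tab[(uint8_t)(angle + 64)];
        *rx = (x * c - y * s) >> 8;
        *ry = (x * s + y * c) >> 8;
    }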
By retro platform GP meant the Atari ST, the Commodore Amiga and the like: LUTs were the name of the game for everything back then. Games, intros/cracktros/demos.
Heck, even sprite movements often weren't done with math but with precomputed tables, storing movements as "pixels per frame" deltas (a toy sketch below).
It worked particularly well on those platforms because they had relatively large amounts of RAM compared to their slow CPUs, and RAM accesses weren't as taxing as they are today (these CPUs didn't have L1/L2/L3 caches).
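A toy illustration of the precomputed-movement idea (hypothetical path values, not from any actual game):

    #include <stdint.h>

    /* Per-frame (dx, dy) deltas baked into a table and replayed
     * each frame, so there's no movement math at runtime. */
    typedef struct { int8_t dx, dy; } Step;

    static const Step swoop_path[] = {
        { 2, -1 }, { 2, -1 }, { 2, 0 }, { 2, 1 },
        { 1,  2 }, { 0,  2 }, { -1, 2 }, { -2, 1 },
    };

    static void move_sprite(int *x, int *y, unsigned frame)
    {
        const Step *s = &swoop_path[frame % (sizeof swoop_path / sizeof swoop_path[0])];
        *x += s->dx;
        *y += s->dy;
    }

In practice the tables were authored offline or dumped from a tool, then just indexed by frame counter.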
The speedup from the approximation wasn't that much, if anything; he made his real improvements elsewhere. But your point stands that memory speed really has moved the goalposts on what is feasible to speed up with precalculated tables, and if you can do it with math instead, that is often much faster.
I love that channel, he reworked the entire Mario 64 code [0] to make it run at stable 60FPS...because he wanted his mods to run faster.
[0] https://www.youtube.com/watch?v=t_rzYnXEQlE