
It looks like this specific algorithm can easily be deterministically reversed: the XORs and bitwise shifts mean that, given four output numbers, you can completely determine the state the generator was in before those numbers were produced. In fact, there is probably a simple series of bit shifts and XORs you can perform on the last four outputs that produces the next number.
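
To make that concrete, here is a minimal sketch in Python, assuming the generator is Marsaglia's xorshift128 (the thread doesn't name the algorithm, so that's an assumption): after four steps the internal state is exactly the last four outputs, so the next output follows from a fixed series of shifts and XORs of those four numbers.

    MASK = 0xFFFFFFFF  # work in 32-bit words

    def xorshift128_step(state):
        # one step of Marsaglia's xorshift128 (assumed generator); returns (output, new state)
        x, y, z, w = state
        t = (x ^ (x << 11)) & MASK
        w_new = ((w ^ (w >> 19)) ^ (t ^ (t >> 8))) & MASK
        return w_new, (y, z, w, w_new)

    def predict_next(o1, o2, o3, o4):
        # after emitting o1..o4 the state is exactly (o1, o2, o3, o4),
        # so the next output is just shifts and XORs of the last four outputs
        out, _ = xorshift128_step((o1, o2, o3, o4))
        return out

    state = (123456789, 362436069, 521288629, 88675123)  # arbitrary nonzero seed
    outputs = []
    for _ in range(10):
        out, state = xorshift128_step(state)
        outputs.append(out)

    for i in range(4, 10):
        assert predict_next(*outputs[i - 4:i]) == outputs[i]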

Bit shifts and XORs are very much the kind of pattern neural networks should excel at learning - they mean that each output bit is a simple linear function of the input bits.
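
To see the bit-level claim concretely, here is another small sketch, again assuming a xorshift128-style step (my assumption, not stated in the thread): each output bit is just the XOR of a fixed subset of the state bits, and you can recover which subset by probing the step with one-hot states.

    MASK = 0xFFFFFFFF

    def step_output(state):
        # next output of the assumed xorshift128-style step, as a function
        # of the 128-bit state (four 32-bit words)
        x, y, z, w = state
        t = (x ^ (x << 11)) & MASK
        return ((w ^ (w >> 19)) ^ (t ^ (t >> 8))) & MASK

    def state_from_int(bits):
        # unpack a 128-bit integer into four 32-bit words
        return tuple((bits >> (32 * i)) & MASK for i in range(4))

    # probe with one-hot states: entry j records which output bits flip
    # when input bit j flips
    columns = [step_output(state_from_int(1 << j)) for j in range(128)]

    def output_from_columns(bits):
        out = 0
        for j in range(128):
            if (bits >> j) & 1:
                out ^= columns[j]  # XOR together the contributing entries
        return out

    import random
    random.seed(1)
    s = random.getrandbits(128)
    assert output_from_columns(s) == step_output(state_from_int(s))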

Actually doing it is still a good demonstration! But the fact that the function in question is used as a PRNG doesn’t imply that other PRNGs would be susceptible to a similar approach.




XOR is not linear in the inputs. I mention this because Minsky and Papert's book Perceptrons used that fact to argue that perceptrons could never work as AI. That pretty much started the first AI winter in 1973.

It's an interesting story: https://towardsdatascience.com/history-of-the-first-ai-winte...
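
As a quick illustration of that point (using scikit-learn's single-layer Perceptron, which is my choice of tool, not something from the linked article): a single linear threshold unit can't classify all four XOR points, because they aren't linearly separable.

    import numpy as np
    from sklearn.linear_model import Perceptron

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])  # XOR of the two inputs

    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(clf.score(X, y))  # never 1.0: at best 3 of the 4 points are correct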


Linearity is actually relative to the algebra you are using. XOR is not linear in real arithmetic, but it is linear over GF(2) (quick sketch below). This is useful because some of the same algorithms work across many algebras and so apply to different problems, e.g. the tropical algebra.

But you are quite correct about the AI winter.
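
A tiny sketch of the GF(2) point: treating XOR of bit vectors as vector addition over GF(2), a parity (all-bits XOR) map is additive, even though the same map is clearly not linear over the reals (1 XOR 1 = 0, while 1 + 1 = 2).

    def parity(bits):
        # XOR of all the bits, i.e. their sum mod 2
        acc = 0
        for b in bits:
            acc ^= b
        return acc

    def vec_xor(a, b):
        # "a + b" in GF(2)^n
        return [x ^ y for x, y in zip(a, b)]

    a = [1, 0, 1, 1]
    b = [0, 1, 1, 0]
    # additivity over GF(2): parity(a + b) == parity(a) + parity(b)
    assert parity(vec_xor(a, b)) == parity(a) ^ parity(b)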



