
>...any algorithm that doesn't have general consensus among experts that it is unlikely to be broken in 100 years.

How could anyone, expert or not, possibly know this? You would have to predict advances in computing technology, advances in algorithms, and the synergy between the two.




For advances in computing, "double the performance every 18 months" would seem like a good upper bound.
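
For a rough sense of what that bound implies, here's a back-of-the-envelope sketch in Python (assuming, purely for illustration, that the 18-month doubling actually held for a full century):

    # ~66.7 doublings in 100 years at one doubling per 18 months,
    # i.e. roughly a 2^67 (~10^20) speedup -- about 67 bits of
    # security margin eaten by hardware alone under this bound.
    years = 100
    doublings = years / 1.5
    speedup = 2 ** doublings
    print(f"{doublings:.1f} doublings -> ~{speedup:.2e}x faster")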

Algorithm advances are harder to predict, but one approach is to choose something dependent on a provably hard mathematical problem. Another is a construction that remains secure as long as any one of a set of base encryption/hashing primitives remains unbroken (so an attacker has to break MD5, BLAKE, Whirlpool, and SHA256 all at the same time to break your hash). One could probably draw some kind of 'survival' curve to estimate how long a given algorithm will stand up before failure, and from that how long a set of, say, 10 or 20 algorithms would survive.
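
A minimal sketch of the "concatenate several primitives" idea using Python's hashlib (Whirlpool is omitted because it isn't in the standard library; the particular primitives here are just illustrative):

    import hashlib

    def combined_hash(data: bytes) -> bytes:
        # Concatenate digests from independent hash functions. A collision
        # in the combined output requires a simultaneous collision in every
        # component, so collision resistance holds as long as at least one
        # primitive remains unbroken.
        return b"".join([
            hashlib.md5(data).digest(),
            hashlib.blake2b(data).digest(),
            hashlib.sha256(data).digest(),
        ])

    print(combined_hash(b"example").hex())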


>For advances in computing, "double the performance every 18 months" would seem like a good upper bound.

But not a realistic one past the mid-'00s. With the end of Dennard scaling[1], that sort of exponential performance increase is no longer available. At this point a fundamental breakthrough (quantum computing, for example) would be required to get a large and cheap increase in computing capability. That puts the hardware side in more or less the same situation as the algorithmic side: a fundamental breakthrough is required, which doesn't help with the uncertainty here. This stuff is more or less impossible to predict.

[1] https://www.extremetech.com/computing/116561-the-death-of-cp...


If the algorithm were based on a theoretically hard mathematical problem, perhaps...



