
As a first attempt, I guess you'd create a table of 5th powers of integers through (say) 1000, then look for combinations of two, three, or four entries from that table that added up to another entry. Or more likely, walk through the array and subtract all combinations of one, two, three, or four elements from each entry, looking for zeroes. The most naive search through such a table could take on the order of a trillion trials to find a counterexample, which is pretty painful, but I'm sure there are plenty of ways to optimize it. Especially if you worked from high to low, you could skip enormous chunks of the search space as soon as the running remainder went out of range.
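
A minimal sketch of that pruned search, assuming the usual statement of the problem (a^5 + b^5 + c^5 + d^5 = e^5, all positive): work from the largest e downward, subtract fifth powers, and abandon a branch as soon as the remainder can no longer be reached. The bound of 150 and the names here are my own choices; the known counterexample 27^5 + 84^5 + 110^5 + 133^5 = 144^5 lies just inside it.

  LIMIT = 150
  p5 = [n**5 for n in range(LIMIT + 1)]
  root = {p5[n]: n for n in range(1, LIMIT + 1)}   # fifth power -> its root

  for e in range(LIMIT, 4, -1):                    # work from high to low
      for d in range(e - 1, 0, -1):
          rd = p5[e] - p5[d]                       # need a^5 + b^5 + c^5 = rd
          if rd > 3 * p5[d]:
              break                                # smaller d only gets worse
          for c in range(d, 0, -1):
              rc = rd - p5[c]                      # need a^5 + b^5 = rc
              if rc < 0:
                  continue                         # c still too large
              if rc > 2 * p5[c]:
                  break                            # c (and anything smaller) too small
              for b in range(c, 0, -1):
                  rb = rc - p5[b]                  # need a^5 = rb
                  if rb < 0:
                      continue
                  if rb > p5[b]:
                      break
                  if rb in root and root[rb] <= b:
                      print(root[rb], b, c, d, "->", e)

The break conditions are what "skip enormous chunks": once the remainder overshoots what the remaining terms could possibly supply, the whole rest of that loop is abandoned.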

The CDC 6600 was a pretty hardcore machine; it was Seymour Cray's first design at his Chippewa Falls skunk works, according to Wikipedia. Would definitely be interesting to know how long their solution took.




The naive way:

- compute S0 = {n^5 | 1 < n <= 1320} (3960 multiplications)

- compute S1 = {n^5 | 1 < n <= 1000} (zero multiplications)

- compute S2 = {s+t | s,t ∈ S1} (500,500 additions)

- compute S4 = {s+t | s,t ∈ S2} (around 10^11 additions)

- compute the intersection of S0 with S4

You don't have to store S4, so 2 megabytes of memory or so should suffice. 10^11 operations were likely out of reach for the CDC 6600, though.
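
A scaled-down sketch of those steps, with the bounds shrunk from 1000/1320 to 135/180 (my choice) so it finishes in a minute or so of plain Python; the known solution 27^5 + 84^5 + 110^5 + 133^5 = 144^5 still fits inside them. As noted, S4 is never stored: each pair-of-pairs sum is tested against S0 on the fly.

  N, E = 135, 180                                  # 180^5 > 4 * 135^5

  S0 = {n**5: n for n in range(2, E + 1)}          # e^5 -> e
  S1 = [n**5 for n in range(1, N + 1)]
  S2 = [(S1[a - 1] + S1[b - 1], a, b)              # pair sums, a <= b
        for a in range(1, N + 1) for b in range(a, N + 1)]

  for i in range(len(S2)):
      s, a, b = S2[i]
      for j in range(i, len(S2)):                  # walks S4 one element at a time
          t, c, d = S2[j]
          if s + t in S0 and b <= c:               # b <= c keeps one pairing per quadruple
              print(a, b, c, d, "->", S0[s + t])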

To speed things up, do the above for numbers up to 100 or so using modulo arithmetic. That gives you candidate solutions modulo your word size, so you don't have to compute large parts of S4.
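
A rough sketch of that filter, under assumptions of my own: m stands in for the word size, the bounds stay shrunk as above, and the lookup is organized around e^5 - (a^5 + b^5) rather than walking all of S4, which is my shortcut and not part of the comment. Candidates that match modulo m are then confirmed with exact arithmetic.

  m = 2**20                                        # hypothetical "word" modulus
  N, E = 135, 180

  p5 = [n**5 for n in range(E + 1)]

  S2 = {}                                          # residue of a^5 + b^5 -> list of (a, b)
  for a in range(1, N + 1):
      for b in range(a, N + 1):
          S2.setdefault((p5[a] + p5[b]) % m, []).append((a, b))

  for e in range(2, E + 1):
      for r, left in S2.items():
          need = (p5[e] - r) % m                   # word-sized candidate test
          if need in S2:
              for a, b in left:
                  for c, d in S2[need]:
                      # exact arithmetic weeds out the modular false positives
                      if b <= c and p5[a] + p5[b] + p5[c] + p5[d] == p5[e]:
                          print(a, b, c, d, "->", e)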


I computed an array of fifth powers, up to 200, then had four nested loops. Using python3 on a 2.66-GHz Core 2 Duo, it took 48 seconds to find the reported numbers. If there had been no solutions, it would have taken only a couple of seconds longer.
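
A guess at what that program might have looked like; the comment only mentions the array of fifth powers up to 200 and four nested loops, so the hoisted partial sums and the dictionary membership test below are assumptions of mine.

  LIMIT = 200
  p5 = [n**5 for n in range(LIMIT + 1)]
  root = {p5[n]: n for n in range(1, LIMIT + 1)}   # fifth power -> its root

  for a in range(1, LIMIT + 1):
      pa = p5[a]
      for b in range(a, LIMIT + 1):
          pab = pa + p5[b]
          for c in range(b, LIMIT + 1):
              pabc = pab + p5[c]
              for d in range(c, LIMIT + 1):
                  s = pabc + p5[d]
                  if s in root:                    # is the sum itself a fifth power?
                      print(a, b, c, d, "->", root[s])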


That's modern hardware, where recomputing n^5 + m^5 when needed may on average be faster than doing the lookup with its risk of a cache miss.

That CDC 6600 dates from a time when register access wasn't much faster than reading from memory, if it was faster at all (https://en.m.wikipedia.org/wiki/CDC_6600#Description: "For instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two"), and when integer multiplication took more than one cycle (https://en.m.wikipedia.org/wiki/CDC_6600#Description: "in addition to the clock being faster, the simple processor executed instructions in fewer clock cycles; for instance, the CPU could complete a multiplication in ten cycles").


I've actually written FORTRAN on a CDC 6600, at Dalhousie University. The algorithm was probably written in assembler (COMPASS) for speed. The 6600 had 60-bit registers, advanced for its time. The cabinets had some real cool glass panels. The console was two futuristic vector CRTs.



