Mystery Math Whiz and Novelist Advance Permutation Problem (quantamagazine.org)
343 points by beefman on Nov 6, 2018 | 80 comments



I’m one of the people who have worked on this problem, and I'm quoted in the article. Anyone who is seriously interested in this problem may join the group[0] where most of the interested people hang out.

We’re making progress pretty rapidly at the moment. The lower bound has recently been improved (by one), and Greg Egan’s construction has been shown to be the best possible under certain conditions.

I think it’s reasonably likely – though definitely not certain – that we will have a complete solution within weeks.

0. https://groups.google.com/forum/?nomobile=true#!forum/superp...


I played around with a similar one I found on Reddit's "daily programmer" subreddit a few months back, but never found a great approach other than brute force (along with trying to identify unique/expensive sequences).

Do you see it as a variation on the one in the article?

https://www.reddit.com/r/dailyprogrammer/comments/8rcjx0/201...

"Given an input set of strings, produce an output string. Every string in the input must be an anagram of some slice of the output. A slice in this context is a series of characters from the string separated by a fixed amount (i.e. anything that can be formed using Python's s[a:b:c] syntax). It's different from a substring in that you're allowed to skip characters, as long as you skip the same number of characters on each step."


Interesting problem! It’s a variant of the Shortest Superstring Problem, which is known to be NP-hard. I bet this variant is NP-hard as well.

Obviously that doesn’t mean we can’t find nice algorithms that work well in particular cases, but I bet it’s hard to prove that a solution is the best possible.

What makes the minimal superpermutation problem different is that we’re trying to pack together the set of ALL permutations of some number of symbols, so the set of strings has a lot of known symmetry and structure we can take advantage of. In this case I hope it will be possible to find a general solution that we can prove is the best possible.


Cool!

In the article they say "The factorial rule predicts that to watch six episodes in every possible order should require 873 episodes, but Houston found a way to do it in 872. And since there is a simple way to convert a short superpermutation on n symbols into a short superpermutation on n + 1 symbols, Houston’s example meant that the factorial rule fails for every value of n above 6 too."

I don't quite get this. Is there an inductive proof you can use as a construction to reduce the length for subsequent N? Would love to see a bird's-eye view of the intuition here.


Yes, there’s a recursive construction that converts a superpermutation of length l on n symbols to a superpermutation of length l + (n+1)! on (n+1) symbols.

The construction is described in various places online. I like Greg Egan’s description here: http://www.gregegan.net/SCIENCE/Superpermutations/Superpermu...
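
For anyone who wants to play with it, here's a rough Python sketch of the standard construction as I understand it: list the permutations of 1..n in order of first appearance, expand each p to p + (n+1) + p, and chain the chunks with maximal overlap:

  def extend(s, n):
      # superpermutation on 1..n  ->  superpermutation on 1..n+1,
      # adding exactly (n+1)! symbols
      new = str(n + 1)
      order, seen = [], set()
      for i in range(len(s) - n + 1):
          w = s[i:i + n]
          if len(set(w)) == n and w not in seen:   # first occurrence of a permutation
              seen.add(w)
              order.append(w)
      out = ""
      for p in order:
          chunk = p + new + p
          k = next(j for j in range(min(len(out), len(chunk)), -1, -1)
                   if out.endswith(chunk[:j]))     # maximal suffix/prefix overlap
          out += chunk[k:]
      return out

  s = "1"
  for n in range(1, 4):
      s = extend(s, n)       # "1" -> "121" -> "123121321" -> ...
  print(len(s))              # 33, which is the minimal length for n = 4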

Sorry for the delayed reply. I was stymied by my aggressive noprocrast setting.


That's my lunch time reading sorted, thanks!


If you have the smallest superpermutation for N digits, then you can construct a bigger one by adding the extra digit with a simple rule, but it won't necessarily be the shortest possible.

It just turned out that the conjecture was easily disproved by that simple rule construction.


I'm not familiar with this particular problem, but can ideas from De Bruijn Sequences be applied?

https://en.wikipedia.org/wiki/De_Bruijn_sequence


The article doesn't focus much on the 4chan aspect of the story, but I think it's interesting to realize that your only real currency on anonymous message boards is the quality and content of your post at that moment in time. In the real world, or anywhere online where you have a reputation, that reputation can give you credibility when credibility is not deserved. It also attaches an ego dynamic to everything you say, and every time you're called out. With anonymity, that's all gone. If your ideas stand, it has nothing to do with your reputation. And if you're shown to be wrong, it's much easier to change your position and adjust your worldview.


   real currency on anonymous 
   message boards is the quality 
   and content of your ...
I suggest generalising this great insight: anonymity, and the requirement that one's arguments be valid independently of any reputation attached to the speaker, is the core essence of the scientific method. The edge case is mathematical proof: nobody would accept a mathematical theorem because it came from <famous person>. Instead it must be accompanied by a proof, a proof so detailed that everyone can verify it. Over 2000 years ago, Plato (in his Meno) used a slave boy as his example of "everyone", while today we are even more stringent: even a machine (e.g. Isabelle/HOL, Agda, Coq, Lean), the prototypical mechanised slave, must understand a proof.
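
To make the "mechanised slave" point concrete, here is a trivial Lean 4 sketch (my own illustration): the system accepts a theorem only because it can re-verify every step itself, regardless of who submits it.

  -- Lean 4: accepted on the strength of the proof term alone
  theorem my_add_comm (a b : Nat) : a + b = b + a :=
    Nat.add_comm a b

  example : 2 + 2 = 4 := rfl   -- checked by the kernel via computation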

Scholarly tools like anonymous peer review, double-blind reviews, double-blind experiments, anonymous grading, etc. all work precisely because of anonymity.

A key mechanism behind the positive qualities of anonymity is that the anonymous author no longer must fear punishment for the anonymous speech. The key invention here was writing, which enabled the separation (in space and time) between author and audience. This was amplified by the invention of the printing press (which amplified the audience), and later the internet.

I'm not a historian of writing, but I suggest the following hypothesis: contemporary arguments against online anonymity (e.g. that it leads to trolling) are structurally somewhat similar to (but not identical with) historical arguments against writing and the printing press.


>today, we are even more stringent: even a machine (e.g. Isabelle/HOL, Agda, Coq, Lean), the prototypical mechanised slave, must understand a proof.

This is not quite true yet. An absolutely minuscule fraction of math papers published today involve machine proofs.


And indeed, this will likely always be the case, as the true purpose of proving a mathematical result is almost never to actually get the boolean true/false output, but to illustrate how to solve the problem in the first place and what methods were helpful for that.

Which means that the proof doesn't have to be machine-checkable; in fact, it'd be less effective at doing its job if it included every trivial step. It's not code. It's pseudocode; the purpose is to communicate to humans.


> And indeed, this will likely always be the case, as the true purpose of proving a mathematical result is almost never to actually get the boolean true/false output, but to illustrate how to solve the problem in the first place and what methods were helpful for that.

As a working mathematician, I wholeheartedly agree with the second part of the quoted sentence! But I think the first part isn't quite right: the function of proofs in determining truth is extremely important, especially in pure mathematics, where we often can't check our results in any other manner (like conducting physical experiments or checking many cases by computer).

Bad experiences with proofs which later turned out to be faulty led Voevodsky to initiate his univalent foundations program (also called homotopy type theory), where all results should be checked by the computer. (For the particular area he was working on, no existing theorem prover turned out to be sufficient. He made a breakthrough in finding a new foundation to base theorem provers on, which made it possible to formalize results in his area of mathematics not only in principle, but also in practice.)

The hope is also that with computer-checked proofs, we can advance to new areas of mathematics which are currently impenetrable to our puny human minds. I'm very much looking forward to this future!

You can learn more about Voevodsky's vision in his slides and in a recorded lecture: https://www.math.ias.edu/~vladimir/Site3/Univalent_Foundatio... https://www.youtube.com/watch?v=O45LaFsaqMA


What do you think of Lamport's 'structured proofs'?

https://www.microsoft.com/en-us/research/uploads/prod/2016/1...


I love them! Done well, they can be extremely pleasant to read and understand.

A good friend of mine creates those as a hobby: https://pwacker.com/


XKCDs relevant to the end of your comment:

* https://xkcd.com/1227

* https://xkcd.com/1601


I think that view is rather rose-tinted, because it assumes (A) that a majority of the anonymous audience is able to independently assess a post's quality, (B) that they'll do so accurately, and (C) that it's so easy/fast that they'll actually do it.

For every "undiscovered genius whose treatise finally gets a fair shake", there are hundreds of cranks, trolls, or attention-seekers who -- freed of reputation -- can create a hundred mutated iterations of low-quality stuff until something takes off just due to random luck. Sockpuppets are another concern, since it's easier to fabricate a consensus approval.

None of this is especially new either -- BBSes, Usenet, and web forums have been around for decades.


>I think it's interesting to realize that your only real currency on anonymous message boards is the quality

Perhaps initially, but an equally effective currency is shock/bait value. And when reply counts are the only way to keep threads alive, it's often the most provocative threads, not necessarily the highest-quality ones, that garner replies and snowball. In fact, on many boards this leads to a pool of low-quality dreck as the user count increases.


It also means you get nothing for your hard work, and anyone can claim your work as their own. They won't be successful if they don't also understand it, but it still muddies the waters. If you try to come forward to put your name on it, you'll meet resistance and be forced to prove who you are, and that's going to require records from 4chan and maybe your ISP, which may no longer exist. I've forgotten whether 4chan post hashes (used to identify a unique, anonymous poster) can reasonably be brute-forced to find a collision.


The standard solution to this is to include an encrypted message that says something like "The author of this document is My Name", and then publish the decryption key when you want to claim authorship.
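
A common variant of the same idea is a hash commitment rather than an encrypted message. A minimal Python sketch (generic illustration, not anything 4chan supports natively):

  import hashlib, os

  claim = b"The author of this document is My Name"
  nonce = os.urandom(16)          # random salt so the claim can't be guessed
  commitment = hashlib.sha256(nonce + claim).hexdigest()
  # Post `commitment` alongside the anonymous proof.
  # Later, reveal `nonce` and `claim`; anyone can recompute the digest
  # and confirm it matches what was posted at the time.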


I think it would be more accurate to say it's not exactly quality that matters, but rather the ability to get attention. On 4chan, sometimes that's through a good quality math proof, sometimes it's through a live update of you murdering someone.


Quality is in the eye of the beholder.


Here are the archived original 4chan posts: https://warosu.org/sci/thread/S3751105#p3751197

There are also a bunch of rather unfortunate new comments at the end of the archive from the last few days, now that the link has obviously gained some public attention. However, amid all the new nonsense, the recent one that begins "I think I solved it" is pretty interesting:

  --Anonymous Sun Oct 28 14:10:06 2018--

  I think I solved it.

  First calculate the max string which is the number of permutations multiplied by n.
  Then divide the max string by n - 1
  Then add to the answer the fibonacci number located at 2(n-3)
  
  int max_string = n! * n;
  int answer = max_string / (n - 1);
  answer += fib(2(n-3));
  
  Example for n = 5;
  
  max_string = 5! * 5 = 600
  answer = 600 / 4 = 150
  answer += fib(2(5-3)) = fib(4) = 3
  answer = 153
  
  Here is the first 14 using this method:
  
  2 -> 3
  3 -> 9
  4 -> 33
  5 -> 153
  6 -> 872
  7 -> 5901
  8 -> 46135
  9 -> 408384
  10 -> 4032377
  11 -> 43909467
  12 -> 522549784
  13 -> 6745945965
  14 -> 93884331311


I don’t think there is anything in this idea. Note that

  n! * n / (n-1)
  = n^2 * (n-2)!
  = n * (n-1 + 1) * (n-2)!
  = n! + n * (n-2)!
  = n! + (n-1 + 1) * (n-2)!
  = n! + (n-1)! + (n-2)!
So the conjecture here is that the length of the minimal superpermutation is

  n! + (n-1)! + (n-2)! + fib(2n - 6).
The only thing to be said in favour of this idea is that it lies between the known lower and upper bounds, so we can’t yet prove it’s wrong. But there is no reason at all to expect Fibonacci numbers to appear in this context.

The most plausible conjecture we have right now is that the length of the minimal superpermutation is

  n! + (n-1)! + (n-2)! + (n-3)! + (n-4)
for n > 3. We can’t prove this either, but at least there is reasonable evidence for it.
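
A quick sketch comparing the two formulas (my own illustration; the Fibonacci indexing is chosen to match the poster's fib(4) = 3):

  from math import factorial

  def fib(k):                       # fib(1) = fib(2) = 1, so fib(4) = 3
      a, b = 0, 1
      for _ in range(max(k, 0)):
          a, b = b, a + b
      return a

  def fib_conjecture(n):            # the 4chan poster's formula
      return factorial(n) * n // (n - 1) + fib(2 * (n - 3))

  def plausible_conjecture(n):      # n! + (n-1)! + (n-2)! + (n-3)! + (n-4)
      return sum(factorial(n - i) for i in range(4)) + (n - 4)

  for n in range(4, 9):
      print(n, fib_conjecture(n), plausible_conjecture(n))
  # agrees at n = 4, 5, 6 (33, 153, 872); first diverges at n = 7 (5901 vs 5907)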


For n = 7, their formula gives 5901 and yours gives 5907. How feasible would it be to verify which one's correct? Exhaust every 5901-long sequence? Or is there some smarter way?


No one has yet managed to solve n=6 completely, so I think solving n=7 is a long way off.

On the other hand, I think we will soon have an improved lower bound that is strong enough to refute this formula.


I wonder if someday someone will anonymously post a solution to the P vs NP problem, without fully understanding the scope and effect of the solution on the world.

Sometimes not knowing the history and hardness of a task helps a novice attempt an impossible problem and arrive at a solution, completely ignorant of the history of the problem.


"He didn't know it was impossible, so he did it".

That's one of the best mottos I've ever heard!


This almost certainly won't happen. First, because P vs NP is a very well-studied problem, and many complex technical constraints have been proved about what kinds of things can and cannot be part of a polynomial-time algorithm for an NP-complete problem. There's no way to be educated enough to express something that could possibly be a well-formed proof (and not be rendered meaningless by its vagueness) without knowing enough to navigate these constraints, and therefore appreciating the magnitude of the problem.

Second, because the proof wouldn't be useful outside of arcane theory. If P != NP, that verifies something everyone already assumes, so it wouldn't change much except as an abstract curiosity for the few experts able to comprehend what it means.


Out of curiosity, if someone proved that P does not equal NP, what kind of scope and effect would it have on the world outside academia? P == NP would obviously be a world-transforming proof.


The actual result of "P =/= NP" wouldn't actually be very interesting or informative - after all, we're all pretty sure of that anyway.

What would be very interesting - and likely give pretty deep insights into the very nature of computation - would be how exactly they proved it.


Right. The important thing about the Clay Mathematics Millennium Prize is that it is a set of well-chosen SMART-ish goals. They want to incentivize work on really hard fields of mathematics but they have tried to do this by choosing results that are each not too intimidating. The Navier-Stokes prize, for example, allows you to assume all of the nicest features of Navier-Stokes problems in practice -- basically all of the fluid flows are well below supersonic and are occurring in highly homogeneous, nice fluids -- and ask the most basic mathematical question: does a smooth solution to these equations always exist? That question by itself is not so important, but it's specific and measurable. But it's a bit of a "reach" -- you would have to have some cutting insight about turbulence in order to answer this question one way or the other. That cutting insight is what they're trying to incentivize.

P vs NP is the same: if solutions to a problem are easy to check, is there always some better way to analyze the mechanics of the checker to make those solutions easy to find, so that we aren't stuck with brute-forcing it? Whether the answer goes one way or the other, the point is that solving the problem would have to provide some insight to the effect of "here is a periodic table of elements for all of the 'easy' algorithms -- and here are the properties of all the 'molecules' made by combining those building blocks." And only once someone advances our understanding with those cutting insights can we say "yes we can always reach into the verification mechanism to understand it well enough to build a better-than-brute-force algorithm" or "no, here is such a thing that algorithms cannot do, it's not just that I am not smart enough to find a way for them to do this -- they are fundamentally incapable of doing it faster."


It would probably give us some new insight on what computation actually is. Our inability to formally prove class separation suggests we're missing something.


As I understand it, it would have little effect, since it's implicitly assumed to be that way. I might be wrong, though.


The RSA algorithm is based on factorization, the reverse problem of which is in NP. If its difficulty turned out to be a low-degree polynomial (degree 3 or lower, though I might be wrong), that could be used to recover private keys from public ones in a matter of days.


The proof itself would not give a factorization algorithm.

Even if P = NP, prime factorization could still be very time-consuming in practice. The prime factorization problem is in the NP class, but we don't know whether it is NP-hard.


Factorization is not NP-complete; it's "easier"


It's definitely in the class, but it's undecided which subclass it belongs to.





If anyone is interested in a practical attack related to this problem, check out Samy Kamkar's Open Sesame [1]. It leverages de Bruijn sequences (explained in the post) to create an efficient sequence containing all the required codes.

The short description: a project that uses a children's radio toy to open a garage door without knowing the correct code.

[1] http://samy.pl/opensesame/


I've done this on a fire alarm panel too. It checked only the last digits, so even though the code was 3 digits long, you could try all combinations with 1002 button presses rather than 3000.

These are slightly different problems, though, because codes like 111 are allowed in this case but repeated symbols aren't in the one the article is about. It's much easier (according to my mathmo friend who solved it).
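
For the curious, here's the classic de Bruijn generator in Python (a sketch adapted from the standard Lyndon-word construction), which reproduces the 1002-press figure:

  def de_bruijn(k, n):
      # cyclic sequence over k symbols containing every length-n string exactly once
      a = [0] * k * n
      seq = []
      def db(t, p):
          if t > n:
              if n % p == 0:
                  seq.extend(a[1:p + 1])
          else:
              a[t] = a[t - p]
              db(t + 1, p)
              for j in range(a[t - p] + 1, k):
                  a[t] = j
                  db(t + 1, t)
      db(1, 1)
      return seq

  cycle = de_bruijn(10, 3)          # 1000 digits, cyclic
  presses = cycle + cycle[:2]       # unroll the wrap-around
  print(len(presses))               # 1002, vs 3 * 1000 = 3000 naively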


I’ve seen a couple of articles about this problem but never quite understood the difference between it and simply the number of permutations. This paragraph from the article does a great job (finally!) of showcasing it with an example:

> If a television series has just three episodes, there are six possible orders in which to view them: 123, 132, 213, 231, 312 and 321. You could string these six sequences together to give a list of 18 episodes that includes every ordering, but there’s a much more efficient way to do it: 123121321.

Rephrasing it a bit, the problem is generating the shortest string of tokens (with repetition of the N tokens) in which every possible permutation of those N tokens is a substring.
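
Or, as a quick Python sketch of the property (just the check, not the search):

  from itertools import permutations

  def is_superperm(s, n):
      # every permutation of the first n digits must appear as a substring
      return all("".join(p) in s for p in permutations("123456789"[:n]))

  print(is_superperm("123121321", 3))   # True: 9 symbols cover all 3! = 6 orderings
  print(is_superperm("12312321", 3))    # False: 132 and 213 never appear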


I highly recommend reading his novel that's also mentioned in the article: "Permutation City" [1]. Apart from being a great story, it provides a lot of food for thought.

[1]: https://en.wikipedia.org/wiki/Permutation_City


Agreed. It's a must-read for anyone interested in a thrilling story mixing ethics of simulated realities, uploading of consciousness, self-modification and finite automata.

Simulated beings are a repeated topic in Egan's stories. Some of them are freely available online, for instance https://www.gregegan.net/PLANCK/Complete/Planck.html (where simulated beings create clones of themselves which are to fall into a black hole, hoping to gain insights on fundamental physics) and https://www.gregegan.net/INCANDESCENCE/00/Crocodile.html (where a couple, after having lived for more than ten thousand years, decides to embark on a final adventure, leaving all loved ones behind because of differences in time perception).


I read Diaspora last year and Permutation City this year. They both blew me away. Some of the most original, thought provoking science fiction I've ever read.


Third that. This book is incredible and gripping. There is no soft intro, you are thrown into the story from the first paragraph.


I'm happy to notice that mathematics is no longer confined to academic journals: anyone can contribute, at any level. I wish for a similar tearing-down of academic walls in other fields, especially physics.


I'd imagine it's possible for theoretical physics, as it is in math. Otherwise, one problem is that you start needing advanced equipment and laboratories; the barrier to contributing is higher.


This case is exceptional in that the problem and solution can both be explained in fairly elementary terms. Combinatorics is really the only field of mathematics with many such opportunities available.


I think computer science is another field where individuals can potentially break through the academic wall, because you only need a decent desktop to do your research... (I think quite a lot of researchers do stuff in AI, graphics, etc. using gaming-level PCs instead of huge workstations/servers.)


This is true in general, but deep learning with lots of hyperparameters gives an advantage to organizations with clusters that can run a huge hyperparameter search quickly.


SigOpt (YC W15) has an academic program that allows full access to their optimization platform for academic use [1]. Hundreds of academics around the world have used it and there are already dozens of papers that have been published that have benefited from it [2].

These techniques can be orders of magnitude more efficient than a standard hyperparameter search and really cut down the barrier to entry for these types of results.

[1]: https://sigopt.com/solution/for-academia/

[2]: https://sigopt.com/research/publications/


> math whiz and novelist

Somehow I suspected it was Greg Egan before clicking on that link.


I follow him on Twitter. I expected a lot of SF-related posts, but it's a LOT of maths and physics, including complex simulations:

https://twitter.com/gregeganSF


Greg Egan is also a frequent contributor to The n-Category Café, a forum for the particular branch of mathematics I'm working in. I think it's terrific to meet your favorite science fiction author in a place like that.


It reminds me a bit of the quest to find "God's number", the minimum number of moves in which you can always solve a Rubik's Cube. There, the upper and lower bounds were pushed closer and closer by proofs until they finally met, meaning the answer was exactly 20. (Well, the lower bound sat at 20 for years while the upper bound was pushed down over time.)

> Within a day or two, he had come up with a new upper bound on the length of the shortest superpermutation for n symbols: n! + (n – 1)! + (n – 2)! + (n – 3)! + n – 3

> The anonymous 4chan poster’s lower bound, meanwhile, was tantalizingly close to the new upper bound: It works out to n! + (n – 1)! + (n – 2)! + n – 3

I.e., the upper bound has been lowered to

  n! + (n – 1)! + (n – 2)! + (n – 3)! + n – 3
The lower bound has been raised to

  n! + (n – 1)! + (n – 2)!            + n – 3
If the mathematicians can close the gap of (n-3)! from either side, then they have it.
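
A quick sanity check of that gap (my own sketch):

  from math import factorial as f

  def upper(n):   # Egan's construction
      return f(n) + f(n - 1) + f(n - 2) + f(n - 3) + n - 3

  def lower(n):   # the 4chan poster's bound
      return f(n) + f(n - 1) + f(n - 2) + n - 3

  for n in range(4, 9):
      print(n, lower(n), upper(n), upper(n) - lower(n) == f(n - 3))   # gap is (n-3)!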


This is a common proof strategy in math: if you want to show two values a and b are equal, it is often easier to prove separately that (1) a cannot be greater than b and (2) a cannot be less than b than to prove equality directly.


Apologies if this sounds dumb, but isn't P(n) = n!? In this case, n = 14 (episodes), which gives factorial(14) = 87,178,291,200, covering every possible arrangement of the 14 episodes. Where is this 93,884,313,611 coming from? Or am I making a fool of myself here?


Yes, so there are n! = 87,178,291,200 different permutations of those 14 episodes. You want to watch the 14 episodes in every possible order, i.e. in each of those permutations. You could just watch all of those permutations consecutively, for n x n! episodes in sequence, which would be 1,220,496,076,800 episodes (or, well, 14 episodes per permutation).

However, by exploiting overlap you can do much better: roughly 93.9 billion episodes (93,884,313,611 is the current lower bound), which is really quite remarkable, because it is just about 1.08 episodes per permutation.

Example with 3 (from the article):

3! = 6 permutations

Bad way to watch all: concatenate all permutations

A) 123 132 213 231 312 321 (= 3*6 = 18 episodes, or 3 episodes per permutation)

Good way to watch all permutations:

B) 123121321 (= 9 episodes, or 1.5 episodes per permutation)

Note that each of the permutations in A) is contained in the string in B).


Ah, I see it now. Thank you for explaining it.


That's the number of permutations. But we're allowing overlaps via a sliding window, so the number can be lower. It's easier to explain with an example - the article has a nice diagram.


n - 1 + n! <= P(n) < n * n!

The lower bound is achieved when the permutations overlap perfectly, e.g. when n = 2:

1, 2, 1

The upper bound is simply the concatenation of all permutations. You can get lower than this by exploiting the fact that some permutations' suffixes are other permutations' prefixes.
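
A naive greedy sketch of that suffix/prefix exploitation (illustrative only; it is not guaranteed to be optimal):

  from itertools import permutations

  def greedy_superperm(n):
      # repeatedly append the unused permutation with the largest overlap
      todo = {"".join(p) for p in permutations("123456789"[:n])}
      s = todo.pop()
      while todo:
          best, best_k = None, -1
          for p in todo:
              k = max(j for j in range(n + 1) if s.endswith(p[:j]))
              if k > best_k:
                  best, best_k = p, k
          s += best[best_k:]
          todo.remove(best)
      return s

  for n in range(2, 6):
      print(n, len(greedy_superperm(n)))   # well under the n * n! concatenation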


For those interested in some background info, here's a Numberphile video explaining Superpermutations: https://www.youtube.com/watch?v=wJGE4aEWc28


I'm curious how this relates to some of the things discussed in Vol. 4 of Knuth's work. In particular, this sounds directly related to Algorithm C (permutation generation by cyclic shifts) of 7.2.1.2.


Check the original thread here:

http://4watch.org/superstring/


I can think of at least one prominent mathematician and anime fan who might've written that anonymous proof.


Wait till this appears on LeetCode and gets included in the FAANG whiteboard interviews to be solved in 15 minutes.


I know you’re joking, but apparently it was given as a programming problem in a contest on at least one occasion.

See here: http://www.njohnston.ca/2014/08/obvious-does-not-imply-true-...


Here’s another more recent case where it appeared in a coding contest: https://discuss.codechef.com/questions/133115/wrong-problem-...


It's not a joke: https://leetcode.com/problems/cracking-the-safe/

It has been asked in real interviews.


That’s a different (rather easier) problem: note that the safe combination can contain the same digit more than once.

The solution to this problem is given by de Bruijn sequences: https://en.wikipedia.org/wiki/De_Bruijn_sequence


Ah, I see. Thanks


Yeah, I was thinking the same thing. Seems very similar to this problem: https://leetcode.com/problems/cracking-the-safe/

And can confirm that at least one FAANG company has asked this in a real interview.


Haven't taken the time to read through this properly, so forgive the question - but is this any different from the problem above?

And if not - if someone solves this programming task on LeetCode, have they not then also solved the permutation problem from the OP?


It is different because a safe combination may contain the same digit more than once, so the intended solution is a de Bruijn sequence: https://en.wikipedia.org/wiki/De_Bruijn_sequence


Real life Will Hunting...?



Since that discussion was mostly about whether/how to cite the finding, maybe we can have another discussion about the more interesting points: what the finding was, and how it was found.

(The submission title is by far the biggest influence on what comments get posted.)



