Interesting that the folks who wrote a proof based on the 4Chan user's post listed him as a first author. In all of my experience, Mathematics truly seems to be the field with the highest integrity.
> Mathematics truly seems to be the field with the highest integrity.
Mathematician here. I'd like to think we have high (not sure about highest) integrity. But I should also point out that the custom in the field is to list authors alphabetically by family name, and the paper in question (https://oeis.org/A180632/a180632.pdf) does not break with custom if you view "anonymous" as "family name"...
Once, talking with my wife (a biologist), I (a mathematician) said something like, "I never get first author because my name is so late in the alphabet", and she stared at me like I had just sprouted a second head. Confusion and shock.
That's when she had to sit me down and explain that in her field, first author is super mega important, not just whoever happens to be listed first.
errr... they are so hierarchically oriented they add a different order of significance at the other end of the list of authors, so being first OR last is good
> the custom in the field is to list authors alphabetically by family name
This is only true in some fields of mathematics; many areas of mathematics do not follow this rule. (Also mathematician, and none of the papers I've been involved with have done the alphabetical thing.)
I'm a mathematician, and some years ago I wrote a joint paper with some philosophers. I suggested the paper, but we all did comparable amounts of work on it: they insisted that my name go first (which was non-alphabetical). It made me really quite uncomfortable, as I imagined that others would assume I was some kind of arrogant arse who had insisted on precedence.
It’s because it’s the hardest field to bullshit. Everything is black and white and you can’t just make up fake data.
Conversely, psychology and economics seem to have the most problems in terms of reproducibility and fraud. This is because they are the easiest fields in which to fake data, and their results are interpreted according to less precise epistemology.
There’s no ‘Austrian school’ vs ‘Chicago school’ in maths (to my knowledge).
Also, the general public isn’t interested in pure maths, so there are fewer incentives to fake data so you can get a nice press release, or publish your new book, or get the government bureaucrat to subsidise your ‘research.’
Not to troll, but these kinds of comments (an opinion, followed by the caveat "I know nothing about this") are growing more common on Hacker News. Maybe consider whether you're adding value by expressing your opinion, in view of your awareness that it's completely uninformed?
That’s a fair call. I didn’t mean that I know absolutely nothing, just admitting that I don’t read much sociology.
My philosophy is that it’s still useful to give an opinion on something that interests you, even if it is probably wrong. If I am wrong, someone can correct me and I learn.
Also I suspect admitting ignorance leads to replies arguing in better faith since a discussion isn’t adversarial (I admit I’m not married to my opinion because I’m ignorant)
“There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'”
Absolutely nothing wrong with forming opinions in the absence of deep understanding. We do it all the time. It can be fun and usefully provocative to engage in discussions as an interested but ill-informed outsider. Providing first-principles objections can even force people to question their assumptions when engaging or explaining.
But, and it's a huge but - expressing a broad opinion on an issue, when knowing nothing about it - as opposed to say asking a good faith question - isn't particularly useful to anyone.
My comment started a couple threads of discussion, and I learned something new about pure Mathematics. You are getting “useful to me” and “useful to anyone” mixed up.
I think Asimov would disagree with your interpretation.
“I believe that every human being with a physically normal brain can learn a great deal and can be surprisingly intellectual. I believe that what we badly need is social approval of learning and social rewards for learning.
We can all be members of the intellectual elite and then, and only then, will a phrase like "America's right to know" and, indeed, any true concept of democracy, have any meaning.”
What level of expertise do you think is required to comment on something? Would you consider yourself well versed in Internet ethics?
Also, your comment ironically expresses a broad opinion instead of asking good faith questions.
Weren't all sciences born from philosophy? Starting from the journey of understanding the meaning of something, and graduating to measuring it to further understand why the world works the way it works?
Social sciences, economics, and medicine are relatively immature (hundreds of years versus thousands) compared to the hard sciences. These fields all apply the scientific method to the best degree available, and the subject matter is pretty complex, therefore results are difficult to reproduce. They could do a better job at self-regulation, but maybe that's part of the maturation process.
Humans with their personalities and motivations are harder to "science".
Yes, I would say that science is a branch of epistemology that uses evidence to generate knowledge to increase predictive power. I think the difference between hard and soft sciences is just how tangible your predictions are.
Soft fields are definitely harder to research, but instead of acknowledging this, many soft researchers ignore epistemological limits to make their research sound ‘harder.’
A good example is this psych researcher deriving equations of emotion based on fluid mechanics [0]. Somehow people in these fields don’t understand and/or don’t care, and allow this bullshit to continue.
I think many or most soft research programs are currently in “degenerating” territory (Lakatos)
I think they mean consensus about which axioms to accept. There could be competing schools of mathematics along those lines, I think there just happens not to be.
If you did write a paper that assumed different axioms from the norm, you would just state that you had done so! Because mathematics is axiomatic, mathematicians are happy to play around with axioms as long as it leads to something interesting.
Ah that makes sense, but do they really matter? If I want to use calculus to design the thickness of a bridge, does the selection of fundamental axioms matter at all?
Don’t all roads lead to Rome? It’s like an engineer deciding to use SI or Imperial units, people have their opinions but at the end of the day both work.
I think in theory, there could be very different axioms that both have so many open questions that competing schools vie for the best talent to explore 'their' space. Whether that's a situation likely to ever occur I have no idea.
I may just be a biased engineer, but until you find a way to apply your theoretical maths, surely it doesn’t matter if there are theoretical differences between approaches?
In the same way some heterodox political philosophies may result in new moral systems, but it doesn’t really matter that these moral systems are different to the status quo, until someone uses it as an ideology for their revolution.
It’s probably good to have a variety of theory to choose from, except when your different theories result in different practical outcomes.
Hardly 'competing', more like (say) Commodore 64 vs. Nvidia GPU demo scenes: you pick your arena and push what can be done within constraints (axioms).
In mathematics you might compare groups that work with vs. without the Axiom of Choice: you can have one group satisfied by an existence proof that asserts X exists because otherwise a contradiction would arise, and another group that only accepts existence proofs that demonstrate a means to construct an example of X.
But is a Hilbert system easier to run on a computer, since its many axioms clarify the cases where contradictions or certain inference rules are used, which might otherwise be hard to program? For example, off the top of my head: in predicate logic, how would you encode an inference rule like exists-elimination? It's tricky enough for me to think about, and encoding it for a computer is harder; I can't remember whether something like this falls under the undecidable problems category or not.
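For what it's worth, proof assistants encode exists-elimination directly as a checkable term-level rule. A minimal sketch in Lean 4 (the names p, goal, and step are just illustrative):

    -- Exists-elimination: from a proof of ∃ x, p x, and a proof that
    -- p x entails the goal for an arbitrary x, conclude the goal.
    example (p : Nat → Prop) (goal : Prop)
        (h : ∃ x, p x) (step : ∀ x, p x → goal) : goal :=
      Exists.elim h step

Checking a given proof term like this is decidable; it's proof search for full predicate logic that runs into undecidability.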
There can be uncertainty about unproven results (e.g., P vs NP), or about results that are too complicated for people to verify (there have been a number of these, where the proof essentially creates a new branch of mathematics).
I’ve heard it has its own version of problems, though. In particular, that it’s becoming so dense that not enough people are revalidating the proofs. There have been examples on HN of published proofs that existed for years and years before someone pointed out they were wrong. To a non-mathematician this sounds like its own version of the replication problem.
This is somewhat overblown, and only really an issue for some particular theorems and obscure corners of mathematics. And this isn't a recent problem: Hilbert's proof of the Nullstellensatz in the early 20th century had logical errors, but the result is otherwise true.
If a proof is used by 3 people it won't have much scrutiny, but once major results start to be based on the proof, it'll get reviewed more carefully and either accepted or rejected in the long run. The ABC conjecture is probably the biggest example.
This is not true at all; Brouwer truly thought the original Aristotelian concept of the "law of excluded middle" was epistemologically unfounded. I would recommend his original paper introducing intuitionistic mathematics, but it's very dense. You can refer to: https://plato.stanford.edu/entries/brouwer/
Later, he argued that he had found mathematical counterexamples to LEM. From the above source:
> “Intuitionist Reflections on Formalism” of 1928 identifies and discusses four key differences between formalism and intuitionism, all having to do either with the role of PEM or with the relation between mathematics and language. Brouwer emphasises, as he had done in his dissertation, that formalism presupposes contentual mathematics at the metalevel. He also here presents his first strong counterexample, a refutation of PEM in the form ∀x∈R(Px∨¬Px), by showing that it is false that every real number is either rational or irrational. See the supplement on Strong Counterexamples.
I've mentioned this before on here, but mathematics papers don't use the usual rules for author ordering. By convention, more or less the entire field has agreed that authors shall be listed alphabetically. So, Max Zorn probably got listed last on every single multi-author paper he published, while Odd Aalen[0] has probably been listed first on every paper he's published.
To prove that I'm not just making this up out of my ass, here[1] is a statement from the American Mathematical Society that talks about it. I'll quote the meat of it here:
> In most areas of mathematics, joint research is a sharing of ideas and skills that cannot be attributed to the individuals separately. The roles of researchers are seldom differentiated (in the way they are in laboratory sciences, for example). Determining which person contributed which ideas is often meaningless because the ideas grow from complex discussions among all partners. Naming a "senior" researcher may indicate the relative status of the participants, but its purpose is not to indicate the relative merit of the contributions. Joint work in mathematics almost always involves a small number of researchers contributing equally to a research project.

> For this reason, mathematicians traditionally list authors on joint papers in alphabetical order. An analysis of journal articles with at least one U.S.-based author shows that nearly half were jointly authored. Of these, more than 75% listed the authors in alphabetical order. In pure mathematics, nearly all joint papers (over 90%) list authors alphabetically.
Exceptions do exist, as alluded to by the "over 90%" number mentioned in the AMS statement. But, many of those are caused by transliteration artifacts (e.g. Author1 and Author2 are in alphabetical order in the language the paper was originally published in, but the English transliterated versions of their names are not).
This is all true, but it's also reasonably common to make an exception in special circumstances, particularly where one of the authors made the key discovery, and move that author's name to the front.
For example, the recent papers on aperiodic monotiles have Dave Smith as the first author, even though his name doesn't come first alphabetically.
(I can also think of another recent example, which modesty forbids me to detail.)
In the case at hand, we did deliberately intend anon to be the lead author. (I was one of the ‘coauthors’ who helped to write it up.)
Fun fact. Max Zorn was at Indiana University when I was a grad student there in the '80s. He hated to be known for Zorn's Lemma instead of the other work he did. He thought that was just a fairly trivial observation.
Hah. Only on HN does one mention a pseudorandom mathematician, then run into someone who was a student in their department 35-40 years ago lol....
You know how the joke goes, right? “The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s lemma?” [0]
I'm honestly not sure I could have resisted the temptation to ask him "What's yellow and equivalent to the axiom of choice?"[1] one day.
Okay, nevermind. I would have resisted, but I'd be chuckling about it off and on the whole time when I was sure he wasn't around, while I was doing my degree.
In all seriousness, though, proving the equivalence of AC, ZL, and WO was probably my first venture into "real" abstract mathematics in undergrad. For essentially the first time, there was no picture I could draw that would have any semblance of accuracy or utility, and yet at the end, the result popped right out just the same.
Unfortunately, I didn't make it to the independence of the Continuum Hypothesis that year, and had to move on to other courses. :/
---
[0]: AC, ZL, and WO are also equivalent to the Hausdorff maximal principle: in any poset (P, ≤), every totally ordered subset S is contained in some maximal totally ordered subset T. In some ways, HM is kind of the "dual" of ZL: swap totally ordered subsets for elements and ⊆ for ≤, push the whole thing up into the power-set realm, and I think you just get ZL trivially. Of course, I always found HM much more intuitive than ZL... basically, AC > HM > ZL > WO in my mind, in descending order of intuitiveness.
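For concreteness, rough LaTeX statements of the four (my own paraphrases, so take the exact formulations with a grain of salt):

    \begin{itemize}
      \item[(AC)] Every family $(X_i)_{i \in I}$ of nonempty sets has
                  nonempty product: $\prod_{i \in I} X_i \neq \emptyset$.
      \item[(ZL)] If every chain in a poset $(P, \le)$ has an upper bound
                  in $P$, then $P$ has a maximal element.
      \item[(WO)] Every set admits a well-ordering.
      \item[(HM)] Every chain in a poset $(P, \le)$ is contained in a
                  $\subseteq$-maximal chain.
    \end{itemize}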
I never met Paul Cohen, but he started out in my field, analysis, not logic. When he realized he would not make it to the top, he started looking around for easier pickings. He came up with "forcing", which applies ideas from game theory.
When I visited IU in the '90s there was a new stoplight on E 3rd. Max had gotten hit by a car while making his daily trundle to his office in Swain Hall East, and his colleagues somehow convinced the city of Bloomington to put one up.
While I was misspending my youth playing 5-minute chess at Bear's Place up the street, Raymond Smullyan showed up and asked if he could kibitz. I could write a very short book titled "What is the Answer to that Question?" It would be much shorter than https://www.amazon.com/Million-Zeros-Douglas-Crockford/dp/19.... Bjarne told me the guy had gone crazy. He is right.
I actually picked up a copy of Cohen's book [0] on CH a few weeks ago. It's in my "actually going to read this" pile right now. Looks pretty accessible, even for an ersatz graph theorist/combinatorialist such as myself.
In math, it's primarily because they want credit for the paper to go to the collaboration. This has the secondary effect of eliminating arguments about whose names go where. Is it the same way in those fields?
This is kind of interesting to look back on. I checked the group really quick and they shut down the program trying to find a better solution to the n=6 problem in March of this year. The most upvoted post was also a bit optimistic in thinking this problem might be solved in a few weeks. I don't know much about it but I am assuming it has not been solved yet.
It's an action-packed short story about math truths. Super good.
> A truly wonderful story in which two math grad students discover that the things we consider to be "truths" in number theory are actually part of a dynamical system, subject to change over time and in competition with alternative "truths" that are equally valid at other "locations" in the number system.
I mean the title of Permutation City is more relevant. :)
But the one I linked to is an action/adventure story about discovering something new in math. So content-wise, it's more relevant. And I just liked it and want people to read it.
All I have to add, and this may be of interest to nobody but me: I remember this discussion back in the day. Not because of the number of permutations of orderings to watch, but because I was intrigued that people had different reasons for preferring different orderings for the shows. It just struck me as strangely beautiful.
This has (or had) real-world implications. Back when phone answering machines were a thing, and you could dial a PIN into your own machine to have it play back messages... well, I remember an article in 2600 (I think) that explained that there was a superpermutation that gives up the goods. It would take up to 3 or 4 minutes to dial in, but eventually you could start hearing the messages.
There was some mention of news reporters doing this to the phone numbers of local politicians, but that must have been speculative or outright bullshit (even then there would have been an applicable law against using such an exploit).
I wonder if the problem would be simpler if the superpermutation were treated as a repeating (circular) sequence. I.e., what is the shortest sequence (of some length m) such that, if the episodes were shown in an endless repetition of that sequence, viewers could start at any episode and watch the next m + k episodes to catch all permutations? This would remove both ends of the superpermutation being a special case, so to speak.
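For small cases you can explore this cyclic variant by brute force. A quick Python sketch (entirely my own, just to illustrate the cyclic reading; it finds the shortest circular sequence for 3 episodes):

    from itertools import permutations, product

    def is_cyclic_superperm(s: str) -> bool:
        """Every permutation of '123' appears in the cyclic reading of s."""
        unrolled = s + s[:2]  # unroll the circle enough for all length-3 windows
        return all("".join(p) in unrolled for p in permutations("123"))

    # The linear optimum 123121321 (length 9) also works cyclically, so the
    # search below is guaranteed to terminate at or before length 9.
    shortest = next(L for L in range(1, 10)
                    if any(is_cyclic_superperm("".join(w))
                           for w in product("123", repeat=L)))
    print(shortest)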
The article doesn't do a good job of explaining the constraint.
The sequence being generated is called a superpermutation [0].
From the Wikipedia article:
"""
...a superpermutation on n symbols is a string that contains each permutation of n symbols as a substring.
"""
In other words, construct a big long string made up of the N symbols where every permutation of the N symbols appears in it. Since there's the possibility of overlaps, you can do better than the naive method of just pasting all N! permutations together.
Note that this sounds very similar to a de Bruijn sequence [1] but is different, since a de Bruijn sequence asks for every possible sequence of N symbols (of length M, say), not every possible permutation. So a de Bruijn sequence would have 000, 001, 002, ..., 200, 201, ..., 222 in it, whereas a superpermutation would exclude (or at least not count) those sequences that aren't permutations, counting only 012, 021, 102, 120, 201, 210.
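To make the distinction concrete, here's a small Python sketch (the function name is mine) that checks the superpermutation property:

    from itertools import permutations

    def is_superpermutation(s: str, symbols: str) -> bool:
        """True if every permutation of `symbols` occurs in `s` as a substring."""
        return all("".join(p) in s for p in permutations(symbols))

    print(is_superpermutation("123121321", "123"))  # True: the article's n = 3 example
    print("111" in "123121321")  # False: a de Bruijn sequence would need this window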
Is a superpermutation just a more efficient annotation, or is it more helpful in solving an optimization problem? They talk about the travelling salesman problem in the article, but they don't exactly explain whether knowing the superpermutation helps solve it faster.
One goal is to find an 'efficient' annotation of a superpermutation. That is, "what is the minimum superpermutation string length on N symbols?", which is itself a sort of optimization problem. We can presumably get bounds on the minimum and maximum it can be, and maybe the 'optimal' shortest superpermutation has enough variation that the bounds aren't/can't be exact.
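For reference, the bounds at stake in the article, writing SP(n) for the minimal superpermutation length on n symbols (the lower bound is the anonymous 4chan argument; the upper bound is, as far as I know, Egan's construction, valid for n ≥ 7):

    n! + (n-1)! + (n-2)! + n - 3 \;\le\; \mathrm{SP}(n) \;\le\; n! + (n-1)! + (n-2)! + (n-3)! + n - 3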
As to what "practical" applications superpermutations have, nothing comes to mind, but considering how many applications de Bruijn sequences show up in, I'd be surprised if superpermutations didn't start cropping up from time to time in the future.
As to the travelling salesman/Hamiltonian path/cycle problem, my impression is that they used a clever construction of a Hamiltonian cycle on a hypercube/Cayley graph/high-dimensional-high-degree graph to show lower/upper bounds on the length of superpermutations. In other words, the implication is the other way: they construct a Hamiltonian cycle on a specially crafted graph to show lower/upper bounds, rather than use superpermutations to say anything about Hamiltonian cycles.
To get a flavor for how this works, I'll copy pasta the de Bruijn "construction" section of Wikipedia [0]:
"""
The de Bruijn sequences can be constructed by taking a Hamiltonian path of an n-dimensional de Bruijn graph over k symbols (or equivalently, an Eulerian cycle of an (n − 1)-dimensional de Bruijn graph).
"""
With a de Bruijn graph [1] being a specialized graph construction.
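As a concrete illustration, here's a Python version of the standard recursive construction (adapted from the same Wikipedia article; it generates one de Bruijn sequence per (k, n), and is not anything specific to superpermutations):

    def de_bruijn(k: int, n: int) -> str:
        """Generate a de Bruijn sequence B(k, n) over the alphabet {0, ..., k-1}."""
        a = [0] * k * n
        sequence = []

        def db(t: int, p: int) -> None:
            if t > n:
                if n % p == 0:
                    sequence.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return "".join(str(x) for x in sequence)

    # All 9 length-2 strings over {0,1,2} appear in the cyclic reading:
    print(de_bruijn(3, 2))  # 001021122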
> If a television series has just three episodes, there are six possible orders in which to view them: 123, 132, 213, 231, 312 and 321. You could string these six sequences together to give a list of 18 episodes that includes every ordering, but there’s a much more efficient way to do it: 123121321.
And with 7 substrings of length 3, it's provably minimal, since making every 3-substring a permutation of 123 forces every element after the first 2. Assuming without loss of generality that it starts with 1 2, it is forced to start with
1 2 3 1 2 3
but that already repeats the same permutation. So one substring must be wasted (in the article this is phrased as having to traverse one higher-cost edge in the corresponding graph).
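That argument can also be confirmed exhaustively in a few lines of Python (a quick sketch; the names are mine):

    from itertools import permutations, product

    def is_superperm(s: str) -> bool:
        """True if s contains every permutation of '123' as a substring."""
        return all("".join(p) in s for p in permutations("123"))

    # No string of length 8 over {1,2,3} works, so 9 is minimal for n = 3:
    print(any(is_superperm("".join(w)) for w in product("123", repeat=8)))  # False
    print(is_superperm("123121321"))  # True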
That ...,1,2,1,... in the middle bothers me; nobody wants to watch three in a row that don't include all three episodes. I'd rather have a duplicate triplet, or at least include the case as another potential bound, or classes of orderings where perhaps some numbers of elements have these defects while others don't.
You can interleave the orderings. If there are two episodes (1 and 2), we only have to watch three episodes in a row: 121. Then, we've watched all permutations -- 12 and 21.
The "14!" solution (where we watch 14x14! episodes) means there is no interleaving -- we just watch each permutation in turn. In our 2-episode example, we'd have to watch one extra episode (for a total of 2x2! = 4), e.g. in the order 1221.
The question is what's the shortest sequence that contains every permutation as a substring. If each episode after the 14th got you a new permutation, you would need 13+14!, but that's not possible. For example with 3 episodes, you could begin 12312, then any episode you choose from that position does not contribute a permutation. I think the shortest sequence goes 123121321.
I think they mean if say there were episodes a, b and c the orders to watch would be abc, bac, cba, acb, cab, bca. Now if you watched abcba you'd experience both orders abc and cba but only watch 5 episodes in a row instead of 6. So what is the minimum number of episodes to watch in a row to experience all six different orders?
Is this not the same problem as older numerical keypads on doors? The ones that open as soon as you enter the correct 4-digit sequence. The cops used a pattern to crack those fairly quickly, back in the 90s at least.
Are you actually allowed to use superpositions for this (watching the TV show in every order)? Presumably the whole point is that you get a different experience from watching a certain episode after watching others, and vice versa; and since our brains don't really exist in a state of quantum superposition, I'm not sure you can kill two birds with one stone. You're either watching it in the context of 'earlier than these episodes' or 'later than these episodes', but never really both.
It also isn’t a practical problem, even for a very low number of episodes. For n >= 2, you’d want to look both at #1 before you’ve ever seen #2, and at #2 before you’ve ever seen #1. As long as we’ve no way to zap memories, that is impossible.