Graduate Student Solves Decades-Old Conway Knot Problem (quantamagazine.org)
479 points by theafh on May 19, 2020 | 68 comments



Link to paper: https://arxiv.org/abs/1808.02923

I have not read the paper closely, but I have skimmed it, and it looks pretty exciting. This isn't a case of "here's a new invariant, and, oh, BTW, it works to show the Conway knot isn't slice." It's an actual new technique. And, at first glance, it looks like a pretty simple technique: I didn't immediately see anything here that wasn't a neat combination of low-dimensional topology and basic knot theory techniques.

I'd be interested to see what this technique could do with knots having more than 12 crossings.


Yeah, it's pretty cool! On one hand you have knot traces, which are spaces that completely determine whether a knot is slice, and on the other you have the Rasmussen s-invariant, which can sometimes tell whether a knot isn't slice. It turns out that knot traces contain different information from the s-invariant.
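
Schematically (my compression of the argument; C is the Conway knot, K' is the knot Piccirillo constructs with the same trace, and the computation s(K') ≠ 0 is done in the paper):

  X(C) \cong X(K') \implies (\, C \text{ slice} \iff K' \text{ slice} \,)
  s(K') \neq 0 \implies K' \text{ not slice} \implies C \text{ not slice}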

One way sliceness is studied is through knot concordance, which is a certain restricted kind of deformation of a knot through 4-d space. All slice knots are concordant to each other. The Rasmussen s-invariant is invariant under concordance. So, the particular pair of knots C and K' cannot be concordant since they have different Rasmussen s-invariants. One consequence is that it's not true that knots with diffeomorphic knot traces are concordant in general.
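
In symbols, the concordance consequence reads:

  s(C) \neq s(K') \implies C \text{ and } K' \text{ are not concordant}
  \text{yet } X(C) \cong X(K'), \text{ so } X(K_1) \cong X(K_2) \not\Rightarrow K_1 \text{ concordant to } K_2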

Another interesting thing is that C and K' are related by positive mutation, which makes them topologically concordant (weaker than concordance). Since K' is slice, this implies C is topologically slice (weaker than sliceness), though this was already known by Freedman's work because the Alexander polynomial of C is 1.
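
The Freedman result invoked at the end, stated as a one-line theorem (\doteq meaning equality up to units; the Conway knot satisfies the hypothesis):

  \Delta_K(t) \doteq 1 \implies K \text{ is topologically slice}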

Altogether, C is topologically slice but not slice (not the first example) while also being a positive mutant of a slice knot (which is what made the problem so difficult for so long).

The knot trace X(K) is a 4-dimensional manifold whose boundary is a 3-manifold called the 0-surgery of K. There was a conjecture that knots with diffeomorphic 0-surgeries are concordant (perhaps up to mirror images). While it was already disproved, this pair of knots gives another counterexample. In fact, she gives infinitely many pairs of counterexamples in an earlier paper: https://arxiv.org/abs/1702.03974


From a high-level overview, what she did feels mostly like a hacker approach: finding a side channel [1]. I wonder to what extent mathematicians think about side channels.

Instead of talking to the service/mathematical object (A) directly, you talk to another service/mathematical object (B) that leaks information about (A). Precisely, the information that you want.

The way she leaked that information was through a property called traceness, which apparently was underappreciated by knot theorists in terms of sliceness problems. Which makes sense; otherwise it wouldn't be an information leak. Finding an info leak, no matter what discipline you're in, is already amazing in itself.

As far as I understood the Quanta Magazine article, mathematical object (B) still had to be constructed, which only a person well-versed in knot theory could do. So not only did she find an info leak, she also built something entirely new that few people could have built (yep, the hacker analogy breaks here; this part is the "incredible builder" analogy).

This is so cool. Side channels are everywhere, even in math. Apparently, for knots it's called traceness.

[1] Not sure if side channel is the right word, but I view it as: something that leaks information about another thing. For example, air vibrations leaking information on what instructions the CPU is executing (I'm making this up, one would need very fine-grained air vibration data to see if this would be a side channel).


You could argue that one of the most famous proofs of the last 30 years, the proof of Fermat's Last Theorem, used such an approach. Ribet had shown that if FLT were false, a certain conjecture about elliptic curves (unrelated at first glance, and from a different area of mathematics) would have to be violated; Andrew Wiles then proved enough of that conjecture to rule out any counterexample.
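
The logical chain, compressed (a standard summary, not the commenter's wording):

  \neg\,\mathrm{FLT} \implies \text{a Frey curve } E_{a,b,c} \text{ exists (semistable)}
  \text{Ribet: } E_{a,b,c} \text{ cannot be modular}
  \text{Wiles: every semistable elliptic curve over } \mathbb{Q} \text{ is modular}
  \implies \text{contradiction, hence FLT holds}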


I think empath75 is right to point to dualities. I don't think proof-by-contradiction is really analogous to side-channels.


This is pretty common. I had a professor who worked in an applied math domain and said his whole career was based on finding the right change of coordinates.


Your prof might have just been parroting Grothendieck, who was famous for that:

https://en.wikipedia.org/wiki/Alexander_Grothendieck


Dualities are what you’re thinking about.

https://en.m.wikipedia.org/wiki/Duality_(mathematics)


My area was algebraic combinatorics, which is all about this kind of thing. Combinatorics, representation theory, and geometry share a bunch of objects related by some translations from one field to another. So you make progress in one field until you get stuck, then translate to another to make further progress... And repeat.


This story will go down in mathematical history. A graduate student attacks an old unsolved problem in her spare time and shows up with the solution a week later, inventing a new approach to topology in the process.


I wonder how much it helped the creative flow not to know it was a notoriously hard problem. No fear of failure, no stress, no feeling that something complex must be needed because everything simple had already been tried.


And in addition, it wasn't (quite) her field. (This shows somewhat in the paper linked elsewhere in the comments: you can kind of see the two different jargons in use.) The power of fresh eyes is often underestimated.


This. I've built my career off of diversity of experience - changing both the industry and functional role I work in almost every time I change companies.

It's amazing how impactful it can be to take an industry-naïve approach to a problem/project. A lot of disciplines have fundamentally similar challenges, but with solutions that evolved in completely different directions. Intractable problems or recurring issues in one industry can frequently be unblocked by plucking mature solutions or approaches from another, but so many people grow linearly within a single industry/discipline that such cross-seeding of concepts rarely actually has an opportunity to occur.


I must ask, and please do not take this the wrong way: how have you been able to switch consistently with such ease? And in what country do you work, where you are respected for industry-naïvety?


As a software engineer you can readily switch industries while still having deep knowledge of developing software. Practicing breaking down problems in different domains trains you in a more dynamic and flexible style of thinking (perhaps at the expense of deep expertise in a given area?).


> how have you been able to switch consistently with such ease?

Originally, it was anything but easy. But at this point in my career, the pattern itself is what sells it:

- I've never left a company with the same title or role I started with, and I've always left with substantially greater responsibilities than I started with. This holds true even for companies I was at for a year or less.

- I've never progressed linearly within the same industry or function when switching companies

- Almost all of my roles from IC to people-manager have involved high levels of trust and autonomy with minimal direct oversight and limited if any pre-existing structure, process, support, infrastructure, or direction.

The above demonstrates over a decade of reliably and successfully adapting between roles and environments, and laying it out as such when going through my background with a hiring company tends to preempt any reservations/concerns the hiring company has in that regard. I've also now amassed SVP, President, and C-level recommendations in pretty much every functional area, from individuals who have either directly or indirectly managed me, and I can pull in the appropriate reference to address any remaining hesitations around functional capabilities or ramp-up expectations. All of which has made my last few transitions significantly easier than earlier in my career.

> And in what country do you work, where you are respected for industry-naïvety?

I'm in the US. I wouldn't put it as being respected for industry-naïvety, which is one of the reasons you rarely see people grow their career the way I have. I just seek out opportunities where that's not a liability and am fully transparent when I'm in a situation where that may be an expectation. Standard needs get handled (or at least sanity checked) by those with a more standard background if available, abnormal/atypical/ill-defined/unknown needs filter through to me. The value and leverage I provide by bringing to bear my variety of skills, experience, and perspectives in those atypical or rapidly evolving situations more than makes up for the occasional crash course in the basics I need to fill in some industry/functional holes.

The caveat being that I'm pretty useless or even detrimental[1] in many environments - such as highly regimented companies where a cross-disciplinary mindset would be antithetical to the culture and cause undesired friction, or highly stable and mature environments where deviating from standard practices by introducing foreign concepts would destabilize the steady state of the existing situation. In those situations, I'm far more of a liability than an asset, and I will directly probe during the interview process to understand the context of the role and whether it's a good fit for what I bring to the table.

[1] While usually considered detrimental, I've also been placed in those situations with the express intent of introducing such a destabilizing factor or internal friction, as a way to shake things up and allow for change.


Yeah, I have the same experience: arriving at a place with habits that are old to me but new to them, bringing "new" ideas that blow their minds, then moving on and doing it again.

It's almost as if my role is more about changing constantly and "advising" like a consultant (which I usually despise) than staying 10 years maintaining old rotting software.

There's definitely value in rotation. My newest company, a 150-year-old venerable institution, is amazed at what one can bring from small startups. They have had me rotate from team to team to "bring the mojo" since I started: usually you pop open a profiler, show that that's not how you use a hashmap, and save a few million here and there for the users/clients... Now I'm the profiler guy, something I learned at my previous company while fixing a few memory leaks. And each job was like that; they never let me gather dust on a team doing ticket after ticket in soul-sucking sprints :D

Are you maybe someone who likes to give smartass suggestions about absolutely everything until someone lets you try?


This is definitely her field. Pretty much every individual topic in the paper was present at the conference mentioned in the acknowledgments, either during a talk or at the coffee breaks (not all put together, of course). I don't think she focuses on Khovanov homology and the Rasmussen s-invariant per se, but it's one of the main obstructions to a knot being slice. I think it's more than just fresh eyes: she could see farther than others because of how good she is at these geometric calculations for knot traces!


Maybe a meaningful way to solve some of these problems would be to assign them as optional homework to unwitting undergraduate students?

It's not the first time this has happened.


"the feeling something complex might be needed because everything simple was already tried."

I feel this hard. I (wrongly?) assume any relatively well-known, long-standing problem must be impossible for me to solve, because so many people much more experienced than me have already put many hours of work into it. But maybe I'm overestimating the actual effort being put into these sorts of things, or underestimating the amount of benefit fresh eyes can bring.


I'm really happy with one particular aspect of how this was reported, too. Somewhat touchy, but I want to give props to Quanta Magazine for not emphasizing her gender in order to spur more clicks. It's a proven tactic for getting more views, but it always seems to undermine the actual article, and I really appreciate them not doing it.



It's been known to happen. Would that all of our Ph.D.s were that easy.


Correction: Her MIT position (a Moore instructor, at least according to her own website https://sites.google.com/view/lpiccirillo/home ) is not a tenure-track job (see https://math.mit.edu/about/employment.php ). It's still one of the best academic jobs a fresh PhD can get these days (certainly more prestigious than a "mere" postdoc). I don't think you can get a tenure-track job right out of your PhD, even an MIT fake tenure-track job (they say it guarantees you tenure, just not necessarily at MIT).


> I don't think you can get a tenure-track job right out of your PhD, even an MIT fake tenure-track job (they say it guarantees you tenure, just not necessarily at MIT).

I know some (very bright) graduate students do get assistant professor positions at top universities right out of their PhDs; I think those are tenure-track?


This depends on the field and how successful the grad student was.

In lots of fields, including CS and different kinds of engineering, this is somewhat common. Some will defer the appointment a year and do a postdoc anyway because that provides time to prepare research problems before the tenure clock starts to tick.

In math and the sciences, you hear a lot more stories of multiple postdocs being required before you can be considered for a professorship. I don't have firsthand experience in those domains, however.


Who?

My guess is that these are "named postdocs" ("[some name] Assistant Professor", e.g. https://www.mathjobs.org/jobs/jobs/14065 or https://www.mathjobs.org/jobs/jobs/15707 ). Despite their names, they're limited to 3 years and only get renewed in exceptional circumstances.


John Pardon is a relatively recent example. He also became full professor at Princeton a year after graduating.


I see he was also involved with knots [1]. Can someone explain to me why this is such an active field in mathematics?

I always hear about it and topology. Makes me want to read a book on it...

[1] https://en.wikipedia.org/wiki/John_Pardon


One reason people care about knots in low-dimensional topology is that every compact ("finite-volume") orientable 3-dimensional manifold without boundary can be constructed by taking a 3-sphere (the set of points in R^4 unit distance from the origin), boring out tubes along a collection of disjoint knots, then gluing solid tori ("donuts") back in in a different way, a process called Dehn surgery. This is the Lickorish-Wallace theorem. Sort of the intuition is that if you take a 3-manifold and have worms eat out enough closed loops, the manifold loses its integrity and becomes indistinguishable from the complement of a collection of disjoint knots. (Lickorish's version of the proof involves a theorem that's colloquially known as the Lickorish Twist Theorem.)
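
Stated compactly (L a link in the 3-sphere, the n_i its surgery coefficients; Lickorish-Wallace even lets you take every coefficient to be ±1):

  M^3 \text{ closed, orientable} \implies M \cong S^3_{n_1, \dots, n_k}(L), \quad n_i \in \mathbb{Z}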

In particular, every 3-manifold is the boundary of a 4-manifold obtained in a way that's reminiscent of knot traces from the article. You take a disjoint collection of knots in the boundary of a 4-ball, then glue in 2-handles ("caps" in the article) along these knots, but with a slight change: you glue in the 2-handles with any framing whatsoever, not just the 0-framing as in knot traces (and actually using just +1-framings and -1-framings is sufficient). It's actually a remarkable fact in its own right that every 3-manifold bounds a 4-manifold; this is saying the 3-dimensional cobordism group is trivial. Other-dimensional cobordism groups are not trivial in general.
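
In formulas, a summary of the standard facts in play (X_n(K) denotes the n-trace; the article's knot trace is X_0(K)):

  X_n(K) = B^4 \cup_{(K,\, n)} \text{2-handle}, \qquad \partial X_n(K) = S^3_n(K)
  \Omega_3 = 0 \quad \text{(every closed orientable 3-manifold bounds a 4-manifold)}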

Every 3-manifold has a diagram, then, consisting of a multi-component knot (known as a link) with each component labeled by an integer (or a rational number if you are ok with "fake" surgeries). There is a whole thing called the Kirby calculus that gives a sufficient set of moves to go between any two such representations of a particular 3-manifold. An extension to this calculus went into Piccirillo's calculations with knot traces -- she cites the classic Gompf and Stipsicz for details.
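
For reference, Kirby's theorem says two such diagrams present the same 3-manifold iff they are related by isotopy together with two moves:

  (1) blow up / blow down: add or delete a split unknotted component with framing ±1
  (2) handle slide: replace one framed component by its band sum with (a framed pushoff of) another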

One use of this representation of a 3-manifold is to construct Reshetikhin-Turaev invariants, which are sequences of numbers associated to a 3-manifold. This is related to the Jones polynomial, and these invariants satisfy a number of wonderful properties that together mean they form a topological quantum field theory (TQFT). I don't know the physics, but I'm under the impression you can interpret it as having something to do with quantum states of anyonic particles.

For books, you might look at Adams "The Knot Book" or Prasolov "Intuitive Topology" to get a substantial taste of knots and low-dimensional topology.


I am not qualified to attempt to explain why knots are a hot topic...

I can recommend a book on topology though. Robert Ghrist’s book:

https://www.math.upenn.edu/~ghrist/notes.html

Do you like physics? If so maybe try John Baez’s book for more knot-centric inspiration:

https://www.amazon.com/GAUGE-FIELDS-KNOTS-GRAVITY-Everything...


Basically, the study of knots is the study of how the simplest 1-dimensional thing (the circle) can sit in 3-dimensional space. And it turns out that even this "simple" case is incredibly rich and difficult, so that's a reason to expect knot theory to be an inherently interesting thing to study. Topologists, and especially topologists specialising in 3-dimensional objects, were always interested in knots.

In the 1980s, Vaughan Jones discovered the Jones polynomial, a knot invariant which remarkably turned out to have deep connections to all sorts of things, including quantum field theory! This led to three decades and counting of intense study of the relationship between knots and fundamental physics. I'd like to say more, but I'm knot really qualified to speak about the connections to other fields. So that's basically the tl;dr of why so many people care about knots!
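
For a taste of how computable it is, the Jones polynomial V is pinned down by its value on the unknot plus a skein relation applied at any crossing (L_+, L_-, L_0 are diagrams differing only at that crossing):

  t^{-1}\, V(L_+) - t\, V(L_-) = (t^{1/2} - t^{-1/2})\, V(L_0), \qquad V(\text{unknot}) = 1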


> a property of knots which remarkably turned out to have deep connections to all sorts of things including quantum field theory

Does this mean that the strings in string theory are knots?


Absence of proof is not proof of absence?


A circle is 1-dimensional and not 2-dimensional?


Yeah, a disc (the surface bounded by a circle) is two-dimensional. A circle is one-dimensional because it only takes one number to describe where you are on the circle (think rotary dial).
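
Concretely, one number (the angle) suffices:

  S^1 = \{ (\cos\theta,\, \sin\theta) : \theta \in [0, 2\pi) \} \subset \mathbb{R}^2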


I'm not going to name names here but I'm aware of at least one person whose title is a "named" Assistant Professor and one whose title is (from what I can tell online) just "Assistant Professor".


Generally in the US there's no career path to becoming a tenured professor. (A lot of assistant professors, etc. sleep in their cars.)

You would have to have the right connections, or publish papers that were undeniably brilliant (i.e., be a celebrity).


> Generally in the US there's no career path to becoming a tenured professor.

It's true that, in general, there are far more graduating PhD students every year than tenure-track faculty positions. But I think "no career path" drastically overstates the difficulty. It's way easier than making it as a pro athlete, for example.

> A lot of assistant professors, etc. sleep in their cars.

Are you thinking of adjunct professors, who are not tenure-track? Assistant professors typically make at least $50k, and in the more lucrative fields (computer science, business schools, biopharma I think) >$100k is common.

> You would have to have the right connections, or publish papers that were undeniably brilliant. (ie. be a celebrity.)

You're correct that an average PhD student will probably not get any tenure-track faculty interviews, let alone offers. This is especially true in more resource-constrained fields like the humanities.


Her CV [1] lists her job title as Assistant Professor. Also, this quibble seems to drastically miss the point of the article?

[1] https://sites.google.com/view/lpiccirillo/research/cv


Yes and no. It really depends on how you're reading it.

It's certainly unusual for a grad student to move straight into a tenure-track position at a research university, although not so much at a primarily teaching-oriented institution.

If the point you're taking from it is "here's a cool breakthrough in knot theory and low dimensional topology," then her title is irrelevant. If the point is "this woman made a discovery so awesome that it was published in the Annals, and got her a tenure track position straight out of grad school," then it's quite relevant.


Oh, that means the CV is more up-to-date than her homepage. Still, that's 1 year between her PhD and her tenure-track position (an MIT tenure-track, by the way, is not a near-guarantee of tenure; MIT is one of the few maths programs that regularly don't tenure their tenure-tracks).

I've made that comment because the error made the article unbelievable to me, and probably made it unbelievable to other readers who are familiar with the current functioning of the academic job market in maths. I found it somewhat reassuring when I realized that the error was only a minor misunderstanding as opposed to the kind of fabulism I've gotten used to from news media.


Even in the most technically demanding and theoretical of disciplines (and in some sense, perhaps especially so) it is creativity and an ability (instinct?) to see possibilities that others don't that distinguish the best researchers. This is a wonderful example of that.


I have immense respect for researchers who venture into these types of disciplines. I don't think I would be able to do it. I do have a bit of a daring question: isn't there a slightly more fine-grained way to quantify that every nook and cranny of such a problem has indeed been researched? I simply assume that rigorous research by the best minds in the world has happened, but I never see any data on it, not even anecdata.

I mean, I remember a post from Julia Evans about making a Ruby profiler, where she was astonished at how few people were actually working on it [1].

I suspect that in some cases, probably not this one, but in similar theoretical fields, a similar thing might be occurring. And if not, how do we test that? I'm probably not the only one who's curious.

[1] I found a talk of hers in which she emphasizes it:

> So the three myths that I want to start out by talking about are myth one-- to do something new and innovative you need to be an expert-- myth two-- if it were possible and worthwhile, someone would have done it already so you probably shouldn't try-- and three-- if you want to do a new open source project, you need to code a lot on the weekend and your evenings.

https://www.deconstructconf.com/2018/julia-evans-build-impos...


An interesting thought. My understanding is that the library science community has been investigating this for some time in terms of the development of ontologies and libraries to try to categorize research output. Here's a random paper I found on the broader topic: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3129146/

I suspect that this field is vastly under-studied and investigated relative to what it ideally should be.

Generally speaking, it is easier in more theory-driven sub-fields to probe new areas. And indeed it is often rewarded. It's harder when $ is needed for experiments since that becomes more grant process-driven (something which is inherently more risk-averse).

My observation is that usually a few pioneering people push out into a new topic area. Then, whether a community forms around it and starts getting excited about it depends a lot on timing, luck and also resources. Sometimes nothing happens for decades until the stars align and people realize that there's something there.


It's always nice to find out there's someone else on this team:

> I don't program after work or on the weekend, really. I do write blog posts and draw a lot of comics. But I don't program.


> isn't there a slightly more fine-grained way to quantify that every nook and cranny of such a problem has indeed be researched by researchers?

Sometimes there is (e.g., the four color theorem), sometimes there isn't (the rest of math).

> I simply assume that a rigorous research by the best minds of the world has happened, but I never see any data on it, not even anecdata.

Most mathematicians work on areas that interest them, typically alone or with a colleague at another university.

I've never heard of anything systematic involving "the best minds of the world" outside perhaps military projects, though some cooperative research is being done on forums now.

Comparing math and Open Source software development is kind of strange and not helpful.

Anybody can expend a lot of time and effort and successfully write a profiler, if they want to. Few people make a career in math.

If you're not a native English speaker, you might want to get checked for ADHD, since your post wasn't very coherent.


> Anybody can expend a lot of time and effort and successfully write a profiler, if they wanted to. Few people make a career in math.

Anybody can expend a lot of time and effort to write a profiler... but few people make a career of it. Anybody can expend a lot of time and effort on math... but few people make a career of it.

> If you're not a native English speaker, you might want to get checked for ADHD, since your post wasn't very coherent.

That's a strange suggestion to make after reading a single HN comment, especially when you're basing it off of your own subjective interpretation of said comment.

I thought the parent made a coherent point that people may avoid hard problems because of the assumption that 1) they need expertise that they don't have and 2) someone else is already working on it. The question raised was: how, in general, do we verify those assumptions?


> I thought the parent made a coherent point that people may avoid hard problems because of the assumption that 1) they need expertise that they don't have and 2) someone else is already working on it. The question raised was: how, in general, do we verify those assumptions?

This is what I meant. Though, I do remember I was a bit fuzzy on how to phrase things and opted for a conversational style instead.


> Anybody can expend a lot of time and effort to write a profiler... but few people make a career of it.

Correct, you generally don't get paid for writing Open Source on your own time.

What was the silly point you were trying to make, badly?


Solving decades-old math/science problems deserves a name mention in the title.


“To make a knotted object in four-dimensional space, you need a two-dimensional sphere, not a one-dimensional loop.” You just need to assume it's right.


You can't have a knotted one-dimensional loop in four-dimensional space, because it's trivial to untangle it by moving part of it into the fourth dimension.
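
A sketch of that move, for a strand parametrized as \gamma(s) = (x(s), y(s), z(s), 0) passing under another at a crossing: lift it out of the hyperplane with a bump function \phi supported near the crossing,

  \gamma_t(s) = (\, x(s),\, y(s),\, z(s),\, t\,\phi(s) \,), \quad t: 0 \to 1

then slide the lifted arc past the other strand (which stays at w = 0, so they never meet) and drop it back down. That realizes a crossing change, and crossing changes untie every knot.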


Am I the only one who got super excited about the progress bar on quantamagazine.org? It tracks how far into reading the article you are!!

definitely going to steal the idea :)
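For anyone else wanting to steal it, here's a minimal sketch of such a reading-progress indicator (plain TypeScript; the element id and approach are made up for illustration, not Quanta's actual implementation):

  // Illustrative sketch: fill a fixed-position <div id="progress"> bar
  // proportionally to how far the user has scrolled through the page.
  function updateProgress(): void {
    const doc = document.documentElement;
    const scrollable = doc.scrollHeight - doc.clientHeight; // total scrollable pixels
    const fraction = scrollable > 0 ? doc.scrollTop / scrollable : 0;
    const bar = document.getElementById("progress");
    if (bar) bar.style.width = `${(fraction * 100).toFixed(1)}%`;
  }
  window.addEventListener("scroll", updateProgress, { passive: true });
  window.addEventListener("resize", updateProgress);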


In the olden times we called those “scroll bars” and you didn't have to implement them yourself.


Financial Times does it too :)


I see Piccirillo's paper was published in the Annals of Mathematics in February. Anyone know if Conway got to see it before he passed?


What a great story. It’s something we can all learn from.


It's a great story. I'm curious what you think everyone can learn from it, other than "geniuses can occasionally solve longstanding intractable problems very quickly."


My takeaway was more that fresh eyes and expertise in a related field is a powerful problem-solving combination.

She's clearly smart, but I'm reluctant to call anyone a genius if it causes us to view their success as, say, wondrous light from a distant star rather than recognizable human effort.


Not to put too fine a point on it, but that problem resisted years of “fresh eyes,” “related expertise,” and “recognizable human effort,” but she still walked in and clobbered it. Some types of success aren’t very relatable.


Depends on what you mean by fresh eyes. You can have fresh people with old ideas trying the same thing over and over, until an idiot comes along with exotic expertise and solves your problem.

And it happens every day to many of us. Sometimes we are the old-ideas people, having a noob show us how it could be done better and faster, and sometimes we are the ones joining an old group and proposing something new we saw elsewhere, applied to one of their stupidly long-standing problems.

I find her story very relatable and a good reminder that we should never dismiss the noobs in scientific, creative, or high-profit endeavors: they can always bring something we didn't think of by exploring an area we overlooked. As an example, look at the high-speed, youth-driven electronics innovation craze in Shenzhen, often overlooked, where everyone shares everything, everyone and everything is new somehow, and exploration is encouraged (and financially rewarded) more than the production of blueprints.


From TFA:

“Whenever a new invariant comes along, we try to test it against the Conway knot,” Greene said. “It’s just this one stubborn example that, it seems, no matter what invariant you come up with, it won’t tell you whether or not the thing is slice.”

The Conway knot “sits at the intersection of the blind spots” of these different tools, Piccirillo said.

One mathematician, Mark Hughes of Brigham Young University, created a neural network that uses knot invariants and other information to make predictions about features such as sliceness. For most knots, the network makes clear predictions. But its guess about whether the Conway knot is smoothly slice? Fifty-fifty.

“Over time it stood out as the knot that we couldn’t handle,” Livingston said.

Not my area, but if the article is to be believed, pretty much every new related technique gets thrown at this knot as a matter of course. That's what I mean by "fresh eyes."


Outstanding work.

Let us dub Lisa Piccirillo a "Space Age Bo's'n Mate".

https://en.wikipedia.org/wiki/Boatswain


Too bad Conway didn't live long enough to see this.


If I'm not mistaken, it says her findings were published in Annals in February, and Conway passed in April.


The paper is from 2018 so he might have seen it.


rip conway



