P vs NP: The most important unsolved problem in computer science (scientificamerican.com)
36 points by Anon84 10 months ago | 80 comments



This article has the best lay-friendly explanation of what P vs NP means that I've seen.


I think this video does a much better job. It places the problem in the historical context of computer science and explains why it's an important question. It's also more in depth while still being very short and approachable.

I highly recommend the channel if you're interested in information theory. It's just starting to publish new videos after a very long hiatus.

https://youtu.be/rK3xcRYLWeg


It's good purely for being lay-friendly, but I really wish they didn't labour their specific description so much, because while it's an intuitive explanation, it's also still actually wrong.

Also, if you're going to write an article that long about P and NP then, as the other commenter mentioned, it would be nice to mention the words "polynomial" and "non-deterministic".


Non-deterministic should be explained more often, because its meaning in computer science is non-intuitive.


> still actually wrong

Can you be specific about where it's wrong? For the layperson?


And yet it never uses the word "polynomial".


It uses "efficient" everywhere, only alluding to polynomial-time here:

> Theoretical computer scientists use a technical definition for “efficient” that can be debated, but it serves as a useful proxy for the colloquial concept.

The idea that “polynomial-time” and “efficient” are reasonable proxies for each other is the Cobham–Edmonds thesis: https://en.wikipedia.org/wiki/Cobham%27s_thesis


I don't see how this statement is useful, because if an algorithm is "tractable" iff it is poly-time, then n^10 algorithms would be considered "tractable". I think the real reason we care about P in a CS-theoretic sense is that it's a class closed under various operations, which allows us to talk about things in an implementation-oblivious fashion.

In practice, in the real world, we care only about low-degree P (usually degree 3 at most), are really interested in linear and n log n, and care about constant factors.

There are also quasi-polynomial-time algorithms (roughly 2^(O((log n)^c)) time), which are interesting because they straddle the line between P and EXPTIME.


Is it possible that there's something about the nature of computation that makes it so that P = NP / P != NP is simply unprovable/unknowable?


Yes, it's possible that P vs NP is actually undecidable.


If P=NP, could the question still be undecidable? As in, could there be an algorithm for 3SAT that, in practice, happens to be polynomial-time and correct, but there's no way to prove the algorithm's correctness and/or time complexity under any system of formal logic?


Yes, it seems that the state of the art admits this as a possibility (though it seems surprising, and you can make a hand-wavy argument dismissing it, as another poster does downthread). E.g. this survey by Scott Aaronson:

https://www.scottaaronson.com/papers/pnp.pdf#page=26


I don't think we know enough about how computation relates to proof finding to be able to say. While (sufficiently well specified) proof verification is pretty clearly a computation, creating a new proof is less clear. Given that the space of possible proofs is infinite, I don't think a brute-force approach is possible even for a nondeterministic algorithm, so I don't think that, even if P=NP were the truth, we would have a clear reason to expect to find a proof of this.


It's not impossible to prove P==NP if that is the case, as the program itself would be the proof. Also, if we assume "if P==NP then there exists a program of less than a trillion lines", then P!=NP is provable if that is the case.


Exhibiting a program that solves an NP-complete problem and happens to terminate in polynomial time isn't sufficient to prove P==NP. You also need to actually prove that the program terminates in polynomial time.

And I could easily imagine that the general problem "does program P(X) terminate in time less than f(X)" is equivalent to the halting problem, but in any case it's not at all guaranteed you could create such a proof.


The program isn't the proof. You have to prove that the program always gives the correct answer and works inside some time bound. What if, say, it generates the Collatz sequence as a subroutine? Good luck proving that the program is polynomial when we can't even prove that that sequence always converges.


I disagree that this is an important problem. Even in the unlikely case that some NP-hard problem turns out to be in P, it may be completely infeasible to compute on modern hardware. I'd wager that would almost certainly be the case if any such algorithm exists at all.

I am not aware of any practical use of P=NP outside of pure complexity theory, and the P!=NP case is already a ubiquitous assumption anyway. P=NP would be a notable result, but probably not important.


When people call something the most important problem in a theoretical discipline, it's usually not because of practical applications.

That said, there are definitely potential practical implications for this. Even if it means we can know NP problems do not have efficient solutions - that is something with practical implications.


Even a P=NP result doesn't tell us that NP problems have efficient solutions. That depends entirely on

- if the solution is constructive,

- if the asymptotic solution has good constants (lower bound on input size, highest degree term), and

- having no other mitigating factors (requiring absurd amounts of space, for example)

The idea that P means "efficient" is a misconception. For simple algorithms, asymptotic analysis is a fine enough coarse-grained view of program performance. However, for extreme cases (as may be the case for P=NP), asymptotic analysis tells you almost nothing at all.


> Even a P=NP result doesn't tell us that NP problems have efficient solutions.

Yes it does. That is literally exactly what it means. The class P is the class of problems which are considered theoretically "tractable"/"efficiently solvable"/"feasibly solvable" (Cobham-Edmonds thesis). Hence, if NP=P, then that same definition extends to all problems in NP.


There are very obviously algorithms in P which are not "efficient". For example, an algorithm in O(n^(10^10^10)) is not efficient in any reasonable sense. It is in fact much, much, much less efficient than an O(e^n) algorithm for any n low enough that the whole thing would finish in under a year or so.

In practical terms, the class of efficient algorithms is probably at most O(n^3), and even then assuming no huge constant factors.
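
To make that concrete, here's a rough back-of-the-envelope sketch (my own illustration, not from the thread) in Python, comparing a hypothetical n^100-step "polynomial" algorithm against a 2^n-step exponential one:

  # Find the smallest n at which a hypothetical n^100-step algorithm
  # actually becomes cheaper than a 2^n-step one.
  def crossover(exponent=100):
      n = 2
      while n ** exponent >= 2 ** n:  # below this point the exponential is cheaper
          n += 1
      return n

  print(crossover())  # ~997: for smaller inputs, 2^n needs fewer steps than n^100

So even an n^100 algorithm only "wins" once inputs reach roughly a thousand elements, by which point both algorithms need vastly more steps than any machine could ever perform.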


I am not quite mathematician enough to find out, but I would love to know the relationship between the constant factor and the exponent in matrix multiplication research papers.


P vs NP is not an "in practical terms" question. It is a theoretical question with theoretical definitions of theoretical terms, including "efficient", which directly corresponds to the class P by definition.


Ok. When I say efficient, I mean "produces efficient code on near-term hardware". I understand that complexity theorists have a different definition of "efficient" - they also have a different definition of "important".


The question being asked was "what would proving P=NP mean for us in practical terms". The fact that mathematicians call all polynomial-time algorithms efficient is irrelevant to this question.


> The question being asked

Where was that question asked?


It wasn't exactly a question, but the thread started by discussing practical implications:

> That said, there are definitely potential practical implications for this. Even if it means we can know NP problems do not have efficient solutions [emphasis mine]

So, this was about efficiency in the practical sense, not some largely useless definition of efficiency by which galactic algorithms are "efficient".


That's not the Cobham-Edmonds thesis. The assertion is that problems are feasible _only if_ they are in P, not _if and only if_ they are in P.


Sorry, but no. Edmonds clearly maps "efficiently-solvable problems" onto the notion of P in a complete way, not only as a subset of P.


I chased some links from Wikipedia and you're right that Edmonds uses "efficiently solvable" to mean P. However, he does not take "efficiently solvable" to mean "feasible" for basically the same reasons as I've said in this thread. From section 2 of Edmonds' "Paths, Trees, and Flowers" (1965):

> An explanation is due on the use of the words "efficient algorithm." First, what I present is a conceptual description of an algorithm and not a particular formalized algorithm or "code." For practical purposes computational details are vital. However, my purpose is only to show as attractively as I can that there is an efficient algorithm. According to the dictionary, "efficient" means "adequate in operation or performance." This is roughly the meaning I want—in the sense that it is conceivable for maximum matching to have no efficient algorithm. Perhaps a better word is "good."

> It is by no means obvious whether or not there exists an algorithm whose difficulty increases only algebraically with the size of the graph. The mathematical significance of this paper rests largely on the assumption that the two preceding sentences have mathematical meaning.

> ...

> When the measure of problem-size is reasonable and when the sizes assume values arbitrarily large, an asymptotic estimate of FA(N) (let us call it the order of difficulty of algorithm A) is theoretically important. It cannot be rigged by making the algorithm artificially difficult for smaller sizes. It is one criterion showing how good the algorithm is—not merely in comparison with other given algorithms for the same class of problems, but also on the whole how good in comparison with itself. There are, of course, other equally valuable criteria. And in practice this one is rough, one reason being that the size of a problem which would ever be considered is bounded.

You should read the rest of section 2; it's short and very clear. Calling P the class of "efficiently solvable" problems is a completely reasonable framing for this paper, considering that this was written at a time when we had fewer tools to formally compare algorithms. Edmonds correctly does not claim that all P algorithms are feasible, and my opinion that P=NP is not important is based on the 60 years of feasible non-P computations we've had since.


I think even a negative solution could have practical implications. For starters, you would now know for sure the problem can't be solved efficiently, which means you can stop looking and focus on other things. Possibly you can use it in situations where you need a hard problem, like in crypto (although there is more to it than that, since just because a problem is NP-hard doesn't mean that a specific instance is hard).

I am not a complexity theorist, but I've always doubted the n^1000 solution to P=NP being likely. Think about it - that essentially means you have to loop through the data 1000 times, but no more than that, no matter how much data. It just doesn't seem natural that looping 999 times wouldn't be enough, but 1000 would be precisely enough. It's a sort of middle ground that seems like it would be extremely odd to me. Much more so than P=NP with a low exponent, or P!=NP.


Apologies for nitpicking, but n^m doesn't mean m loops (i.e. n^1000 doesn't mean 1000 passes): that would be m·n (1000n in your example). I think your intuition argument kind of breaks down there.


I meant to say 1000 nested loops. You're right to call me out on it though, as that is a huge difference in meaning.

I still think it would be bizarre to have to nest loops k layers deep, where k-1 layers is not enough but k layers is sufficient, for really large k. Like, what extra info would you get on the 1000th nested loop that you didn't get from the first 999?

Of course there is nothing formal about this; it just feels like it would be wrong to me (and hence, personally, I would consider it the most interesting result). Of course, gut feelings don't really count for much and I have nothing more than that.

I suppose my intuition is that increasing the exponent gives diminishing returns in how much more power you really get, so it doesn't make sense for problems to be in the n^1000 range. My gut feeling is they should either be easier or harder. I certainly can't think of very many non-exponential algorithms in the > n^50 range.


Not 1000 iterations of 1 loop, but 1000 loops nested inside each other.
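
A minimal sketch of that distinction (my own illustration, with made-up step counters): m sequential passes over n items cost m·n steps, while m nested loops cost n^m steps.

  def sequential_passes(n, m):
      steps = 0
      for _ in range(m):        # m passes, one after another
          for _ in range(n):
              steps += 1
      return steps              # m * n

  def nested_loops(n, m, steps=0):
      if m == 0:
          return steps + 1      # one unit of work at the innermost level
      for _ in range(n):
          steps = nested_loops(n, m - 1, steps)
      return steps              # n ** m

  print(sequential_passes(10, 3))  # 30
  print(nested_loops(10, 3))       # 1000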


> - having no other mitigating factors (requiring absurd amounts of space, for example)

I think you can't require absurd amounts of space, because you only have polynomially many steps in which to access that space, so the space used is bounded by a polynomial anyway. (Memory you can't access can't help you.)


Agreed. We have problems that we have polynomial algorithms for but we don’t use them because their exponential counterparts are orders of magnitude faster on average.


This sounds absolutely wrong to me. (Though note - I'm not a complexity theorist.)

For one thing, a proof of P!=NP (which is what most people assume) would almost certainly provide us with a lot of new math and insight. So yes, there won't necessarily be anything new practically just from the result P!=NP, because this is what everyone assumes, but it's likely to yield lots of new math.

As for the case where P=NP - that's almost certainly a game-changer, especially if it is proved constructively, e.g. by exhibiting a polynomial-time algorithm for an NP-complete problem. While such an algorithm may not be immediately practical on current hardware, at that point we'd almost certainly pour a lot of time and effort into making it more feasible.

What CS problem do you think is more important?


I am a theorist (though not in complexity), so I agree with you that new math is never bad. I also agree that a very lucky P=NP result _may_ be implementable _at some point_ in the future, though to be honest I wouldn't put my money on that happening.

As for more important problems-- I think it's more important to improve the specializations of NP problems, heuristically, for problem instances which arise in practice. A concrete example in my line of work could be improving the performance of SMT solvers on encodings of real programs. There's a lot of exciting work happening in this area and it's opening doors in program verification previously thought to be unrealistic. IIUC the formal methods team at AWS is putting a lot of work into memoized, distributed SMT solving, and are making meaningful gains over the current state of the art.

I don't really care if we can solve the most general NP-hard problems in O(bad*N^bad) only for N>bad. A) Even getting a result that weak seems to be too challenging to prove, and it's probably not true anyway, and B) trying to solve the most general problem is complete overkill for any problems that come up outside a complexity textbook.


OK, fair enough. My intuition is that we will "get lucky" in developing new math for proving P!=NP, but that's just an intuition and I'm not a theorist, so you probably have a better sense for this than I do.


While this may not be a practically important problem, I can't readily come up with a more important problem in computer science off the top of my head.


Isn't the answer obvious because of entropy? It's easier to create disorder but much more difficult to create order.


No. This is entirely irrelevant to the problem of P vs NP.


It should say the most unimportant. Is there anything practical to gain by solving it?


Proving the equality would be huge.

But nobody seriously believes that to be the case. There are no practical consequences of proving something generally assumed to be true. Possibly the proof itself would be enlightening, but without seeing it we cannot guess how.

Proving the inequality would reduce the time we spend dealing with cranks, but that's about it.


Would answering this question have a real impact on daily life?


Probably only if P=NP AND if the P algorithms have small polynomials.

It's possible that problems which today we only know how to solve with exponential algorithms are in fact solvable with algorithms in O(n^1000000) or whatever, in which case nothing actually changes - for all practically relevant input sizes, the exponential algorithm is actually faster.

However, if someone found an O(n^2) algorithm that solves, say, SAT, that would be a major change in our daily lives - there are numerous problems that are completely intractable today that would become quite trivial.


I'm struggling to see how a proof of P=NP really makes any difference, though - other than psychologically, in the minds of people working on solving problems.

It doesn't have any real practical use for people actually building empirically faster implementations of useful algorithms (like SAT, as you mention).

I should have been clearer in stating my premise: it appears that determining that P=NP itself would have no real direct impact on any existing real-life problems; in nearly all cases, "faster algorithms that impact life" come from a combination of theoretical computer science discoveries combined with engineering (partly CS engineering, partly hardware engineering).

I'd love to find a real counterexample, i.e. something in materials science, or logistics/optimization, that would make a difference to the world economy. I see CS folks get really excited about this, but oftentimes as I poke and pull on the facts, I find that the more theoretical computational complexity gets, the less impact it has on daily activities.


I think I see where you're coming from, at least somewhat. Any revolution would come from finding a fast algorithm for an interesting problem, which would also happen to prove P=NP, but that wouldn't be the important part. The algorithm itself would be.


An O(N^2) algorithm could be useless if it's a galactic algorithm: https://en.m.wikipedia.org/wiki/Galactic_algorithm

Conversely, theoretically an O(N^100) algorithm could be practical if it's ~N^2 in practice except on very rare instances. Remember, big-O is an upper bound.


Sure, there are even more caveats.

I should note, though, that big-O is not technically an upper bound on program execution time (worst-case scenario); you can use big-O to refer to the best-case scenario or the average case. Big-O is a notation for the upper bound of a function in general (e.g. f(x) = x^2 + 70x + 9 is in O(x^2)).

I think it's reasonable in this kind of case to assume we are speaking about the average-case complexity, though it's true that many think of the worst case, taking the upper bound of the upper bound of the number of execution steps.
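
A tiny concrete check of that example (my addition): the constant C = 80 works for all x >= 1, since 70x <= 70x^2 and 9 <= 9x^2.

  f = lambda x: x**2 + 70*x + 9
  # f is in O(x^2) because f(x) <= 80 * x^2 for every x >= 1
  assert all(f(x) <= 80 * x**2 for x in range(1, 10_000))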


Yes, if a polynomial algorithm is found for a single NP-complete problem, then any other problem in NP can be converted to it and solved in polynomial time.
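
A conceptual sketch of what that composition would look like (the names reduce_to_sat, hypothetical_poly_sat and decode_solution are placeholders I've made up for illustration, not real library calls):

  # Placeholder functions only: this sketches the shape of the argument,
  # assuming SAT is the NP-complete problem with the hypothetical P solver.
  def solve_any_np_problem(instance):
      formula = reduce_to_sat(instance)             # known polynomial-time reduction
      assignment = hypothetical_poly_sat(formula)   # the assumed polynomial-time SAT solver
      return decode_solution(instance, assignment)  # polynomial-time decoding

Since each step is polynomial and polynomials compose, the whole pipeline stays polynomial.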


I think this is what’s often missing when this topic is quickly presented. And, it makes the whole thing so much easier to understand. If someone were to prove P = NP by showing how to do it efficiently, that would make many computer problems easier to solve. Even problems like “guessing a password”, and that has obvious implications.

If we just knew P = NP because a trusted oracle said so, or someone proved it without showing how to do the conversion, or the conversion was impossibly laborious (a constant 2^100 steps), then that changes much less. If someone proved P ≠ NP, that would change the least, since that's how we operate today.

It’s not about getting the answer. It’s about what getting the answer suggests.


But are you sure that would actually make any practical difference for any existing problems in the near term?


If P!=NP, probably nothing changes.

If P=NP, a very large class of optimization problems probably become practical to solve, which would be a big win for science and technology generally, to the extent that I think it would be felt in daily life.

Edit to add: and also as another commenter mentioned, most existing encryption would likely be much easier to break, which would also be felt.


Yes, in the sense that if P = NP, it would conceptually be a lot easier in theory to break a lot of public key encryption, which rests on problems in NP that are believed to be hard (though not known to be NP-hard).


The only impact would be the slew of blog articles appearing on Hacker News.


Depends on the answer.


Sounds like a great task for AI to prove as true, false, or unknowable


At this juncture there's no reason to think (quite the opposite really) that LLMs are any better at the kind of sophisticated reasoning presumably necessary to prove something like this than human computer scientists are.


There’s more to AI and deep learning than LLMs


There is no reason to think any of the other approaches would help either.


Speaking of which, anybody have advice on how to simplify section 2 of this paper and can you please tell me how I’m fooling myself? https://drive.google.com/file/d/1ugJ9czd4EPUPlQO2uGQgsfc6EHW...


At least in section 2, the biggest way you're deluding yourself is in assuming that you can extract a small number of candidate solutions from the problem. If you actually work on any of the NP-complete problems, you'll quickly realize that they have huge numbers of candidate solutions which are wrong. So, to find a true solution, you have to apply the solution-verification algorithm (the one in P) a great many times, leading to all known algorithms being super-polynomial.

More generally, you seem to think that problems in P and those in NP are considered unrelated. In fact, for all of the NP-complete problems (SAT, Travelling Salesman, knapsack, etc.) verifying a single candidate solution is a problem in P. However, the best known algorithms for solving the full problem essentially iterate over candidate solutions and run the P algorithm for each - which is exponential time or worse for all of these problems.

To find a P algorithm that solves one of the NP-complete problems, you need to find an idea that is better than just trying solutions to see if they fit.
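
To illustrate the point (a minimal sketch of my own, not from the comment): the checker below runs in time polynomial in the formula size, but the only obvious way to find a satisfying assignment is to enumerate all 2^n of them.

  from itertools import product

  def check(formula, assignment):
      # formula: list of clauses; each clause is a list of ints, negative = NOT
      return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                 for clause in formula)                  # polynomial-time verification

  def brute_force_sat(formula, n_vars):
      for bits in product([False, True], repeat=n_vars): # 2^n candidate assignments
          assignment = {i + 1: b for i, b in enumerate(bits)}
          if check(formula, assignment):
              return assignment
      return None

  print(brute_force_sat([[1, -2, 3], [-1, 2, -3]], 3))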


Thank you!


With all due respect: go study.

P, NP, 3-SAT and all the other terms you are using have formal mathematical definitions -- you can't just make stuff up.

It's complete nonsense.


Wow, the terms don't just have the wrong formal definitions, but also the wrong intuitive/informal definitions.


Which ones?


The definitions of P, NP and polynomial time seem to be wrong.


It's too incoherent to really understand, but as far as I can tell, I think it's a long-winded way to beg the question.


Can you be more specific? Sorry, I don’t understand how I failed to make an argument, and I certainly didn’t mean for this to be incoherent!


I'm sorry, "incoherent" was a little mean. I do find it very hard to follow, though.

I think the main problem is that you are using terms wrong. When I was reading, I was substituting correct meanings in, and then getting confused when nothing you said made sense. Some examples:

* using incorrect/misleading definitions of terms, e.g. consider the first sentence of section 2:

> The difference between “p” and “np” hinges on the difference between solving problems without solutions, and checking if given solutions solve a problem

There is a certain sense in which this is true, but it's very misleading because it doesn't mention time. The amount of time taken is key here. If you don't account for that, there is no difference between NP and P.

As another example

> Therefore, it is almost surely possible to convert “np” problems into “p” problems: generate candidate solutions from the problem itself, then check them.

Suggests you don't know what NP means, since (speaking informally) one of the definitions of NP is the set of problems that could be solved that way in polynomial time if you had a computer that was good at guessing.

> The antithesis to “p != np” is “p equals np” which implies all “np” problems are exactly equally easy to solve as all “p” problems

This is just a straight-up false statement. P=NP means that every NP problem is also in P, i.e. solvable in polynomial time. It doesn't mean they are all exactly equally easy.

> We generate secret keys using problems we believe are easy to check, but hard to solve (“np”), because we support the thesis “p != np”

RSA is not known to be NP-complete. P!=NP does not imply RSA is secure.

-----

Anyway, I think fundamentally you basically redefined NP to be something that is equal to P, and used circular logic to prove that it was (e.g. you talk about generating a "category" of simple correct solutions to an NP problem, and just presume you can do that; well, obviously, if you had a polynomial-time algorithm to do that, then P=NP, because the entire question is whether it is possible to do such a thing).


Thank you for the clarification, I will definitely consider these points.


I'm going to thoroughly trash-talk your work, but please keep in mind: just because you've produced something nigh-on-worthless, that doesn't mean you are worthless. On top of that, you've definitely got some skills, like writing clearly. (The ideas are nonsense, but I understand why they're nonsense, because you've presented your understanding very well indeed.)

> Abstract

Showing that all problems in NP are equivalent to problems in P is proving P = NP… but your abstract acts like that's a different thing. It's clear you don't even understand the question you're trying to answer. Most people would stop here.

Most people would stop here, because they have found that there are certain people who you cannot convince of their own work's invalidity. Because replying means you will be sent papers to "proof-read" for the rest of the crank's life. I'm going to try putting exercises-for-the-reader in this criticism, and see if that helps, but I haven't got my hopes up.

There is no "middle road". Either NP-Complete ⊆ P (the hardest problems in NP can be solved by algorithms in P), in which case P = NP; or P ∩ NP-Complete = ∅ (there is no P algorithm that can solve any NP-Complete problem). This is because all NP-Complete problems are equivalent to each other.

Exercise: reduce the clique problem to 3-SAT.

> Math IS Chemistry

Sounds like you're vaguely thinking of https://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_... at a surface level, except all of the details are wrong. That diagram is nonsense.

Exercise: calculate the minimum work required, in joules, to zero a 64-bit register with unknown contents at 20°C.

> Introduction

Equivalence relations have nothing to do with metrics. We are thinking clearly about the problem, and you aren't.

Exercise: reach the Algorithm world of the Lean4 Natural Number Game: https://adam.math.hhu.de/

> Words

That's not polynomial time: that's quadratic time. Polynomial time is O(n^k) for some k. ≠ means "is not completely the same": completely different (for sets) can be written "A ∩ B = ∅".

You're not even attempting to solve "P=NP?": you're tackling a different problem. There's no problem with that: the problem you've attempted should only take you about a year of proper study to learn how to solve, and I'd actually be interested to see you give a correct explanation. (You might find an answer more quickly than that.) The question is “Are there any quadratic-time algorithms in the class NP?”, and the correct explanation is about 3±2 sentences long.

Exercise: look up the standard meanings of the words you've used. (Where you've said 'category', you mean 'class'; and 'stuff' should be 'physical object' or 'body'.)

Exercise: derive Bayes' theorem.

> 1. An argument of universal freedom […] because the universe would not exist.

You have no reason to believe any of this: it's just baseless speculation.

Exercise: read The Open Society and Its Enemies, by Karl Popper. A 1945 book, published in two volumes (though there are combined editions). If it's not in your local library, you can borrow it from the Internet Archive: (1) https://archive.org/details/in.ernet.dli.2015.59272 and (2) https://archive.org/details/in.ernet.dli.2015.77661

> What is the opposite of waste? Synthesis. A universe must generate stuff.

This really doesn't follow from anything you've said. It doesn't even have a clear meaning.

Exercise: practice deleting passages from your work. (If you can't bring yourself to do that, cutting-and-pasting them to a discard file is also acceptable, so long as you don't try to salvage them later.) Kill your darlings.

> Why? […] arrow of time.

Not entirely wrong! This whole tangent has nothing to do with P=NP, but it's kinda almost right!

Exercise: find an example of a physical conservation law, and learn why people believe it's a conserved quantity.

Exercise: find a physicist who thinks magnetic monopoles can exist, and a physicist who thinks they can't exist. Learn what their reasoning is. (Don't actually ask them directly: I mean find someone who's explained their reasoning already, and read what they wrote.) Think back on what you learned from The Open Society and Its Enemies: how does this relate to that?

> Would it be possible for a universe to spontaneously invert encryption, simply to increase freedom? I would argue, yes.

Exercise: find out what a "macrostate" is. Consider a discrete 52-card deck of cards, and a uniform shuffling procedure. Calculate the probability of the macrostate transitions (sorted → sorted, sorted → unsorted, unsorted → sorted, unsorted → unsorted). Consider the evolution of the system for the first five discrete time steps, starting at a sorted configuration.

> Is a "hack" […] Kinetics.

The mistakes you're making here should already be addressed by the earlier corrections.

Exercise: explain why what you've written here is wrong.

Exercise: learn what cybersecurity is.

> 2. An argument that “solve” is equivalent to “check”

First sentence is wrong. Third sentence (about “undecidable”) actually looks like computer science: I think your understanding here is correct.

> Therefore, it is almost surely possible to convert “np” problems into “p” problems: generate candidate solutions from the problem itself, then check them.

You can do this. However, you'll need to try exponentially-many candidate solutions, so the overall algorithm takes exponential time.

Exercise: calculate the asymptotic time complexity of bubble sort, and deterministic bogosort. Compare the two.

> For example, in 3-sat […] generate the candidate and check it.

The interesting part about 3-SAT – the bit that makes it NP-hard – is the fact that you can't, actually, do that. You can't iteratively go from an incorrect candidate solution to a less incorrect solution: the candidate solution might be wrong in a fundamental way, partly correct only by accident, with no way to improve it.

Exercise: understand that this is a metaphor for your paper. You can't make this paper correct by making iterative changes to it, because it's so fundamentally wrong that it can't ever be made correct.

> existing SAT solvers

… aren't in P. So you can't use them as part of a polynomial-time algorithm. Not even if you use a three-valued logic version of the SAT-solving algorithms: that doesn't change how fast they run, or really anything at all (since the algorithms, being two-valued logic algorithms, would just be ignoring one of the values of your three-valued logic).

> 3-Sat Problem = (-1, 2, 3), (-1, -2, 3) => Solution to Check = {1: False, 2: Maybe, 3: True}

This looks like the very beginnings of a heuristic (symbolic) 3-SAT solver. It's not polynomial-time, but it'd be slightly faster than the naïve brute-force approach. That's something to be proud of rediscovering!

Exercise: implement a naïve 3-SAT solver (tries all the possibilities), then add a heuristic to make it a bit faster.

> […] the solutions must share structure […] This means we can save incremental progress to speed up checks over time.

I can see why you might think that (I used to think that as well), but it isn't true.

Exercise: find a 3-SAT formula with two solutions that don't share structure. You probably changed your mind about what "structure" is half-way through, so now find a 3-SAT formula with two solutions that don't share your new definition of structure.

> However, if we truly understand our problem, […] shrinks the category of solutions dramatically.

Yes… if we just look at the first and last bit (the bit in the middle, about "Maybe", is wrong). And doing that is the hard part: we're trying to measure how long this process takes, when we're measuring the time complexity of an algorithm to solve a problem.

Exercise: try to implement a 3-SAT solver that works the way you describe in the middle bit, and see why it doesn't work.

> 3. An argument for universal search

This whole section is completely wrong.

Exercise: write an algorithm in P that verifies a solution to the Clique problem. (Finding a solution is not known to be doable in P, in the general case.)

Exercise: write an algorithm in P that finds a solution to some special-cases of the Clique problem. Show that it cannot solve all cases in P.

> Summary

Pretty good summary of your arguments. If you figure out how to stop writing nonsense, you might make a decent academic.

> Implications

Exercise: research the Principle of Explosion.

> References

I see you've got a Category Theory reference. You do not understand Category Theory, and you will not be ready to learn about Category Theory for a very very long time, if ever. Most pure mathematicians never learn category theory. It's got nothing to do with your "category" (which is usually called "class").

Other than that, this section is not bad!

Exercise: learn a convention for including inline citations. (optional)

Exercise: why are citations not usually URLs?

(But yeah, overall, this paper is worthless. I spent entirely too long reviewing it. I can see you're passionate, but it wasn't worth me reading it, and I knew that from the start.)


What a treasure trove! Thank you for the feedback.


No offense, but your paper and website are giving off serious Elizabeth Holmes vibes.


Dang, that hurts, bro, geez - no specific technical points about the paper?


What are you trying to do?

Also, it's funny this topic came up, since I just watched this a couple of days ago - https://www.youtube.com/watch?v=pQsdygaYcE4&t=1039s


Ah, the website is old. I wanted/want to make an exponentially more affordable silicon wafer foundry by using cheaper structures, and I'm interested in R2R processing.

I updated the link on my bio to point to a simple code project instead. Thank you for pointing out the website was out of date and overly ambitious.


> - “p” - a category of problems which can be solved in O(n²)

> - “np” - a category of problems which can be checked, but not solved, in O(n²)

These definitions are incorrect. Certainly there are other polynomials than just n². P is the complexity class wherein the problem can be solved on a Turing machine in a number of steps bounded by some polynomial in the size of the input. For NP, replace "Turing machine" with "nondeterministic Turing machine" and otherwise it's the same. The 'size of the input' is the count of symbols on the machine's tape, allowing for the input to be written in any finite alphabet, chosen by the designer of the Turing machine.

Nondeterministic can mean different things in different contexts, so I want to clarify. Here, nondeterministic means that the machine does the 'impossible' thing of forking itself and running an unbounded number of copies of the deterministic machine (including distinct copies of their state) at the same time; then, if one of the computations succeeds, that computation is kept and the others are discarded (including the count of how many steps they spent). This is why, if you have a machine that can check a problem in P, you can make a machine that solves it in NP by wrapping your checker in "use nondeterminism to generate every possible candidate solution" and simultaneously checking them all.
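
A minimal sketch of that "guess a candidate, then check it" pattern (my addition, using subset-sum as the example): the checker runs in polynomial time, but a deterministic machine simulating the nondeterministic "guess" has to try all 2^n subsets.

  from itertools import combinations

  def check_subset(subset, target):
      return sum(subset) == target                  # polynomial-time check

  def deterministic_search(numbers, target):
      for r in range(len(numbers) + 1):             # simulate the "guess" by
          for subset in combinations(numbers, r):   # trying all 2^n subsets
              if check_subset(subset, target):
                  return subset
      return None

  print(deterministic_search([3, 34, 4, 12, 5, 2], 9))  # (4, 5)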

I highly recommend the theory of computation course at MIT, which is freely available online:

  https://ocw.mit.edu/courses/18-404j-theory-of-computation-fall-2020/
  https://www.youtube.com/playlist?list=PLidiQIHRzpXIFFbyGrWkqXXVj0BztDcTF
I found the course very approachable.

> All information necessary to generate checkable solutions to any problem must be encoded in the problem itself, or the problem would be undecidable, because there would be no way to check a solution against a problem if the problem and solution were completely unrelated.

This is actually a great insight, but you must keep in mind that if I tell you that the solution to "a > b" is True, you cannot find a and b. You are touching on an important concept in computer science theory called "reversible computing". It's also related to information theory. If I have a machine that solves the travelling salesman problem that outputs "the fastest route is from B to C to A", then you must ask yourself whether there are multiple different inputs to the problem which can generate that output? (You don't always have the freedom to change the input type of the problem, or else I could factor integers in constant time by demanding that all integers must be input as a product of primes.)

When one value is computable from the other, the question remains how many steps it takes to do so. You still have to show that this is bounded above by some polynomial, and then that this is true for all problems in NP. We already have polynomial-time ways to rewrite any NP problem into an NP-complete one, so exhibiting a single machine that takes polynomially many steps to solve one NP-complete problem would suffice to show that P = NP. This is how we've proven lots of complexity classes to be the same in the past. On the other hand, we have almost no techniques which prove that two complexity classes are distinct.


Thank you!



