Hacker News

Yes, of course you don't; I think you misunderstood my comment. The comment I replied to was asking about true brute-force enumeration of all boards -- not a solver that uses inferences based on the hints -- he guessed that you only need 9! * 9! total attempts, which implies enumerating all boards and picking the one that matches the initial hints. This is in fact a constant-time solution like he guessed, but it is also intractable.

A normal "very simple" solver of the kind you're talking about will solve the puzzle very quickly, in seconds or less, and it will also not be a constant-time algorithm.
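For what it's worth, a minimal sketch of that kind of "very simple" solver -- plain backtracking, names and structure purely illustrative, not any particular implementation:

```python
def valid(board, r, c, d):
    """Check whether digit d can be placed at (r, c) on a 9x9 board (0 = empty)."""
    if d in board[r]:
        return False
    if any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(board):
    """Fill the first empty cell with each legal digit in turn; backtrack on dead ends."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if valid(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cells left: solved
```

On typical published puzzles this finishes in well under a second, but its running time varies with the puzzle -- which is the point being made above.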




9!*9! is just a little over 131 billion boards. Checking correctness (no number is repeated in any row or column) is as simple as counting a sum in all boxes, rows and columns.
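A quick sketch of such a full-board check (illustrative only). One caveat: a sum alone isn't quite sufficient, since digits with repeats can also total 45, so this version compares each row, column, and box against the set {1..9}:

```python
DIGITS = set(range(1, 10))

def is_valid_solution(board):
    """True iff every row, column, and 3x3 box of a filled 9x9 board
    contains each digit 1-9 exactly once."""
    rows = board
    cols = [[board[r][c] for r in range(9)] for c in range(9)]
    boxes = [[board[3 * br + i][3 * bc + j]
              for i in range(3) for j in range(3)]
             for br in range(3) for bc in range(3)]
    return all(set(unit) == DIGITS for unit in rows + cols + boxes)
```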

Generating all possible boards is a little more complex, but I can hardly see how it could take "eons" -- with a proper implementation (Apache Spark :P ) and reasonably powerful hardware (a 1000-CPU-core cluster, ha! ha! ha!) it should run in under a day and take just a little over 13 TB of disk/RAM space :).


9! * 9! is not the number of boards; that is a bad estimate. If you don't account for symmetries, rotations, re-numberings and other things, the number of boards is about 6.7e21. Even if you could check a full board in a nanosecond (which you can't), enumerating that number would take more than 200,000 CPU-years.
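The arithmetic behind that figure, as a back-of-the-envelope check (the exact grid count is the known Felgenhauer-Jarvis number; the one-nanosecond-per-board rate is the generous assumption from above):

```python
# Number of completed 9x9 Sudoku grids (Felgenhauer & Jarvis, 2005).
grids = 6_670_903_752_021_072_936_960

seconds = grids / 1e9                    # one board checked per nanosecond
years = seconds / (365.25 * 24 * 3600)   # seconds per Julian year
print(f"{years:,.0f} cpu-years")         # on the order of 200,000
```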

The paper linked to in the OP's article says: "Due to the sheer number of sudoku solution grids a brute force search would have been infeasible, but we found a better approach to make this project possible. Our software for exhaustively searching through a completed sudoku grid, named checker, was originally released in 2006. However, this first version was rather slow. Indeed, the paper [1] estimates that our original checker of late 2006 would take over 300,000 processor-years in order to search every sudoku grid."

https://arxiv.org/pdf/1201.0749.pdf

The estimate in the comment (9!*9!) seems to be implying a simple enumeration, not a complex strategy of symmetry-folding. But even if you do reduce the enumeration, the authors of that paper say their software requires 800 CPU years. I'm not making any claims about whether getting that down to a day might be possible, but I wish you good luck. By all means, show everyone how to do it with a proper implementation and a large cluster! ;)


O(1) means bounded by a constant upper bound, not that every actual run of the program takes the exact same amount of time. Just like "simple quick sort is O(n^2)" does not mean that every invocation will take a quadratic amount of time. The parent's "can be solved in constant time" means the same thing, informally.
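The quicksort point is easy to make concrete. A small sketch (names hypothetical) counting comparisons in a naive first-element-pivot quicksort -- already-sorted input hits the quadratic worst case, while a typical permutation needs far fewer comparisons, even though both are "O(n^2)" runs of the same algorithm:

```python
def quicksort_comparisons(xs):
    """Sort a copy of xs with naive first-element-pivot quicksort,
    returning the number of element comparisons performed."""
    count = 0

    def qs(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        left, right = [], []
        for x in rest:
            count += 1  # one comparison per element against the pivot
            (left if x < pivot else right).append(x)
        return qs(left) + [pivot] + qs(right)

    qs(list(xs))
    return count
```

For n = 100, sorted input costs n(n-1)/2 = 4950 comparisons; a scrambled permutation of the same values costs only a few hundred.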

Anyway, we are in agreement that enumerating all boards would be quite inefficient.


Are you claiming that all sudoku solvers are constant time algorithms, because the upper bound is not greater than enumerating the total constant number of boards? There are no O(1) sudoku solvers in existence that run in a short amount of time, yet you seem to be suggesting that the "very simple" sudoku solvers you referred to can be classified as O(1) algorithms.


I'm claiming it's possible to write Sudoku solvers that have some constant upper bound on execution time. If you read this as "a short amount of time", you are misreading it.

I'm also not saying that this is true for all Sudoku solvers, since you could write one that sometimes just decides to uselessly burn cycles for a few centuries before returning a correct solution.

Here's that simple argument again: Sudoku can be solved in time at most exponential in the input size. The input size of 9x9 Sudoku is constant. The exponential of that constant is a constant. Not a very interesting argument? I agree, so I'm dropping this now.



