// Reuse the game's own is_prime (it returns the string "prime" for primes)
// and auto-click the matching button as fast as the event loop allows.
setInterval(function() {
    var n = parseInt(document.querySelector("#n").textContent, 10);
    document.querySelector(is_prime(n) === "prime" ? "#yes" : "#no").click();
}, 0);
document.querySelector("#start").click();
EDIT: replaced my isPrime with the one already in the game.
EDIT 2: There appears to be some sort of increasing delay (pretty sure it's not just `is_prime` being slow) starting around 500 points. Maybe the game itself trying to compute primes?
24607 (with the cheat), 621 (without) :D
Fun evening challenge :P
function isPrime(n) {
    // Reject 0, 1, negatives, and even numbers (2 is the only even prime).
    if (n <= 1 || (n % 2 === 0 && n !== 2)) {
        return false;
    }
    // Only odd divisors up to sqrt(n) need to be tested.
    for (var k = 3; k * k <= n; k += 2) {
        if (n % k === 0) {
            return false;
        }
    }
    return true;
}
// Assumes $ returns a raw DOM element (e.g. the DevTools console alias
// for document.querySelector).
var nSpan = $("span#n");
var yesButton = $("button#yes");
var noButton = $("button#no");
var body = $("body");

function solve() {
    if (isPrime(parseInt(nSpan.innerHTML, 10))) {
        yesButton.click();
    } else {
        noButton.click();
    }
    // The game sets class "end" on <body> when the run is over.
    if (body.className != "end") {
        setTimeout(solve, 0);
    }
}
solve();
Decreasing the loop's upper bound to Math.sqrt(value) and only testing odd values after 2 are really simple ways to improve performance.
Beyond that it'd be nice to build a list of previously determined primes to reduce your number of checks, but I guess at that point it might just be better to implement classic primality tests...
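A rough Python sketch of that caching idea (my illustration; `is_prime_cached` and the `_primes` list are made-up names, not the game's code):

import math

_primes = [2, 3]  # primes found so far, in increasing order

def is_prime_cached(n):
    # Trial-divide by cached primes up to sqrt(n), extending the cache as needed.
    if n < 2:
        return False
    limit = math.isqrt(n)
    c = _primes[-1]
    while _primes[-1] < limit:
        c += 2  # only odd candidates
        if all(c % p for p in _primes if p * p <= c):
            _primes.append(c)
    for p in _primes:
        if p > limit:
            break
        if n % p == 0:
            return False
    return True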
If you go that way, it may be even faster (depending on your CPU, of course) to use a recurrence for computing the squares:
(n+1)^2 = n^2 + 2n + 1
That gets you:
def divisor_candidates(limit):
    # Yields n = 2, 3, 4, ... as long as n^2 <= limit, maintaining
    # n^2 incrementally via (n+1)^2 = n^2 + 2n + 1 (no sqrt, no mul).
    n = 2
    n_squared = 4
    delta = 2 * n + 1  # = (n+1)^2 - n^2
    while n_squared <= limit:
        yield n
        n += 1
        n_squared += delta
        delta += 2
but as others said, once your limit is large enough, the relatively large one-time cost of computing a square root may be lower than the cost of the O(sqrt(n)) additions in this loop.
(Quick check: if you use a binary search to find sqrt(n), you need about log2(n) iterations, each of which requires a division by two (for integers, a bit shift) and one addition. A linear search takes sqrt(n) iterations, each with three additions. For whatever ratio of operation speeds you pick, the former will be faster for sufficiently large n.)
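For concreteness, here's what that binary search looks like in Python (my sketch; note the mid * mid comparison costs a multiplication per step, which the count above ignores):

def isqrt_binary(n):
    # Binary search for the largest r with r*r <= n, in about log2(n) iterations.
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) >> 1  # the division by two is a bit shift
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo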
You would normally use something like Newton-Raphson to compute a square root, which still isn't O(1) but converges fairly quickly. I would be very skeptical that n muls/adds would be faster than a sqrt, at the very least on x86-64, since it is implemented as an instruction.
My mental model of a CPU is a 6502 or, at best, a 68000 without FPU :-)
When asked about modern-day performance, I would just say “I do not know”. The thing is that those muls, on modern CPUs, might be pipelined in parallel with the adds, so that, time-wise, you would get them for free.
But of course, it might be possible to get that sqrt for free, too. Even if it takes 50 cycles, you might start one, do a few iterations without checking for ‘reached sqrt(n) yet’, and then start a loop testing for the limit.
If you really want to know for x86-64, http://www.agner.org/optimize/ probably has the answers, but then, you would have to know the x86 instruction set, which is horrendous (compared to that 6502 or 68000). Even realising that there will be single, double and vector variants, the number of different instructions with ‘SQRT’ in their name I find in http://www.agner.org/optimize/instruction_tables.pdf is insane.
And of course, something completely out of left field could well be the fastest way to do this (by a few ns, probably). For example, modern CPUs can count leading zeros in an integer. If that instruction is fast (for x86, that is not a given; ‘bit scan reverse’ was slow on some CPUs), subtract from 64/32/16, and halve, and you have a decent approximation to log2(sqrt(n)) (using ‘if n has b bits, sqrt(n) has about b/2 bits’)
Incidentally, even if you are using Newton's method to do square roots, it helps significantly to start with a guess of the right size. If your initial guess is way too large, each step will cut it down by about a factor of 1/2; if it's way too small, it's equivalent (for finding sqrt(N)) to choosing N/guess. Thus, if you're taking the square root of 2^64, and you start with 1, your subsequent guesses will be roughly 2^63, 2^62, 2^61, 2^60, ..., taking a total of 36 steps to reach the right integer; whereas if you started with 2^33, it takes (by my computation) only 5 steps to get to the right integer.
The effect is more significant with larger numbers:
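As a small illustration of the initial-guess point (my sketch, not the parent's code), count iterations of the integer Newton step x' = (x + n//x) // 2:

def isqrt_newton(n, x):
    # Integer Newton iteration for floor(sqrt(n)), starting from guess x >= 1;
    # returns the root and the number of steps taken.
    steps = 0
    while not (x * x <= n < (x + 1) * (x + 1)):
        x = (x + n // x) // 2
        steps += 1
    return x, steps

n = 2**64
print(isqrt_newton(n, 1))                           # dozens of steps
print(isqrt_newton(n, 1 << (n.bit_length() // 2)))  # 0 steps: the b/2-bit guess is already exact here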
My mental model of a CPU unfortunately tends toward x86.
> But of course, it might be possible to get that sqrt for free, too. Even if it takes 50 cycles, you might start one, do a few iterations without checking for ‘reached sqrt(n) yet’, and then start a loop testing for the limit.
On the CPU side, the branch predictor might make the sqrt free by pipelining it, but that starts to make the analysis a bit harder. Strange things can happen when you introduce branch prediction. The performance would also depend on ALU/FPU contention, hyperthreading, etc.
> If you really want to know for x86-64, http://www.agner.org/optimize/ probably has the answers, but then, you would have to know the x86 instruction set, which is horrendous (compared to that 6502 or 68000). Even realising that there will be single, double and vector variants, the number of different instructions with ‘SQRT’ in their name I find in http://www.agner.org/optimize/instruction_tables.pdf is insane.
I've only just started reading Agner Fog's documentation (which is awesome), but it definitely seems like the place to find low-level information of that ilk when it's needed. If you read through the instruction tables, though, you probably noticed there are FSQRT and the (V)SQRT(P/S)(S/D) family. For most purposes x87 is actually deprecated, and SSE2 (the latter) is preferred for scalar floating point. I would guess the Mill devs might have some insightful comments.
> And of course, something completely out of left field could well be the fastest way to do this (by a few ns, probably). For example, modern CPUs can count leading zeros in an integer. If that instruction is fast (for x86, that is not a given; ‘bit scan reverse’ was slow on some CPUs), subtract from 64/32/16, and halve, and you have a decent approximation to log2(sqrt(n)) (using ‘if n has b bits, sqrt(n) has about b/2 bits’)
I'm not sure that's quite right. Counting leading zeros is normally very fast if you have the instruction, but those two functions diverge pretty quickly. For a constant cost, my guess is that the accuracy lost wouldn't be worth the performance gained. The other point is that accuracy is fairly important if you're testing for primality.
It's an FPU instruction with some setup and teardown instructions associated.
I usually bow to Agner Fog on this:
"On Core2 65nm, FSQRT takes 9 to 69 cc's (with almost equal reciprocal throughput), depending on the value and precision bits. For comparison, FDIV takes 9 to 38 cc's (with almost equal reciprocal throughput), FMUL takes 5 (recipthroughput = 2) and FADD takes 3 (recipthroughput = 1). SSE performance is about equal, but looks faster because it can't do 80bit math. SSE has a super fast approximate reciprocal and approximate reciprocal sqrt though.
On Core2 45nm, division and square root got faster; FSQRT takes 6 to 20 cc's, FDIV takes 6 to 21 cc's, FADD and FMUL haven't changed. Once again SSE performance is about the same."
SSE performance is better than x87, for a number of reasons. Latency and throughput are generally better, but SSE also supports SIMD, which allows more than one operation per instruction. SSE also has more registers (x87 has only 8 80-bit registers, compared to at least 16 128-bit ones). If I'm not mistaken, the number of available execution units with SSE is also larger, which means more opportunities for instruction-level parallelism.
These are why modern compilers don't emit x87 for floating point.
She's right about squaring on each iteration, though. And granted, a square root is much more expensive, though done only once. Which method is faster would depend on the number of iterations.
The latency of sqrtss on Broadwell is 11 cycles with a reciprocal throughput of 4, whereas mul has a latency of 3 and a reciprocal throughput of 1. So, using some concrete numbers, sqrt is more expensive, but not polynomially more, or even an order of magnitude.
I honestly haven't looked into it in that much detail, but if I had to guess... probably. I'm not sure what implementation is currently used in hardware, but at the very least it's constant-bounded.
Conversions are still somewhat expensive, but I do know that, compared to polynomial time or a large enough constant, it can be the better choice for an optimization. For example, computing log10 of an integer.
Since you're claiming that square root is "~N", the N here must be the number of digits. The number of multiplications required for the "optimization" you're proposing is exponential in this N.
Dynamic programming (testing divisibility by the primes already found) would greatly improve the algorithm. And of course sieving is absurdly fast compared to checking primality individually.
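For reference, the textbook Sieve of Eratosthenes (my illustration, not code from this thread):

import math

def primes_up_to(limit):
    # Cross off multiples of each prime p <= sqrt(limit), starting at p*p.
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]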
He initially did. Then decided to reuse the one already provided. The goal of cheating in this manner, after all, would be to match exactly the implementation the original programmer used. Even if it was buggy.
Nice game. Something weird can happen in the brain when we have to choose a binary answer under pressure. Maybe the fact that we are answering positively for prime (which is not the default) adds to that. For me it helped to rename 'yes' to 'prime' and 'no' to 'divisible'.
Same here. What happened to me is that instead of asking myself "Is this a prime number?" I kept asking myself "Can this number be expressed as a product of two other numbers?", for which the answers are obviously reversed.
Finding divisors is often hard when your number is large. It turns out that, for odd n that isn't a prime power, n can be expressed as a product of two smaller numbers exactly when there is a number z with 1 < z < n-1 such that (z^2 mod n) == 1.
This condition is the basis of the Miller-Rabin primality test. Sadly, it's a bit hard for humans to run this algorithm mentally to prove primality.
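For reference, a textbook Miller-Rabin sketch in Python (my illustration, not code from this thread); the inner loop is hunting for exactly those nontrivial square roots of 1:

import random

def miller_rabin(n, rounds=20):
    # Write n - 1 = d * 2^s with d odd, then probe random bases.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)  # square repeatedly, watching for a nontrivial root of 1
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # probably prime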
Yes, I found the UX aspect here interesting as it twisted my brain a bit too. A current example of this is the Brexit referendum, where the question was initially posed as a Yes or No, but required revision to a "Remain" or "Leave", which is much easier for casual observers to grok and lets people get behind their favourite actual position instead of embracing an arbitrary Boolean value (as happened with the Scottish referendum's Yes and No camps).
Also, Prime itself is a tricky term for a Yes-No question, as it's really a negative concept - the absence of any nontrivial divisor. So asking if X is prime requires double-negative logic, in the same way as an app setting like "Disable X feature [On / Off]".
"The way mathematicians have viewed the number one (unity, the monad) has changed throughout the years. Most of the early Greeks did not view one as a number, but rather as the origin, or generator, of number. Around the time of Simon Stevin (1548-1620), one (and zero) were first widely viewed as numbers. This created a period of confusion about whether or not the number one was prime. In this dynamic survey, we collect a cornucopia of sources which deal directly with the "question what is the smallest prime?" The goal is to create a source book for studying the history of the definition of prime, especially as applied to the number one."
Still the case for the natural numbers (with 0 instead of 1). Though perhaps more obvious why you would choose one or the other there.
Going through school I'd be told 1 was or wasn't prime, differently by different teachers. Always seemed pretty arbitrary to me; like 'They' ought just to decide!
What we seem to be hearing here is that They had; it's just another classic case of information taking 50 years to filter down the education system. (I wonder how many planets they teach as being in our solar system?)
For fun I did a linear regression of year vs. number of planets and found that while the slope was 1.8 planets/century, it was not significantly different from zero. The evidence appears to be consistent with a constant number of planets.
Yeah, that ended my run at 22 correct answers. Pretty annoying. I knew it was non-prime by definition, but I had no idea whether this quiz was aware of that, so I just made a quick gut decision.
I've heard 91 called the "smallest non-obvious composite". Assume that testing for divisibility by 2, 3, 5, and 11 is easy - for 2 and 5 you just look at the last digit, for 3 you take the sum of the digits, and a two-digit number is divisible by 11 if and only if both its digits are the same. Also assume that you can recognize squares. Then 91 = 7*13 is the first number that you won't recognize as composite using these tests.
There is a rule for divisibility by 7 - take the last digit, double it, and subtract it from the rest of the number. If the result is divisible by 7, the original number is too. For example, 91 gives 9 - 2*1 = 7, so it is divisible by 7.
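A quick sketch of that rule iterated (my illustration); it works because 10a + b = 10(a - 2b) + 21b, and 21b is always a multiple of 7:

def divisible_by_7(n):
    # Repeatedly: drop the last digit, double it, subtract from what's left.
    n = abs(n)
    while n >= 70:
        n = abs(n // 10 - 2 * (n % 10))
    return n % 7 == 0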
Larger lemma: every number ending in '0' (besides '10' itself) is not prime in any base. (The primality of '10' obviously depends on the primality of the base.)
I'd strongly prefer a UX of "Prime" and "Composite" buttons, not "yes" and "no". And exclude 1; it's special (not prime for good reasons, but also not composite), so Prime vs. Non-Prime isn't as good a test. Plus, the treatment of one is largely definitional and isn't intuitive to non-mathematicians.
The following theorem, originally imparted to me by an algebra professor in the UCLA math department (grad school), may be helpful to you when testing yourself:
Theorem. All numbers < 100 that look prime are prime, EXCEPT 91.
(My addendum)
Where "look prime" means, is not a multiple of 2, 3, 5, 11, nor a perfect square. All these numbers have quick tests for divisibility.
I don't think 49 "looks like a prime", because it's clearly 7^2. But I hadn't realized that the other perfect squares are obvious non-primes for other reasons, i.e. because of 2s, 3s, and 5s. Neat!
The "not 49 or 91" rule, combined with the rule that a prime above 3 is always exactly 1 off from a multiple of 6, will get you flying through this test.
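As code, that mental filter might look like this (my sketch, valid only for 1 < n < 100):

def looks_prime(n):
    if n in (2, 3, 5, 11):
        return True
    if n % 6 not in (1, 5):
        return False  # divisible by 2 or 3: not 1 off a multiple of 6
    if n % 5 == 0 or n % 11 == 0:
        return False  # last-digit test and repeated-digit test
    return n not in (49, 91)  # the two traps: 7^2 and 7*13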
Interesting how after a few games, I just became terrible at this. I started out getting 13-16 right before screwing up, but after a while, I was messing up after two on numbers like 21 and 63. I think this game broke me.
I got 22 on my first try. The most helpful trick is to note that if the digits add up to a multiple of 3, the number cannot be prime (because it is divisible by 3), with the exception of 3 itself.
(Now, granted, there's a catch: a question like 'is 89 prime' is harder than 'is 5 prime', so it makes sense to have very high confidence in the latter (99.99% sounds reasonable) but much lower in the former.)
p.s. Game over
117 = 3^2 × 13
You correctly sorted 20 numbers.
Feel I did pretty well.
var element = document.getElementById("n");
element.addEventListener("DOMSubtreeModified", function () {
    function clickButton(id) {
        setTimeout(function () {
            var event = document.createEvent("Events");
            event.initEvent("click", true, false);
            document.getElementById(id).dispatchEvent(event);
        }, 1);
    }
    var n = parseInt(element.innerText);
    clickButton(is_prime(n) === "prime" ? "yes" : "no");
});
Any number whose digits sum to a multiple of 3 - that is, you reach 3, 6, or 9 when summing repeatedly until one digit remains - is a multiple of 3. Could this be something useful as a quick check for qualifying numbers?
I don't have any assembly knowledge, but I presume it wouldn't be efficient to implement this, because of having to cast to a string and back for each digit, and then a while loop to sum the digits.
Running the check only for odd numbers could also be an improvement.
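Worth noting (my aside): no string conversion is needed, since the digits can be peeled off with division and modulo, and the whole rule collapses to n % 3 == 0 anyway:

def digital_root(n):
    # Sum the digits repeatedly until one digit remains -- no strings involved.
    while n >= 10:
        s = 0
        while n:
            s += n % 10
            n //= 10
        n = s
    return n

# Digital root in {3, 6, 9} is exactly "divisible by 3" (for n > 0):
assert all((digital_root(n) in (3, 6, 9)) == (n % 3 == 0) for n in range(1, 10000))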
Well, it is what it is (the first prime number after a googolplex). What would you prefer as a description of it? A listing of its digits? That would take rather a long time to read out. I can start you off, though. Its first few digits are 100000....
(It's easy enough to write a program that will eventually, slowly, find and stream to the display its digits or any other basic information about it you are interested in, of course. It's just infeasibly slow to use such a thing at the moment.)
The current largest "known" prime, for some sense of what it is to indicate a specific number and know it to be prime, has about 20 million digits (and a very specific convenient form). The prime you are looking for will have about a googol digits instead. So, we're far off, at the moment, from being able to figure out the sort of information about it you're presumably interested in.
Well, we could represent it by googolplex + x. I believe x cannot be 1 for it to be prime, due to number-theoretical reasoning that I forgot in the meantime. (Probably this: if n has an odd divisor d > 1, then b^n + 1 is divisible by b^(n/d) + 1, and 10^100 = 2^100 × 5^100 certainly has odd divisors.)
> So, we're far off, at the moment, from being able to figure out the sort of information about it you're presumably interested in.
I'm not "presumably interested". I'm genuinely interested and curious about it. I'm just hesitant to invest a lot of time in it because of the unlikelihood of success.
Just to clarify, when I said "the sort of information about it you're presumably interested in", I wasn't at all doubting your genuine interest and curiosity, but rather, whether I was correct in my assumptions as to the sort of information about the relevant number that said interest and curiosity would be directed towards.
Mostly, it's just a hedging phrase that I felt the need to write which carries no great meaning. (And, in some sense, this whole post is similar...)
Imagine primality testing were O(p), where p is the number of bits in the number you want to test. It is, in fact, way slower, but let's give ourselves a huge advantage. Testing any googolplex-sized number would still be impossible: a number near googolplex has about 3.3 × 10^100 bits, so even a single pass over the input is beyond any conceivable hardware. This seems to answer your question fairly simply and conclusively.
I agree with you, there is no way we could compute it with our computers. But there is still number theory with which you can analyze incomprehensibly large numbers and make statements about them. Maybe you could say that the next 10 numbers after googolplex are not prime. That would already be a first step towards an answer. You can't just categorically say it's impossible to find out. It is just that our understanding of mathematics is far from being able to answer this question. Anyway, it's more of a playful question I'm asking just for fun. Whether or not we find an answer is not of importance to me.
It's a perfectly fun and good question, I just don't think the answer is that hard. Short of a truly titanic mathematical breakthrough, it's not possible to find out. Besides this being one of the oldest, most worked classes of problems in mathematics, the people doing these large and systematic searches are not ignorant of number theory:
While you can easily tell some numbers around googolplex are not prime (googolplex + 1 and googolplex + 2 are obviously not) this doesn't tell you anything useful about where the next prime might be. Testing a single such gigantic number for primality is infeasible and there aren't exactly a lot of prime numbers out there - on average, one every couple of googols as you can see from: