This regular expression is due to Abigail of Perl fame, who is still very active; see http://search.cpan.org/~abigail/ for more.
However, while cute, it is very slow. It tries every possible factorization as a pattern match. When it succeeds on a string of length n, that means it has tried on the order of n candidate factors, each as a match against a string of length n. This is O(n^2). Try it on primes like 35509, 195341, 526049 and 1030793 and you can observe the slowdown. (Some regular expression engines may employ tricks that speed this up over the naive analysis, but none will be super speedy about it.)
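For anyone who wants to watch that quadratic blowup happen, here is a quick Ruby timing sketch of my own (the regex is the one from the article; Benchmark is in Ruby's standard library):

    require "benchmark"

    PRIME_RE = /^1?$|^(11+?)\1+$/

    # Time the unary-regex primality test; expect roughly quadratic growth in n.
    [35509, 195341, 526049].each do |n|
      seconds = Benchmark.realtime { ("1" * n).match?(PRIME_RE) }
      puts format("n = %7d  %8.2fs", n, seconds)
    end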
It's even worse, because the speed of factoring algorithms is measured against the number of digits of the binary expansion, and this is a unary expansion. So it's really O(2^n^2)
You really have to put parentheses when building "towers" of exponentials. The bound is (2^n)^2, which simplifies to 2^(2n). This is of course still horrible, but much less so than 2^(n^2), which is what your unparenthesized expression could be mistaken for.
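Spelled out, the two readings (my rendering):

    (2^n)^2 = 2^{2n} \qquad \text{vs.} \qquad 2^{(n^2)}

and the latter dwarfs the former for any n > 2.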
For some this will be obvious, but for others it may still be interesting:
If this were a "true" regular expression, in the sense that it describes a regular language (http://en.wikipedia.org/wiki/Regular_language), then checking whether the number is prime, by checking whether the expression matches, would run in linear time. This would be worth a few Nobel prizes at least.
Unfortunately, it is an "extended" regular expression, i.e. not regular at all in the strictest sense, and (as the article says) requires backtracking.
It's also worth noting that what this expression matches are not primes given in decimal notation; rather, the language it accepts is "111...111" with any prime number of 1s. And we know straight-up that no "true" regular expression matches this language, by the pumping lemma (http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages).
But more importantly, when we talk about the running time of algorithms, we talk about the time in relation to how many bits it takes to describe the number. So linear in the size of the number is exponential in the number of bits, and that is not very interesting.
And a better primality test would not rock the world of cryptography. At the moment http://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test is demonstrably good enough as a primality test that nobody is really hurting for a better one. The thing that you want is a better factoring algorithm. And the General Number Field Sieve is already subexponential, and so better than any regular expression could be.
I know. An algorithm this efficient would still be worth that much. :)
> But more importantly, when we talk about the running time of algorithms, we talk about the time in relation to how many bits it takes to describe the number. So linear in the size of the number is exponential in the number of bits, and that is not very interesting.
No, the regular expression described here works on numbers in unary representation. Linear in the size of the unary representation is linear in the number itself.
Unary representation is useless when speaking about complexity in number theory.
If you take a number N given in unary, convert it to binary and do trial division up to the square root, it will take O(N log N) time for the conversion to binary and O(sqrt(N) * log(N)^2) for the trial division (depending on your computational model, it could be O(N) and O(sqrt(N)); I am counting bit complexity). In total, it's O(N log N): the runtime is dominated by reading the input! The complexity of trial division and the brilliant AKS algorithm is the same from this viewpoint.
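As an equation (just restating the bounds above):

    T(N) = \underbrace{O(N \log N)}_{\text{unary} \to \text{binary}} + \underbrace{O\!\left(\sqrt{N}\,\log^{2} N\right)}_{\text{trial division}} = O(N \log N)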
Even if you had an algorithm that did not have to convert to binary and could tell in time linear in the unary representation whether a number is prime, it would be interesting trivia but nothing worthy of a Nobel prize. In practice numbers are given in binary (or some other base > 1 number system). To use your algorithm, you would have to convert to unary, which already means trial division would be faster.
He didn't say it would win a Nobel prize, just that it would be worth a few Nobel prizes. A major breakthrough in algorithmic theory could easily result in someone winning a Fields medal, and I'd say that's easily worth more than a Nobel prize.
M-R is a probabilistic primality test, with a deterministic variant for some ranges and a general-deterministic variant dependent on GRH. Lots of people want a better test, so we have Lucas-Lehmer and APRT-CL and AKS.
Better primality testing would be interesting to the crypto community, but it could rock the math world, like "PRIMES is in P" introducing AKS as the first deterministic polynomial-time primality test.
As you said, though, better factoring would be game-changing for both cryptography AND mathematics.
I've heard that, on modern CPUs, with the first 100 or so primes cached, for all but the most performance-sensitive applications, MR can be run with sufficient accuracy that a stray neutron flipping a bit in the CPU at the time of computing MR is more likely than MR producing an incorrect result. Or something like that. Is it true that MR can be tuned to be as useful as a non-probabilistic test?
If you run MR 20 times, your odds of wrongly declaring a composite number prime are roughly one in the number of microseconds in a 5-year period. So yes, neutron flips and MR errors are on par with each other.
If you're concerned, just run MR a few more times. :-)
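For the curious, here is a minimal Ruby sketch of what "run MR k times" means (my own code, not from the thread; a composite survives a round with probability at most 1/4, so k rounds leave at most a 4^-k false-positive chance):

    # Miller-Rabin with k independent random bases.
    def probable_prime?(n, k = 20)
      return n == 2 || n == 3 if n < 5
      return false if n.even?
      d, s = n - 1, 0
      d, s = d >> 1, s + 1 while d.even?   # n - 1 = d * 2**s with d odd
      k.times do
        a = rand(2..n - 2)
        x = a.pow(d, n)                    # modular exponentiation (Ruby >= 2.5)
        next if x == 1 || x == n - 1
        return false unless (1...s).any? { x = x.pow(2, n); x == n - 1 }
      end
      true                                 # composite with probability <= 4**-k
    end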
Unfortunately, quite a few very-well-defined terms have been made much-less-defined by use in practice.
"regular expression" has come to mean "anything that can be specified by a Perl 'RE' expression, which can backtrack and may take exponential time", rather than the original precise definition (that I will not give here) equivalent to "anything that can be accepted by a memoryless finite state automaton"
Similarly, "real time" has come to mean "on-line" or "computation happening more or less at the same time input is generated", as opposed to the original meaning of "guaranteed response to input within a pre-specified time frame" (which has been relegated to being "hard real time", with the amortized version often called "soft real time").
One would think that software construction, being somewhat of an engineering discipline, would be more precise with terms, but one would be wrong; software construction as practiced today is something between an art and a craft, and is mostly not related to engineering.
It has been posted before [2], so I'll repost my comment from back then [3]:
It's cool, but it's a regexp, not a regular expression -- regular expressions cannot test for primeness. The proof is a simple application of the pumping lemma [1]: we want to show that the language { a^p : p prime } is not regular. Suppose it is. By the pumping lemma we have numbers N, n >= 1, such that for every k >= 0, a^(N+nk) is in our language. But that means that for every k, N+nk is prime, and this fails for k = N, since N+nN = N(1+n) is not prime: both N and 1+n are greater than 1 (N >= 2 because a^N itself is in the language, hence N is prime).
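The same argument, spelled out (my transcription):

    \text{Assume } L = \{\, a^{p} : p \text{ prime} \,\} \text{ is regular, with pumping constants } N, n \ge 1,
    \text{so that } a^{N+nk} \in L \text{ for all } k \ge 0. \text{ Taking } k = N \text{ gives}
    N + nN = N(1+n), \text{ composite because } N \ge 2 \text{ and } 1+n \ge 2. \text{ Contradiction.}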
This is indeed beautiful. But the author limits himself to a low-level unpacking of the regex and running a few test cases, all the while remaining mystified as to why this works.
A high-level description helps make obvious what's happening (transcribed into Ruby below the description):
To check the primality of a number N with string matching,
repeat a string (say, "1") N times
and declare N prime:
UNLESS the repetitions match the string 0 or 1 times
OR the repetitions can be EVENLY matched into groups of MORE than one string.
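Here is that description in Ruby (my own sketch; the two early returns mirror the two branches of the regex):

    def prime_by_matching?(n)
      ones = "1" * n                         # repeat the string n times
      return false if ones =~ /^1?$/         # matched 0 or 1 times: 0 and 1 are not prime
      return false if ones =~ /^(11+?)\1+$/  # evenly grouped into repeats of 2+ ones: composite
      true
    end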
I wouldn't say he remained mystified. His exploration of the 9 case basically explained how it worked; the author just chose to leave it somewhat implicit for the reader to figure out. I would agree that for a blog post it's usually better to hammer the point home, though.
Uses back-references, so it doesn't describe a regular language in the strict sense. Still, it's pretty cool to use them to trick a string-processing engine into doing integer factoring.
Also, it's easily provable (using the pumping lemma) that the language { 1^n | n is a prime number } is non-regular, so no "regular expression" (referring to the strict mathematical definition here) can express it.
Actually, I think it's still a very important distinction to make. Making this point helps encourage understanding of what regular languages, backtracking, et cetera mean, which allows us as engineers to use our tools more effectively.
It's not a nit-pick. True regular expression engines can run in guaranteed worst-case linear time. Non-regular regex engines generally have no such guarantee. If you're using a regular expression library to parse inputs on a server, such efficiency guarantees are a simple and effective defense against input-based denial of service attacks.
> It's probably time to give up on that terminology nit-pick.
Many Perl coders are self-taught programmers with no CS background. If a man page calls something a regular expression, that is good enough for them.
Why drop to that level if you know better?
I don't want to appear "gratuitously ignorant" in front of someone who might be a decent computer scientist.
There are already ways in which I will appear ignorant due to actual ignorance; why go out of my way to dumb down the way I speak about something that I do understand?
If we are going to wreck math terminology, why don't we just redefine "prime number"? Let's see: 3 is prime, 5 is prime, 7 is prime, oh... so 9 is prime, 11 is prime, 13 is prime, 15 is prime... And, what do you know: now we can use a genuinely regular regex for primality testing over the binary representation: why, it's just (0|1)*1. As a bonus, we can restore 1 to its proper status as a prime number and kick out the oddball 2.
Say, if Perl called an odd test function "isprime", would you start calling odd numbers prime?
> I don't want to appear "gratuitously ignorant" in front of someone who might be a decent computer scientist.
Someone who is a decent computer scientist will be unlikely to care whether you say "regular expression" or "look how smart I am, I know to say that this thing Perl does is called regular expressions by ignorant plebes even though it strictly isn't, let's high-five!"
Because, y'know, someone who's actually a decent computer scientist probably has more important things to care about.
In that case, I also don't want to appear "gratuitously ignorant" in front of such a lesser computer scientist who does care. :)
Maybe one thing like this doesn't matter, but if you make a habit out of sloppy use of technical terms, it will not go unnoticed.
Analogy: one neuron firing might not exceed a threshold, but thirty seven of them might.
> "look how smart I am, I know to say that this thing Perl does is called regular expressions by ignorant plebes even though it strictly isn't, let's high-five!"
That's rather verbose, making the cheesy scare quotes finger gesture an appealing alternative.
The next line is: which philosophy of meaning do you subscribe to? Ludwig Wittgenstein ("meaning is use")? Or are you an all-out Humpty-Dumpty-ist ("when I use a word, it means just what I choose it to mean, neither more nor less")? Or maybe Sapir-Whorf: the language not only determines its own meaning, but your entire cognitive process.
I can't tell if you're defending this line of thought or making fun of it, but the term "regular language" has a specific mathematical meaning and perverting it to include how Perl uses the term is wrong, plain and simple.
I was, in fact, slightly mocking both the argument and the related argument that people who use sloppy language like to bring out in favor of using words to mean whatever they like. I certainly agree that words with precise meanings are important to use correctly.
It actually effectively lost that meaning a bit before. Perl did not invent regular expressions; in fact, Larry Wall started with a package by http://en.wikipedia.org/wiki/Henry_Spencer, which was itself a reimplementation of an API that appeared in AT&T's V8 Unix, released in 1985, and was incorporated into standard Unix tools like egrep.
Perl standardized a good enough regular expression dialect that everyone else copied it. But the divergence between "regular expression" and "strictly only regular languages" was already a lost cause.
POSIX regular expressions (file glob patterns, BRE's and ERE's) have more features than regexes in old CS papers, but they are still regular: they compile to automata with no backtracking.
It's still "under development", while I add a few missing features, but as you can see from the README these are only for fairly obscure parts of the regex language! - E.g. POSIX groups, named properties, character set intersection, ...
A better implementation will only check to sqrt(n), as there can be no prime factor greater than that without there also being a prime factor lower than that.
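In ordinary code, that optimization is the familiar square-root bound (a sketch of mine; the regex itself cannot express this):

    def trial_division_prime?(n)
      return false if n < 2
      i = 2
      while i * i <= n              # if n = a * b, then min(a, b) <= sqrt(n)
        return false if (n % i).zero?
        i += 1
      end
      true
    end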
The thing that made this click for me was realizing the "1" in the regex was not special. The regex is just looping through integers (starting with 2) and checking if the target number is divisible by any of them. It can use any digit if the target string uses the same digit. Ruby example using 5 instead of 1:
def is_prime(n); str = "5"*n; str !~ /^5?$|^(55+?)\1+$/; end
The regex essentially says, for integer n, "does 2 ones fit evenly into n ones", then "does 3 ones fit into n ones", etc., which is the same as "does 2 fit into n", "does 3 fit into n", etc.
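You can even watch it find the factor, since the capture group holds the divisor it discovered (an illustration of my own, reusing the regex from the post):

    m = ("1" * 9).match(/^(11+?)\1+$/)
    m[1]             # => "111"  the smallest factor of 9, in unary
    9 / m[1].length  # => 3      the cofactor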
Thanks for sharing. This is a post I wrote way back in 2007 and, even though, as many have pointed out, the algorithm is very inefficient, it is still a good illustration of how powerful regular expressions are.
When I am explaining the basics of regular expressions to people, I always refer them to this post to make them realize that regular expressions are not only glorified search terms. In general, they are amazed :-)
An aspect of the syntax is confusing me here. I would have assumed that the expression 'x+?' would be parsed as '((x)+)?', i.e. 'x*'. However, it seems the parser special-cases this as (x)(+?) instead, and interprets that as a non-greedy operator. Isn't this horribly inconsistent?
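For what it's worth, here is the behaviour in Ruby, which follows Perl on this point as far as I know: '+?' is parsed as a single lazy quantifier, and you need explicit grouping to get the '((x)+)?' reading:

    "111111"[/1+/]       # => "111111"  '+' alone is greedy
    "111111"[/1+?/]      # => "1"       '+?' is the lazy (non-greedy) form
    "111111"[/(?:1+)?/]  # => "111111"  explicit grouping restores ((x)+)?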
If there were, there would be finitely many repunit primes (via the pumping lemma and the notion that there are arbitrarily large non-prime repunits).
Edit: that's not quite correct. The above would imply some simple structure in repunit primes. If you add the known asymptotic behaviour of the "number of primes < N" function, I think it would imply it.
Also, in binary, it would either imply there are finitely many Mersenne primes, or that there is an N such that all Mersenne numbers larger than N are prime (edit: Mersenne numbers are repunits in binary).
> However, while cute, it is very slow. It tries every possible factorization as a pattern match. [...]
Usual arithmetic is much more efficient. :-)