Primitive Recursive Functions for a Working Programmer (matklad.github.io)
189 points by ingve 3 months ago | 61 comments




Far superior to the original article. But people seem to prefer ELI5 baby talk these days.


Don't know why you are being downvoted. Quoted article is (obviously) superior.


Uh, the quoted article is nice, terse, and seems to contain a lot to play with, but doesn't really address the same things as the topic article.


My guess is that if they dropped "baby" and just said:

> people seem to prefer ELI5 talk

they might not have gotten downvoted. (Didn't downvote myself, but this makes sense to me.)


You say that as if I give a shit about sanctimonious HN downvotes. :-)


The conclusion to the article has some pretty good points regarding configuration languages; I wonder if any present language satisfies all or most of those points.


Two examples that satisfy these criteria are Dhall [1] and Cue [2], but in one sense they are not interesting examples: Dhall is a total language on purpose (it does go the “not Turing-complete” route), and Cue has no functions, so there is nothing to recurse.

I would say that RCL [3] satisfies the criteria. It’s deterministic and pure, has metered execution, and it sandboxes filesystem access. It can read files when allowed by the sandbox policy, but in my view that makes those files part of the source code; it behaves the same as imports.

For RCL I did not want to go the “not Turing-complete” route for exactly the reason the author mentions: knowing that a program eventually terminates is not a useful property in practice. And conversely, it is possible to write very complex programs in total languages like Agda; non-Turing-completeness is no guarantee of simple programs/configurations. All loops in RCL are bounded, but it has functions, so it has recursion. It does not have tail calls, so at first I added a recursion depth limit (to prevent overflowing the native stack), but then the fuzzer discovered a function that runs in constant stack space, yet it hangs. I still don't fully understand how it works:

    let f = g => g(g(h => k => g(g(h))));
    f(f)
Anyway, it is not a problem in practice that this kind of pathological function can be expressed; I just put a limit on the number of execution steps (a “gas limit”, or what the author calls “metered execution”; see the sketch below). For keeping code simple, I think the bounded built-in looping constructs and the awkwardness of recursion are a good nudge, but in the end the most valuable tool is code review and applying good judgment.

    [1]: https://dhall-lang.org/
    [2]: https://cuelang.org/
    [3]: https://rcl-lang.org/
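
To make the “gas limit” concrete, here is a minimal sketch in Python (a toy evaluator with hypothetical names, not RCL's actual interpreter): every evaluation step pays into a budget up front, so execution aborts deterministically no matter what the input does.

    class OutOfFuel(Exception):
        pass

    class Meter:
        def __init__(self, budget):
            self.budget = budget

        def charge(self, cost=1):
            # every step pays before it runs, so total work is bounded
            self.budget -= cost
            if self.budget < 0:
                raise OutOfFuel("execution budget exhausted")

    def eval_expr(expr, meter):
        # toy expression trees: ("lit", n) | ("add", a, b) | ("mul", a, b)
        meter.charge()
        if expr[0] == "lit":
            return expr[1]
        a = eval_expr(expr[1], meter)
        b = eval_expr(expr[2], meter)
        return a + b if expr[0] == "add" else a * b

    m = Meter(budget=100)
    print(eval_expr(("add", ("lit", 2), ("mul", ("lit", 3), ("lit", 4))), m))  # 14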


> Cue has no functions so there is nothing to recurse.

CUE developer here. This is wrong. CUE doesn't have functions, but it does have abstraction and beta-reduction, just like lambda calculus. Types can refer to themselves. There is also mutual recursion between types. We have a termination checker that ensures CUE programs are total (although it works by very different principles compared to other total languages).

If you disable the CUE termination checker you get a Turing-complete language.

If you leave it alone you get a primitive recursive language. Here is CUE implementing an arbitrary number of steps of the rule 110 cellular automaton [0], which is Turing complete: https://cuelang.org/play/?id=Ityqia88Mvq#w=function&i=cue&f=...

[0] https://en.wikipedia.org/wiki/Rule_110
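
For readers who don't want to click through: the automaton itself (not the CUE encoding) is just a lookup over each cell's three-cell neighborhood. A plain Python sketch for comparison:

    # One step of rule 110 on a fixed-width row (wrapping at the edges).
    # The table is the binary expansion of 110: 111 -> 0, 110 -> 1, 101 -> 1, ...
    RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        n = len(cells)
        return [RULE[cells[i - 1], cells[i], cells[(i + 1) % n]] for i in range(n)]

    row = [0] * 20 + [1]
    for _ in range(5):
        print("".join(".#"[c] for c in row))
        row = step(row)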


Thanks for pointing that out, I didn’t know types in Cue could be self-referential.


> a function that runs in constant stack space, yet it hangs

I don't think it actually runs in constant stack space, but it takes an exponential number of execution steps to get to any given stack depth, as the number of function arguments that need to be unraveled doubles with every invocation of f.
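
One way to check this empirically is a direct Python port with a call counter (assuming RCL's strict lambdas behave like Python's here; the budget and instrumentation are mine, not part of the original):

    steps = 0
    LIMIT = 1_000_000

    def f(g):
        # port of: let f = g => g(g(h => k => g(g(h))));
        global steps
        steps += 1
        if steps > LIMIT:
            raise RuntimeError("gave up after %d calls to f" % LIMIT)
        return g(g(lambda h: lambda k: g(g(h))))

    try:
        f(f)
        print("terminated after", steps, "calls")  # not expected, per the above
    except RuntimeError as e:
        print(e)                 # churns indefinitely at shallow stack depth
    except RecursionError:
        print("blew the stack")  # would contradict the constant-space claim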


That’s interesting!


While imperative, and not 'pure': even before C, its creator worked on languages where the upper bound on the number of iterations of every loop is known before entering it, thus PR.

Dennis Ritchie's research at MIT was focused on what he called loop programming.

"The Complexity of Loop Programs", Albert R. Meyer and Dennis M. Ritchie:

https://people.csail.mit.edu/meyer/meyer-ritchie.pdf

Structured programming, the paradigm that almost every modern programmer follows by default, is really pushing you toward primitive recursive functions.
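
A sketch of that LOOP-program style in Python: allow only loops whose trip count is fixed before entry (no while, no unbounded recursion), and everything you can write is primitive recursive, hence total:

    def add(a, b):
        for _ in range(b):     # bound known before the loop starts
            a += 1
        return a

    def mul(a, b):
        out = 0
        for _ in range(b):     # likewise: b is fixed on entry
            out = add(out, a)
        return out

    print(add(2, 3), mul(4, 5))  # 5 20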

That almost universal acceptance of structured programming, compared to the other two types (pop and functional), is why people are confused by Dijkstra's "Go To Statement Considered Harmful" paper.

While primitive recursive functions don't contain the entire set of computable functions, they do contain almost all intuitive ones that are guaranteed to HALT (total).

Unfortunately there are some real needs for languages to support loops that have an indeterminate number of iterations when you enter the loop, but it is a foot gun that is avoidable by only using them when required.

Even COBOL was modernized with unrestricted goto being moved to the ALTER command.

I can't think of a modern, useful language that doesn't allow for PR functions.

But even in C, if you avoid 'while', explicitly avoid fall through, etc... you will produce code that is almost always a total function that will always HALT.

There are cases, like type inference in ML (pathological in that it is in practice way cheaper than its worst-case complexity class, and thus worth the risk despite not being total), that make it hard for a language to restrict those use cases.

So I would say that, with a pragmatic approach, all languages have defaults that satisfy most of the points, but imposing constraints to enforce them would seriously limit the utility of the language.

If you review even the hated SOLID and Clean frameworks, they're pushing you towards this model too IMHO.

I think the universal acceptance of structured programming makes this easy to forget, or even fail to teach. But as old neckbeards, we were taught the risks of WHILE etc...


Let's pick Dhall and see if it fails any of the points. It seems closest.


Skylark (now Starlark) also. It's essentially Python with anything more than primitive recursive functions cut out.


One point that the article is trying to make is that even something in PRF can give rise to a very long-running computation.


> Suppose it runs faster than O(2^(2^N)). That is, two to the power of two to the power of N, a very large number.

~~While probably meant to simplify, this statement sort of undermines the author's credibility. I mean the "a very large number" part. Correct me if I'm wrong, but surely he means "a very fast-growing function".~~

Or rather: it seems he means that the program completes in fewer than O(2^(2^N)) steps.


Just tackling the first part, about restricted languages being better for some applications:

Isn’t the advantage that an upper bound on the number of steps required can be computed statically?

This means we can refuse to compute things that violate some limit and give back a meaningful error, for all possible inputs.

Whereas with the Turing-complete-plus-runtime-limit approach, we might pick a limit that is too low for some inputs we care about. We won’t know until we run the computation and hit the limit! We see this sometimes with C++ template recursion.
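
A toy version of "the bound is computable statically" (hypothetical AST shapes, just to show how bounds compose):

    # Programs are nested tuples; every loop carries a compile-time trip
    # count, so a worst-case step bound falls out by structural recursion.
    def max_steps(prog):
        kind = prog[0]
        if kind == "op":       # one primitive operation
            return 1
        if kind == "seq":      # sequencing: bounds add
            return sum(max_steps(s) for s in prog[1])
        if kind == "loop":     # bounded loop: bounds multiply
            return prog[1] * max_steps(prog[2])
        raise ValueError(kind)

    prog = ("seq", [("op",), ("loop", 1000, ("loop", 1000, ("op",)))])
    print(max_steps(prog))     # 1000001: reject up front if over budget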

I might be totally confused here, so I hope someone more knowledgeable can weigh in :)


I use 10x the reasonable computation amount, ever since a coworker used a number closer to 3x and real workloads hit that limit a few years later. I figure giving a bad workflow several times longer to fail is still measured in milliseconds and doesn’t hurt that much.

But I usually run into this problem when someone has treated the problem domain as a DAG but can’t enforce the Acyclic part. But modeling your problem as a DAG is reminiscent of Dark Galadriel, when she contemplates taking the Ring from Frodo: "All shall love me and despair." The people who make them are always way prouder of themselves than they deserve to be.

Eventually your customers who were attracted by the expensive and complex solution to their problems run out of money, and their problems seem a lot smaller to them. Then you are left - literally - with an app that cannot be made cheap enough per operation to keep their business.


3 is roughly half of 10, logarithmically. And in any business that uses software (to conduct the business), you always want engineering to keep the doubling time in mind, or whether the business is shrinking. A lot of efficiency decisions are driven by doubling time. Where I worked we had a doubling time of 90 days when I started, and there, keeping everything scaled up so that your idle capacity was no more than half made sense. When it had increased to a couple of years, you could tighten down to n/(n+1) utilization. If it is shrinking, first of all that's a problem, but also you can make the infra much cheaper pretty easily, if the code is still modifiable.


> treated the problem domain as a DAG but can’t enforce the Acyclic part.

Would you mind talking about this for a bit? I'm missing a lot of context.


Accurately predicting fuel usage for a blockchain computation? It's probably interesting to others but not to me.

I'd be more interested in using it just as another hammer in the tool-belt to squash bugs that got past earlier hammers (static type-checking, no nulls, no mutations, etc.)

Terminating in finite time can't prove correctness, but if I declare that my code will terminate in finite time and my compiler disagrees, I'd certainly believe my code is incorrect.


The statically determined upper bound can vastly overestimate the actual runtime complexity in common cases, because it generally can’t predict how conditionals that lead to early exits in recursions will turn out.

Note: This assumes lazy evaluation, or short-circuiting conditionals, in order for early exits to affect the runtime, but that’s what you’d generally have in a practical language.
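
A trivial Python illustration: the static bound has to assume the loop runs to completion, while typical inputs exit early:

    def find(xs, target):
        # static worst-case bound: len(xs) iterations
        for i, x in enumerate(xs):
            if x == target:
                return i       # early exit: typical cost is far below the bound
        return -1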


And what if that upper bound is an extremely loose upper bound? Say a deterministic quick sort has an upper bound that's quadratic in input size even though typically it's linearithmic? Do you refuse the computation of quick sort then? Or more dramatically consider the Hindley–Milner algorithm which has an upper bound of exponential time even though in practice it runs in linear time?


I don't know about "computed statically" but yeah I think the only advantage to drastically limiting the capabilities of a language (as in many new configuration languages, CEL, etc.) is that you can have a sort of computation limit that isn't data dependent.

But I struggle to think of a single situation in which that's a real hard requirement. How many systems are there that can't give a "query took too long" error?


Well if the query is from an adversary it has undesirable results.


Like what? Can you give an example?


DDOS against your DB or even just your hash table.
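
Concretely, for the hash table case: if an attacker can choose keys that all collide, a chained table degrades to quadratic total work. A toy sketch (not Python's real dict, which randomizes string hashing for exactly this reason):

    class NaiveTable:
        def __init__(self, nbuckets=64):
            self.buckets = [[] for _ in range(nbuckets)]

        def insert(self, key, val):
            chain = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(chain):  # linear scan of the chain
                if k == key:
                    chain[i] = (key, val)
                    return
            chain.append((key, val))

    t = NaiveTable()
    for i in range(5_000):
        t.insert(i * 64, i)  # every key lands in bucket 0: ~n^2/2 comparisons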


I've wanted to write a backend web programming language that is based on terminating/bounded principles. My theory is that every function can prove to the compiler that it has bounded execution based on the state and arguments given, with X as the lower bound and Y as the upper bound. This propagates all the way to the entry point.

I agree with the author with regard to the stronger semantics, and that's why I thought of this language. You often want some guarantees about the execution time of your program. It would probably be PRF-based at the core, but Turing-complete in practice. Like how Rust rejects invalid borrows but still offers unsafe for raw pointers, this lang would either calculate the upper bound from the simple loop primitives, or you'd have to use the unsafe operators and provide an alternative formula for the bounds.
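
A toy of the "unsafe escape hatch" half of that idea (hypothetical API; a real compiler would derive bounds for the simple loop primitives and only ask for a formula when it can't): the author declares a bound and the runtime enforces it.

    class BoundExceeded(Exception):
        pass

    def bounded(upper):
        # 'upper' maps the arguments to the claimed maximum step count
        def deco(fn):
            def wrapped(*args):
                return fn(*args, budget=upper(*args))
            return wrapped
        return deco

    @bounded(upper=lambda n: n + 1)
    def sum_to(n, budget):
        total = 0
        for i in range(n + 1):
            budget -= 1
            if budget < 0:
                raise BoundExceeded("declared bound was wrong")
            total += i
        return total

    print(sum_to(10))  # 55, within the declared n + 1 steps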


@author: s/defile/define/ ;)

Also: s/necessary/necessarily/


Also:

> If math makes you slightly nauseas

That's the plural of nausea; you wanted nauseous.

But the real reason I mention it is to joke (since you said you were generally descriptivist) that you fell for a common error: 'nauseous' actually describes the thing that makes someone nauseated. ;)


I'm struggling with the mini rant / motivation of the article:

> Typically, not being Turing-complete is extolled as a virtue or even a requirement in specific domains. I claim that most such discussions are misinformed — that not being Turing complete doesn’t actually mean what folks want it to mean

Why are those discussions misinformed? Most formal analysis tools (Coq, Isabelle, Agda(?)) usually require a proof that a function terminates. This, I think, is equivalent to proving that it is total, implying it is primitive recursive?


I haven't got to the end of it but I assume this was motivated by some configuration languages using "not Turing complete" as a feature, when the feature they really want to advertise is "reasonably bounded execution time".

It came up recently in this discussion about CEL:

https://news.ycombinator.com/item?id=40954652


As you are talking about formal proofs, and not the scientific counterexamples modern programming uses:

Proving a function is total in the general case is an NP total search problem.

IIRC this is equivalent to NP with a co-NP oracle, or the second level of the PH hierarchy, aka expensive, if even possible, for many small problems.

Most of those tools work best if you structure your programs to be total, of which structured programming with only FOR, or iteration-count-limited WHILE/recursion, are some of the most common methods.

While it is just related to SAT, looking at the tractable forms of Schaefer's dichotomy theorem is the most accessible lens I can think of.


> Proving a function is total in the general case is an NP total search problem.

My intuition suggests this should be undecidable—could you elaborate on the difference between this and the halting problem?


Halt is a decision problem; TFNP is a combinatorial problem constructed of decision problems.

The HALT decider is a total Turing machine that represents a total function.

I am struggling to explain this in a rigorous fashion, and there are many open problems.

NP is where the answer "yes" is verifiable in polynomial time.

co-NP is where the answer "no" is verifiable in polynomial time.

NP is equivalent to second-order predicate logic where the second-order quantifier is existential: 'there exists ...'

co-NP is equivalent to second-order predicate logic where the second-order quantifier is universal: 'for any ...'

We know P=co-P, or that the yes and no answers are both truthy.

We think that NP!=co-NP

Many useful problems are semi-decidable or recursively enumerable.

Determining if a computable function is total is not semi-decidable.

It is the subtle difference between proving a TM halts for any input vs halts for each input.

Termination analysis is the field of trying to figure out if it halts for each input. It is actually harder than HALT, being on the second level of PH on the co-NP side.

If that granularity isn't important to you, I personally don't think there is much of a risk in using the decidable metric as a lens.

Just remember that something is decidable if and only if both it and its complement are semi-decidable.

Semi-decidable problems often have practical applications even without resorting to approximations etc ...


I'm not sure this really addresses GP's question, which I share.

You claimed in GGP that deciding whether a function is total is a TFNP problem. In particular, that would make it decidable. But the counter-claim is that deciding whether a function is total is actually an undecidable problem.

I've also examined a definition of TFNP and I don't see how it is related to the problem of deciding if a function is total. Rather, the definition seems to require that the function in question be total.


Your intuition is right. The total functions are not even a provable set. So they are definitely not recursive or even FNP; never mind TFNP. Provability of the total functions would imply decidability of an even harder problem than the halting problem: whether a given Turing machine halts on _all_ inputs.

The halting problem only asks: given a TM and input, will the TM halt on that input? That problem is actually semi-decidable/provable: we just run the machine. If the answer is yes, then at some point it will halt!


> This, I think, is equivalent to proving that it is total, implying it is primitive recursive?

No, as the article shows there are functions which terminate that aren't primitive recursive, and indeed Agda and (probably?) the others can prove termination for some (but necessarily not all) terminating non-primitive-recursive functions.

I think the misinformation that the article is complaining about is something like (my paraphrase):

> "Turing completeness" means "can do computation" while "non-Turing complete" means both "can't do computation" and "has nice configuration-language properties"

The article points out:

- you can be non-Turing complete and still do lots of computationally-expensive / tricky work

- your configuration language probably wants much stricter limits than merely being non-Turing complete


> I think the misinformation that the article is complaining about is something like (my paraphrase):

> > "Turing completeness" means "can do computation" while "non-Turing complete" means both "can't do computation" and "has nice configuration-language properties"

"Turing complete" means it can do any computation a Turing machine can. That is absolutely more power than is desired about 99.99% of the time. You almost always want your algorithms to contain no infinite loops.

(Algorithms. Not IO loops. Those are different, and the process of taking outside input while the program is running is outside the scope of what a Turing machine talks about anyway.)

Turing completeness is an ergonomics hack, and one with a decently successful history. But it's no panacea, and if we could find an ergonomic way to get rid of it, that would be a win.

Yes, even if we didn't also enforce primitive recursion. Sure, it's nice to know you're also not accidentally running Ackermann's function, but to be honest... I've had many more accidental infinite loops than accidental Ackermann's functions in my code. By approximately 10,000 to 0.

No system can ever prevent all errors. So let's focus on preventing the most common current ones.


First thing I do when I am learning a new high-level language is write the Ackermann function and see what happens. Also write out plus and mult and expt in terms of +=1. I have seen people ship O(n^4) code when they could have easily used O(log n), but I have never seen an Ackermann in the wild.
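
For anyone playing along, the standard definition (the nested self-application is what pushes it past primitive recursion):

    import sys
    sys.setrecursionlimit(100_000)

    def ack(m, n):
        # ack appears inside its own argument: not primitive recursive
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)
        return ack(m - 1, ack(m, n - 1))

    print(ack(2, 3))  # 9
    print(ack(3, 3))  # 61
    # ack(4, 2) already has 19,729 decimal digits; don't wait around for it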


Possibly there are more ways to be non-Turing-complete than being a nice total terminating function. For instance, an infinite loop is neither capable of universal computation nor terminating.


I am a CUE developer. CUE is primitive recursive. It also happens to fulfill your desired criteria for a "good" configuration language.


For fun, here is CUE simulating an arbitrary but finite number of steps of Rule 110[0]: https://cuelang.org/play/?id=Ityqia88Mvq#w=function&i=cue&f=...

[0] https://en.wikipedia.org/wiki/Rule_110


[flagged]


"Please don't complain about tangential annoyances [...] They're too common to be interesting."

"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."

https://news.ycombinator.com/newsguidelines.html


> "lose" and "loose"

> They dont even sound or look anything alike...

You really don't think those two words look anything alike?


I'm pretty sure that the author's first language is not English :) Doesn't seem like an unreasonable mistake to make, to me.


You may be right, but I think this is a mistake that native speakers are more likely to make. This is just anecdotal, so don't take me too seriously...

Mistakes like "your" instead of "you're" and "should of" instead of "should have" seem to be very common for people who learned English by listening instead of reading (native speakers). The words sound the same, so they get switched.

On the other hand, as a non-native, I struggle with phrasal verbs, such as "get in" vs "get on", "get out" vs "get off", and so on. These may be very different concepts in English, but not for my native language, so I don't always choose the correct one.


And haven't discovered that LLMs can proofread text.


Says someone who can't even spell "don't" correctly. Though this doesn't really bother me at all.


Getting "then"/"than" mixed up is a HUGE red flag.


Why? Its never a problem in spoken language, so their is no reason its a real problem in written language either. (misspellings intended)


Article LGTM. Maybe it was fixed in the last 4 hours.


"loosing" -> "losing", second paragraph.


Thanks!


Its. It's.


They're / their.


Less / fewer.


Further/farther


Good. Food. Flood.

Lews. Lows. Luse. Loose. Lose.


This specific error is common amongst non-native English speakers.



