
If there was one single thing I learned from my functional programming class in college, it’s that there exists a way to implement imperative programming in a pure functional language. I distinctly remember the aha that Turing complete means Turing complete, and functional vs imperative is a style that can be emulated in any Turing complete language. Now for the life of me I can’t remember what the exact trick or mechanism was, does anyone know what I’m talking about? I’m pretty sure it leveraged one of the named functional constructs, similar to a thunk or y-combinator or something. I’m old enough that use of “monad” wasn’t much of a thing at the time.



One of the other comments about continuation passing style seems likely.

Basically, all you do is define a function (fun (arg1, arg2, ...) -> result) and invoke it, the exact same way you define a callback with .map(() => result) in JavaScript, for example.

You can capture the local state of the function you're currently in inside the callback, and state can be passed around that way. (Say you're in a JS function where you defined a number or other variable above: you can use that value in any continuation/closure/function and pass it around, to .map() for example, or to another function.)
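
A toy JS sketch of that (the names are made up, just to show the closure capturing local state):

    // A closure capturing local state and passing it along to .map():
    function greetAll(names) {
      const greeting = "Hello";                    // "local state" defined above
      return names.map(n => greeting + ", " + n);  // the callback closes over greeting
    }

    greetAll(["Ada", "Alonzo"]);  // => ["Hello, Ada", "Hello, Alonzo"]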

I've also heard recursion described as a way to pass state around in a functional language. If you have a recursive function that takes a num parameter and calls itself with (num + 1), that number can be viewed as mutable state written in a not-so-mutable way.
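
E.g. a toy counter, where the parameters stand in for the loop variables:

    // Recursion standing in for a mutable counter and accumulator:
    function sumTo(limit, num = 0, total = 0) {
      if (num > limit) return total;
      return sumTo(limit, num + 1, total + num);  // "num" plays the role of a loop variable
    }

    sumTo(5);  // => 15, same result as a mutable for-loop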


Continuation-passing style?


I think it’s something like this, yes. I should try to dig up the old assignment, it was in Scheme and I probably have the code buried somewhere.


As I recall, if you have a constant-time random-access array with mutation, then you can emulate that behavior in a functional language using balanced trees, with O(log n) time per access.

This means any functional programming language can implement any imperative programming language with at most an O(log n) factor of overhead.
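
A rough JS sketch of one possible encoding (not the only one): the "array" is an immutable binary tree, and a write copies only the root-to-leaf path while sharing everything else.

    // get/set touch only the O(log n) nodes on one root-to-leaf path;
    // every other node is shared with the old version.
    function makeTree(depth, fill) {
      if (depth === 0) return { leaf: fill };
      const child = makeTree(depth - 1, fill);
      return { left: child, right: child };        // structural sharing
    }

    function get(node, depth, i) {
      if (depth === 0) return node.leaf;
      const bit = (i >> (depth - 1)) & 1;
      return get(bit ? node.right : node.left, depth - 1, i);
    }

    function set(node, depth, i, value) {          // returns a NEW tree
      if (depth === 0) return { leaf: value };
      const bit = (i >> (depth - 1)) & 1;
      return bit
        ? { left: node.left, right: set(node.right, depth - 1, i, value) }
        : { left: set(node.left, depth - 1, i, value), right: node.right };
    }

    // "array" of 8 zeros; updating index 5 rebuilds 3 internal nodes + 1 leaf
    const v0 = makeTree(3, 0);
    const v1 = set(v0, 3, 5, 42);
    get(v1, 3, 5);  // => 42, while v0 is unchanged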


I guess as soon as you have array mutation, you have assignment and thus it’s already imperative? The trick I’m thinking of definitely didn’t involve any algorithmic complexity overhead, it was just more of a trick to emulate assignment and imperative statements using pure functions. It was a neat observation at the time, but with the passage of time it starts to seem a little less surprising, though I guess it still is a little since the article asks the question.


Maybe you're thinking of the zipper data structure? [1]

Where you represent the composite structure as hanging from the point you want to mutate, so updating that point is merely dropping the old focus and attaching the updated value in its place.

1 - https://en.m.wikipedia.org/wiki/Zipper_(data_structure)
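
A toy list zipper in JS might look something like this (in a real one, the before/after parts would be linked lists, so the untouched structure is shared rather than copied):

    // A tiny list zipper: the structure "hangs" from the focused element.
    // An update replaces only the focus; the rest is reused as-is.
    const fromList = xs => ({ before: [], focus: xs[0], after: xs.slice(1) });

    const right = z => ({
      before: [z.focus, ...z.before],   // reversed prefix
      focus: z.after[0],
      after: z.after.slice(1),
    });

    const setFocus = (z, value) => ({ ...z, focus: value });  // "mutation" = new focus

    const toList = z => [...z.before.slice().reverse(), z.focus, ...z.after];

    // Move to the 2nd element and "mutate" it without touching the rest:
    toList(setFocus(right(fromList([1, 2, 3])), 99));  // => [1, 99, 3]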


Not quite. By 'random access array' I meant changing an arbitrary array element, anywhere in memory.

Suppose you have array mutation, but the changes are only local.

Then, as TuringTest pointed out, there are efficient ways to make those local changes in a functional programming language without the O(log n) overhead.


There may be a lot of different ways to do this, for example writing a BASIC interpreter in LISP, or a LISP interpreter in BASIC.



