Is it possible to write games like Pac-Man in a functional language? (2008) (dadgum.com)
99 points by gus_leonel on July 1, 2023 | 155 comments



It's probably not what the author is trying to say, but at a higher level a game is just:

new_game_state = simulate_a_frame(old_game_state, user_input, time_delta)

So of course it's possible. Whether it's practical is another question.
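
In Haskell-ish terms, something like this (a minimal sketch with made-up types and stand-in logic, not any real game):

    data Input = MoveLeft | MoveRight | NoInput

    data GameState = GameState { score :: Int } deriving Show

    -- The entire game is a pure function from one state to the next.
    simulateAFrame :: GameState -> Input -> Double -> GameState
    simulateAFrame old _input _dt = old { score = score old + 1 }

    -- The impure shell just feeds inputs and clock ticks into the pure core.
    main :: IO ()
    main = print (foldl step (GameState 0) (replicate 3 NoInput))
      where step st i = simulateAFrame st i (1 / 60)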


As an educational project, I wrote a PDP-8 emulator in Haskell that was structured essentially like that. It was a function from a Machine to a Machine, where Machine was a datatype representing the entire RAM, registers, and other associated state.

This abstraction was almost fully stripped away by the compiler. Since the original Machine object is no longer required when the step function returns from executing an instruction, and since the size of the emulated RAM never changes, the compiler can, for example, represent the list of RAM values as an array, and mutate the array in place and internally just keep track of the pointer to the mutated object and return that. It isn't necessary to copy most of the data structure most of the time.
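
Not the parent's actual code, but the shape described is roughly this (hypothetical fields; the PDP-8's 12-bit words squeezed into Word16):

    import Data.Word (Word16)
    import qualified Data.Sequence as Seq

    data Machine = Machine
      { ram :: Seq.Seq Word16  -- 4K words of memory
      , pc  :: Word16          -- program counter
      , ac  :: Word16          -- accumulator
      }

    -- Executing one instruction is a pure function from Machine to Machine.
    step :: Machine -> Machine
    step m = m { pc = (pc m + 1) `mod` 4096 }  -- fetch/decode/execute elided

    run :: Int -> Machine -> Machine
    run n = foldr (.) id (replicate n step)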


Made a mostly pure Brainfuck interpreter in Haskell where output collects to a list in state & the tape is a zipper. The machine type is Brain. Compute returns a Brain & a bool for whether the program halted or is waiting for input to overwrite the current tape value & resume pure evaluation

minibf keeps itself small and uses a stack of copies of the source to implement []. Whereas the larger version parses the source to a vector of opcodes (which collapses repeated +-<>) & includes a jump table

https://github.com/serprex/bfhs/blob/master/minibf.hs
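
For anyone unfamiliar with the zipper-as-tape trick, the core of it is tiny (a sketch, not the linked code):

    -- The focused cell sits between a reversed left context and the right
    -- side, so moving and writing are O(1) without any mutation.
    data Tape = Tape [Int] Int [Int]

    left, right :: Tape -> Tape
    left  (Tape (l:ls) c rs) = Tape ls l (c:rs)
    left  t                  = t                 -- stay put at the left edge
    right (Tape ls c (r:rs)) = Tape (c:ls) r rs
    right (Tape ls c [])     = Tape (c:ls) 0 []  -- grow the tape on demand

    modify :: (Int -> Int) -> Tape -> Tape
    modify f (Tape ls c rs) = Tape ls (f c) rs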


Quite practical, I imagine: there is no reason to keep all the frames in memory. Keep the last n frames and get most of the benefits. The point of this programming style in a game is to have a few previous states when there is a crash/bug, to figure out how the state ended up in a bad way. Maybe save a frame's worth of state on every keypress for debugging purposes. Then on a crash or bad state there is a debugging environment all ready to go, for the cost of a small amount of memory. Good deal.

Although in practice "purely functional" programming is a bit silly here. The point of PacMan is all the IO which is fundamentally all side effects. There are a lot of practical compromises that would be called "functional", like storing all the major state of the game in a big array (something like [turn1, turn2, turn3, ...]) and then implementing a turnX -> turnX+1 animation function.

The fundamental insight of the functional approach is that implementing a frame->frame function was already something that implicitly had to be done to make the game. If it is an explicit function then debugging it is easier, because the input can be saved. No extra work is needed.


> The point of PacMan is all the IO which is fundamentally all side effects.

In fact, the point of a huge proportion of programming is IO and is "all side effects." It's still possible to adopt a functional style for internal calculations (and in fact this helps to decouple components and makes testing much easier, at the expense of relatively negligible copying overhead, so it's worth doing) but once you get beyond your internal toolkit and start actually trying to test the whole system, you're interacting with the outside world.


Functional programming is not some sort of mythical effect-free beast… All functional languages I know (including Haskell and Clojure) do side effects just fine, and there are plenty of different approaches for performing effectful computation sensibly within FP.


Calling I/O a side effect makes FP seem hopelessly academic. Everything having to do with the world is seemingly a corner case - the core of the programming philosophy is only concerned with the part where we push bits around uselessly.

To be sure, I don't think this is a correct interpretation, but I do think it may play a role in the difficulty of getting people onboard with FP.


> the core of the programming philosophy is only concerned with the part where we push bits around uselessly.

It's the opposite, for FP, OOP, imperative, that is the part of the program which is easiest.

FP is about managing the difficult parts, and so is OOP. They just have different approaches, and note, many mainstream languages are incorporating FP concepts quite readily now.


Mainstream languages incorporating FP concepts don't do so with the intent of encouraging totally pure stateless programs. What they do allow is calling the occasional anonymous function or using Map() instead of ForEach() or for().


Totally pure stateless programs are not the norm in functional programming. E.g. the functional core imperative shell pattern.

It's also not just a syntax difference, mainstream languages are embracing immutability more and more, often as a default.

The point is that calling FP academic is a tired trope, and even if it is far from the dominant paradigm, there are clearly practical lessons to be learned whatever paradigm you program in.


The only time I hear about a mainstream language being immutable by default is from Rust people telling me that Rust is a mainstream language now.

A Haskell program covered in piles of monads to abstract all state away still follows the functional core imperative shell pattern, because the runtime is the imperative shell. This doesn't make such a thing desirable for other languages.

The practical lesson I learned from dabbling with functional programming in academia is that functional programming is often impractical and does not offer any of the purported benefits of better code.


> The only time I hear about a mainstream language being immutable by default is from Rust people telling me that Rust is a mainstream language now.

Most mainstream languages are old; they can't be immutable by default without breaking backwards compatibility. I'm talking about the design direction of languages that are already based on mutability. For example, Java added Records, which are immutable, as well as overhauling the date/time API with an immutable version.

> This doesn't make such a thing desirable for other languages.

I've used this pattern in other languages, none of this is a silver bullet that makes software engineering easy; but I do find it makes some code much easier to test and reuse.


Monads are just one design pattern for handling state, and not the only way. It's no different than having an OOP using dependency injection everywhere - it's a design decision that you use only when it makes architectural sense.

A shell is not more imperative than a processor - both are Turing machines, a completely functional mathematical object. Seeing them as imperative is a semantic interpretation of the people watching it, not an inherent property.


Functional languages have converged on either just allowing mutable values or relying on monads as the ways of handling state. I'm not aware of a third solution.


You can just use the classic temporal logic approach, where state is passed explicitly as a parameter and the new state is returned.

This is what state monads are doing anyway under a hidden syntax, and for simple cases it may be easier than building a full dedicated monad.
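
Side by side, assuming the State monad from the mtl library, the monad is just hiding the same threading:

    import Control.Monad.State (State, get, put, runState)

    -- Explicit threading: take the old state, return the new one.
    tickExplicit :: Int -> Int -> (Int, Int)
    tickExplicit delta counter = (counter, counter + delta)

    -- The same function with the plumbing hidden by the State monad.
    tickMonadic :: Int -> State Int Int
    tickMonadic delta = do
      counter <- get
      put (counter + delta)
      return counter

    -- runState (tickMonadic 1 >> tickMonadic 1) 0 == (1, 2)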


FP is not stateless, it is state-aware. You need to declare which functions handle state, just like in Java you need to declare which functions handle exceptions.


C# including LINQ or Javascript including Promises isn't done with the intent of having all functions handle state and pass around a state of the world parameter (trying to actually do this in Javascript will lead to a lot of boilerplate and ugly corner cases, as one would expect in the language).


Well, that's partly because these languages were never designed to be functional in nature, and partly because the approach you propose is not how FP works. I'm not sure what else you'd expect?


If someone tells me that languages are adopting a functional approach, I would expect to see the tools and methods evolve to allow functional code to be written. What I am seeing is languages adopting specific functional methods that can be used or discarded, rather than encouraging the adoption of the actual style.


Regardless of language, you can structure your program as a pure, functional core and an imperative shell. All of the IO happens at the boundaries, i.e. the shell, but the application itself consists of functional, side-effect-free code.

Some languages make this easier to model than others, and some make doing so in a pure style easier, eliding real-world side effects like RAM access, GPU, etc.

Once you are comfortable with this concept, functional languages make a lot more sense, if you don't get lost in symbol soup.
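
A toy version in Haskell (made-up types, but the split is the point):

    data Model = Model { count :: Int } deriving Show

    -- Functional core: all the logic, no IO anywhere.
    update :: String -> Model -> Model
    update "+" m = m { count = count m + 1 }
    update _   m = m

    -- Imperative shell: read, delegate to the core, print, repeat.
    shell :: Model -> IO ()
    shell m = do
      line <- getLine
      let m' = update line m
      print m'
      shell m'

    main :: IO ()
    main = shell (Model 0)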

Edit: a simple Google search for "functional core imperative shell" brings up many writings on the concept.


Yes, this is the philosophy behind which I structure my programs. Elixir has been great for this model.


A side effect is anything not modeled that can affect the computation.

Input is an obvious example, as it's impossible to predict the value of an input.

Output is less obvious, but there is uncertainty like whether the filesystem is full.

The original Haskell style was to have a World object that was an input to every non-pure function.


> Input is an obvious example, as it's impossible to predict the value of an input.

Do you mean this more abstractly or literally? In the example of PacMan, for every frame, the only possible input is nothing, left, right, up, or down. So you are able to predict that it is going to be one of those five, though not which one.


That is unpredictable, but not a side effect. Side effects are changes that are not declared as part of the function parameters.

If you model the controller input as a function parameter, that's just a state argument, and FP can handle it declaratively just fine without having side effects.


> Do you mean this more abstractly or literally?

>> Input is an obvious example, as it's impossible to predict the value of an input.

> though not which one

Literally.


I’m asking as someone not familiar with functional programming, so trying to learn, not be pedantic, btw!

So because the possible input is not just a single value, it is not predictable. It doesn’t matter if the input was limited to a set of two values, or infinite, it is literally not predictable so therefore we don’t know the future state of the program?


That’s right! You can consider the user input a source of randomness even if it can only take two values, like heads or tails.


It's a bit odd that people often call these side-effects. They're just "effects". Side effects are things you don't want to happen, i.e. accidental/incidental effects... which is exactly what a pure language like Haskell aims to prevent.


That's not how the term is used in programming. Side effects may be intentional.

"Effects" (without the "side") usually means the effect is modeled as a type/value that is returned by the function.


That's not what "side effect" means in this context.

Side effects are the opposite of pure functions. A function with side effects will mutate global state, a function without side effects will not (it will simply return a value).

https://en.m.wikipedia.org/wiki/Side_effect_(computer_scienc...


This is true and correct, but I would add that sometimes in FP books and papers you do see authors merely writing "effects" as well. I think either would be acceptable.


"Effects" in an FP context generally refer to effects explicitly modeled as values in the program, whereas "side effects" are implicit. `Either` is an effect, throwing an exception is a side effect.


> Calling I/O a side effect makes FP seem hopelessly academic.

But "imperative" has such bad mouthfeel that I had to swear it off altogether.


For debugging, it's just as easy to save a copy of each frame state before modifying it, as it is to write in a functional style. Maybe easier.


Only if you exercise a high level of discipline in keeping all your state contained in the same place, having zero implicit state, and writing clonable data representations (i.e. no impredicative references, or some kind of serialization strategy that can deal with them). 99% of programmers do not write code this way when using imperative languages.


You need to do this anyway if you're implementing saved games.

Most modern PC games/engines have a really good instant quicksave system which dumps the entire world state very quickly.


The Java Enterprise style of over-abstraction that everyone hates does this, so plenty of imperative programmers will know of it and probably have implemented it to an extent.


It is indeed practical. I was able to prototype a game engine in pure functional style which, after several development iterations, greatly improved the performance and maintainability of the game. Essentially the game world and physics were represented as several layers of cellular automata interacting with locks and condition checks. The production engine is not pure though, due to various reasons.


This exact point is covered in the series of posts, though? With some thoughts on why this is difficult.

I confess that sometimes things do, in fact, get a ton easier if you expand your abstraction out such that you are abstracting the whole board, and not a bunch of individual pieces. That comes with its own set of roadblocks, though. In particular, game worlds are a lot bigger now than they used to be.


More generally, `new_state, framebuffer = update(old_state, input, time_delta)`

How far away are we from being able to represent the GPU state and rendering instructions with simple datastructures ready to be submitted wholesale to the driver?


I know it's harder, but can we please decouple state updates from rendering?


100%. This relatively small amount of extra work makes it SO much easier to debug both rendering and state update code.


Even better would probably be to run the state update X amount of times per second, not depending on framerate at all. Otherwise you mess with physics.

And then render based on delta and interpolate movements.


I thought that's what I meant by decoupling :) Although using a time delta makes it easier than having to force X times per second.


Yeah my comment was probably supposed to be to a different sibling.

Using a time delta would give different results when doing non-linear stuff. Like applying forces. It's different to apply "force * dt" vs doing "force * 0.5dt" twice. Also if you use a big dt you could end up going through walls or similar. So therefore the physics engine should run at defined ticks in most instances, or it will behave differently based on framerate.

https://gafferongames.com/post/fix_your_timestep/
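
The gist of the linked article, sketched with made-up types:

    dt :: Double
    dt = 1 / 120  -- fixed physics tick

    physics :: Double -> Double -> Double
    physics pos step = pos + 5 * step  -- stand-in for real integration

    -- Carry leftover frame time in an accumulator; the integrator only
    -- ever sees whole ticks of exactly dt, regardless of frame rate.
    advance :: Double -> (Double, Double) -> (Double, Double)
    advance frameTime (acc0, pos0) = go (acc0 + frameTime) pos0
      where
        go acc pos
          | acc >= dt = go (acc - dt) (physics pos dt)
          | otherwise = (acc, pos)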


In all the, admittedly amateur, games I’ve made I’ve been very religious about this. It never even occurred to me that not splitting this was an option.

And I can’t think of any reason why you’d combine them.


GPU could compute state separate from rendering, if such updates are simple and independent enough. (In many cases, they are not.)


That’s pretty much what a command buffer is in any of the modern gpu apis.

Recording it is still command driven but trying to give an entire app scene in one go is really messy. We tried this internally on our gpu before Vulkan.


I’m using this approach for an add-on I wrote for Zwift. Even in such a scenario the functional (state machine style) approach works really well and allows me to deal with all sorts of random inputs or glitches


> Whether it's practical is another question.

I imagine that comes down to how smart the compiler is.


everything is a reduce .. but then the reducer will diverge into a sad hardcoded state machine in lambda form


For a book length (1960's course length) take on this idea, see Conway, Regular Algebra and Finite Machines


I have a feeling I will enjoy it a lot.


A Quake clone written in Haskell [1] suggests that it is quite possible. That's just one of the games listed on [2], along with game engines and libraries.

[1] https://wiki.haskell.org/Frag

[2] https://wiki.haskell.org/Applications_and_libraries/Games


My Katamari Damacy in fairly pure functional OCaml: https://github.com/fccm/OCamlODE/blob/master/examples/katama...


That's wonderful, Katamari is great and this code looks pretty readable. Do you have any screenshots?


How is performance compared to the ordinary implementation? Same, better, worse?


Totally possible. Here’s a link to a game recently released on Steam that is written in Haskell and whose source is open: https://github.com/incoherentsoftware/defect-process

I recently hacked together an asteroids clone in Haskell with SDL2 and not much else. It’s not super pretty but it works.

I’ve talked to folks who’ve been using the newer effects libraries taking advantage of the new delimited continuation primops in GHC 9.6 for their game dev. Even with very high level libraries they're reporting acceptable performance.

With enough dedication I’m certain anyone could make whatever game they wanted in an FP language.


Even without delimited continuations, ReaderT IO-based effects libraries like cleff and effectful are plenty cheap for gamedev!


Or, as the link says:

> I've spent enough time with functional languages to have figured out how to implement non-trivial, interactive applications like video games. My plan is to cover this information in a short series of entries.


Functional core + imperative shell can describe any kind of problem.

The biggest problem with the approach is that mainstream functional languages put most effort in the functional core rather than the imperative shell modeling.

Effect systems (ZIO for Scala, effect-ts) are the only places where fibers allow composing, scheduling and managing "processes" in a functional way.


It is completely possible of course.

I think what questions like this are really getting at is how you update the 'huge-game-state' without mutating it, changing variables.

People have a very ingrained feeling that the game state is updated by updating variables.

But in functional languages, the state is immutable, so if you are new to functional languages you ask 'man how do I update this'.

Then, if this new person takes next step and is told 'you make a copy of the game state', they are like 'man that must be a big performance hit to copy the whole thing'.

So I think questions like this are really about how to handle state with immutable data. How to make a copy efficiently, like in parts, or whatever. And then of course, you are down the rabbit hole of changing your brain's thinking towards functional style.
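
The answer to the 'big copy' worry is that you usually don't copy: an immutable update builds a new value that shares everything unchanged with the old one. A sketch with made-up types:

    data World = World
      { player :: (Int, Int)
      , maze   :: [[Char]]  -- big, but shared untouched between frames
      }

    moveLeft :: World -> World
    moveLeft w = w { player = (x - 1, y) }  -- maze is referenced, not copied
      where (x, y) = player w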


I would really enjoy a class that was nothing but modeling various kinds of computation entirely with recursion. Every time I accomplish something neat and clean with recursion, it feels like I just used a niche superpower.


> I would really enjoy a class that was nothing but modeling various kinds of computation entirely with recursion.

I had the luck of having a whole semester doing just that with various functional and logic languages (Haskell and Prolog) with a great teacher.

You start by learning accumulator parameters and tail recursion, then you build and learn to use foldl and lazy functions; and then you may learn monads as a design pattern to reduce boilerplate for those techniques.
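
That progression looks something like this:

    -- Plain recursion: builds up pending additions on the stack.
    sumPlain :: [Int] -> Int
    sumPlain []     = 0
    sumPlain (x:xs) = x + sumPlain xs

    -- Accumulator parameter: the tail call carries the running total.
    sumAcc :: [Int] -> Int
    sumAcc = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- The same pattern, named: a left fold (in practice foldl', to keep
    -- the accumulator strict).
    sumFold :: [Int] -> Int
    sumFold = foldl (+) 0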


It'd be good to hear some discussion of whether it's advisable and what the advantages and disadvantages are. Often discussions around functional programming are incredibly dogmatic, with it already taken as gospel that functional is superior in every way and purity is next to godliness.


For some reason, your skepticism looks a lot like dogmatism…


Fair enough, my dogma is "understand the problem, then apply the appropriate programming paradigm, methodology and approach in the context of the overall system that you're building." And yeah, I believe pretty hard in this. So when someone tells me "you MUST use [procedural / object oriented / functional / graphical / vegetable / ironic] structures in your code, or it's BAD CODE and you are bad too" I tend to immediately think they're full of crap. This response is raised to the n'th power when they start using terms like "purity" to rate the philosophical 'goodness' of code in any way unrelated to its correctness, performance, simplicity, or ease of writing, verification, or future modification.


'Purity' is advisable for the same reason that 'not using goto', 'hygienic macros' or 'structured programming' are highly recommended (to the point that nowadays they are taught as THE way to program, and nobody would think to ignore them without a good reason).

Sure you can create good programs without them, and the first developers often did; but nowadays a good programmer is expected to understand why those techniques are relevant and what kind of problems appear if you decide to ignore them on purpose.

For complex modern programs, especially on the web, the 'pure functional core with iterative shell' (possibly imperative, but it can also be functional reactive) is an increasingly common architecture that the most popular frameworks are converging to.


Not mentioning any of the complexity in the abstractions needed to ensure purity makes one a dishonest salesman, much like the OO consultants of the past did with Java and the Java frameworks.

The ills of goto and poor scoping were solved with compiler rules and very simple expressions that could be added to any language. The alleged ills of mutation as described by academic groups made up of mathematicians whose jobs very infrequently have to do with creating actually useful software often do not match the reality of coding, nor have they ever been so bad as to demand alternative solutions. The vast majority of languages had no problem eliminating GOTO; that only a small fraction have implemented purity puts it on the same level as hardcore OO, which also is only a small fraction of the market, and about as useful.


Claiming that functional programming is inherently more complex than imperative is also dishonest. Industry developers that criticise it without understanding the value and strengths of referential transparency are no better than your hypothetical academic groups.

All abstractions introduce complexity in terms of indirection and the need to particularise to different instances. Imperative and functional simply happen to have a different approach to where they place complexity and what parts they simplify.

Imperative makes it easy to update state, but then it forces you to keep in your mind every remote part of the program where your symbols may be modified (which is way harder than most developers recognize). Functional requires more work to keep track of state, but on the other hand it allows much better control of program composition and deep complex hierarchical data transformations.

As they say, use the best tool for each job. It makes no sense to deride a key wrench for being more complex than a hammer and being worse at hitting nails.


Imperative programming with state is as simple as functional programming without state. Functional programming that introduces or models state requires more abstraction, and then complexity.

>Imperative makes it easy to update state, but then it forces you to keep in your mind every remote part of the program where your symbols may be modified (which is way harder than most developers recognize)

Scoping inherently reduces the extent to which state is considered. Any imperative programmer worth their salt knows to minimize reliance on global variables - functional programming salesmen claiming a significant advantage by eliminating bugs involving them can't help but come off as more than a little condescending by belaboring this point.

>As they say, use the best tool for each job.

This is a big step down from "All programmers should know and implement purity by default because mutability is the next goto."


> This is a big step down from "All programmers should know and implement purity by default because mutability is the next goto."

Not something I said. I explicitly mentioned the functional core, iterative shell architecture, and explained how you need to understand functional enough to know when it's a good time to not use it.

At this point your posts look like someone making fun of a complex technology they do not understand, without really approaching any valid criticism of its real shortcomings, because such criticism require a thorough understanding of the thing being criticised.

> Functional programming that introduces or models state requires more abstraction, and then complexity.

I'm pretty sure I've said exactly that. Good thing we agree on it. But you don't seem to realize that this fact is only true for handling state, and not for other aspects of programming.

> Imperative programming with state is as simple as functional programming without state.

You've never tried to compose multiple asynchronous event streams of complex data types from different subsystems into one single user-facing unified presentation, have you? Your assertion is simply not true. Such a feat is way simpler in functional reactive style, and a true nightmare in an imperative multithreaded program. That's why web frameworks are evolving to handle more and more reactive functional patterns for compositional asynchronous tasks.


Your comparison of mutable programming to the use of goto and unhygienic macros would suggest that mutation is a similar type of dinosaur to be culled.

>You've never tried to compose multiple asynchronous event streams of complex data types from different subsystems into one single user-facing unified presentation, have you? Your assertion is simply not true. Such feat is way simpler in functional reactive style, and a true nightmare in an imperative multithreaded program. That's why web frameworks are evolving to handle more and more reactive functional patterns for compositional asynchronous tasks.

Imperative programming with state is simple. Programming asychronous event streams of complex data types from different subsystems into a single unified presentation is complex. Functional programming claims to have abstractions that make the latter more simple than imperative programming, which may or may not be true.

The use of functional reactive style in React (I do not believe Angular relies on this) is probably driven by JavaScript having atrocious scoping rules and the general Web / DOM stack being full of hacks and badly written APIs. The hacks that are still present in React make it seem like this will just be yet another layer of cruft that is the front end, alas.


> Your comparison of mutable programming to the use of goto and unhygenic macros would suggest that mutation is a similar type of dinosaur to be culled.

Then you missed the point, which is that using mutation a.k.a. cells a.k.a. imperative code when it's not needed is a dinosaur to be culled. But you need to understand when it's not needed to realize this.

Functional reactive programming adoption is not something caused by the state of one particular stack; as it is also being adopted by game engines, big data analysis tools, development environments like web notebooks, etc. People throughout the industry are seeing the value and are simplifying their tools thanks to the paradigm.

> Functional programming claims to have abstractions that make the latter more simple than imperative programming, which may or may not be true.

Look, I get it, I really do. You don't see the possibilities of the functional paradigm and don't know how and where to use it effectively, so you keep criticizing the small part of it you do understand. I've been there, done that (though it was a long time ago).

With that attitude you can stay comfortably in a corner and be impervious to the changes the industry is undergoing, and miss out on a really cool programming style out of prejudice. Or you can accept the intellectual challenge (which it certainly is, especially for someone who has spent his entire career with a single style), and know concepts that those who learn them agree that they improve their programming even if they do not adopt them 100% for their work.

It's up to you. I don't gain anything with this, so I will not try to convince you of it.


>> Functional programming that introduces or models state requires more abstraction, and then complexity.

> I'm pretty sure I've said exactly that. Good thing we agree on it.

I strongly disagree with the complexity part of this statement. Yes, FP is in many ways "more abstract", but so is natural human thought. More abstraction is not the same as more complexity (except when the abstractions are inappropriate in a given situation), does not lead to more complexity, and the FP approach to state is objectively simpler in most* real-life scenarios.

[*] as always, exceptions do apply.


I don't believe in zero cost abstractions. All abstractions have a cost of being understood and implemented. x = x + 1 is simpler than world = worldupdate(world, increment(x)).


Abstractions exist in our brains. They are costly when they are unfamiliar (requiring learning) or when they are inappropriate (the product of incompetence and causing pain for everyone). Sometimes they are exactly as costly as they should be, reflecting the inherent and essential complexity of the problem, which is ideal. Inappropriate can mean either “too much” or “too little”… but more accurately, it just means “the wrong abstraction”. Making everything a class, or treating everything like a Turing tape, is also creating mountains of almost-certainly inappropriate abstractions, in most (but not all) cases.

This is very handwavy, because I'm speaking in very general terms. But for a single, narrow, non-general, concrete example, just look at the ORM impedance mismatch problem with OOP languages. AFAIK, after 40-some years it's still unsolved. I don't believe FP suffers from this specific problem (again, just to give a concrete and non-handwavy example), because FP is very analogous to relational programming (functions are a special case of relations).

This shouldn’t be so controversial and should be very intuitive to anyone who spends a bit of time thinking about it. A function is fundamentally a simpler building block than a class. It’s more atomic.


I don't believe an ORM is a zero cost abstraction either. Or most of OOP.

The cost of implementing everything as a function is when one has to contend with the fundamental machine architecture not dealing with everything as a function. This often leads to one of the highest cost abstractions out there, the Monad.


There is no reason to use state at all here, let alone the state of the world. In FP you just do:

    x + 1
Moreover, this example is totally meaningless, because it doesn't feature anything that is different about FP and the imperative style at all.


y = x + 1 is not dealing with state. Supposing there was a desire to have x = x + 1, the simple way of doing this is x = x + 1, and the complicated way is to do world = Increment(x, world).


> Imperative programming with state is as simple as functional programming without state.

No it's not, objectively speaking. You're confusing simplicity with familiarity.


Changing a variable is simple. Creating multiple layers of indirection to change a variable in a pure fashion is not as simple. That should be obvious and apparent.


It only appears simple because the layers of complexity to achieve that effect are hidden under the language runtime, and you're exposed to just the top of the iceberg.

If you had to build the whole runtime yourself from first principles, functional behaviour would be way simpler to define.


A functional runtime from top to bottom is simple on a FPGA. It's also pretty useless. A circuit with no memory that does the same thing each time. Pretty good for a light switch, but not the kind of things people want computers for.


Look up structural sharing
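
For example, with Haskell's Data.Map (from the containers library), updating one key rebuilds only the O(log n) path to it, and both versions stay usable:

    import qualified Data.Map as Map

    demo :: (Maybe Int, Maybe Int)
    demo = (Map.lookup "score" v1, Map.lookup "score" v2)
      where
        v1 = Map.fromList [("score", 0), ("lives", 3)]
        v2 = Map.insert "score" 9000 v1  -- v1 is untouched and still
                                         -- shares most nodes with v2

    -- demo == (Just 0, Just 9000)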


If there was one single thing I learned from my functional programming class in college, it’s that there exists a way to implement imperative programming in a pure functional language. I distinctly remember the aha that Turing complete means Turing complete, and functional vs imperative is a style that can be emulated in any Turing complete language. Now for the life of me I can’t remember what the exact trick or mechanism was, does anyone know what I’m talking about? I’m pretty sure it leveraged one of the named functional constructs, similar to a thunk or y-combinator or something. I’m old enough that use of “monad” wasn’t much of a thing at the time.


One of the other comments about continuation passing style seems likely.

Basically, all you do is define a function (fun (arg1, arg...) -> result) and invoke it (the exact same way you define a callback with .map(() => result) in Javascript for example).

You can store the local state of the function you're currently in, into the callback and state can be passed around that way. (Say you are in a JS function where you defined a number or other variable above: you can use that number in any continuation/closure/function and pass it around, to .map() for example or another function.)

I've also heard of recursion as being a way to pass state around in a functional language. If you have a recursive function that takes a num parameter and calls itself with (num + 1), the number can be viewed as mutable state written in a not-so-mutable way.
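
For instance:

    -- The "mutable" counter is just an argument rebound on each self-call.
    countdown :: Int -> [String]
    countdown 0 = ["liftoff"]
    countdown n = show n : countdown (n - 1)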


Continuation-passing style?


I think it’s something like this, yes. I should try to dig up the old assignment, it was in Scheme and I probably have the code buried somewhere.


As I recall, if you have constant-time random access array with mutation then you can emulate that behavior in a functional language using balanced trees, with O(log(n)) time.

This means any functional programming language can implement any imperative programming language with a factor of O(log(n)) overhead.


I guess as soon as you have array mutation, you have assignment and thus it’s already imperative? The trick I’m thinking of definitely didn’t involve any algorithmic complexity overhead, it was just more of a trick to emulate assignment and imperative statements using pure functions. It was a neat observation at the time, but with the passage of time it starts to seem a little less surprising, though I guess it still is a little since the article asks the question.


Maybe you're thinking of the zipper data structure? [1]

Where you represent the composite structure hanging from the point you want to mutate, so updating that point is merely dropping the head and appending the updated value as a new prefix.

1 - https://en.m.wikipedia.org/wiki/Zipper_(data_structure)


Not quite. By 'random access array' I meant changing an arbitrary array element, anywhere in memory.

Suppose you have array mutation, but the changes are only local.

Then as TuringTest pointed out, there are efficient ways to make those local changes in a functional programming language without the log(N) overhead.


There may be a lot of different ways to do this, for example writing a BASIC interpreter in LISP, or a LISP interpreter in BASIC.


Fable [1] (an F# to JS compiler) has a Pac-Man game as one of its samples [2]

[1] https://fable.io/

[2] https://github.com/fable-compiler/repl/blob/master/public/sa...


Hey, it's my time to shine! I rewrote Chips Challenge in Haskell years ago: https://github.com/egonSchiele/chips

It's not better than using C, but worked just fine when I wrote it in 2014. I'd say, unless you're planning to sell it, write the game in whatever language you enjoy.


I like this. He shows some of the basic ideas of functional reactive programming and of entity component systems. And then the thing about large objects not being efficient to update, well it's not efficient in C either, or in databases or in anything really. Understanding how data transforms and the dependencies between data is usually A Good Thing.



Most of the criticism outlined in his follow-up is because of the state of functional programming in 2008 when this was written:

- "Not being able to re-use names for values in functions". It's not a feature of functional languages, and thankfully many these days don't suffer from this limitation (see Erlang vs Elixir for example)

- "The lack of a simple dictionary type". Again, blame the terrible functional ecosystems of 2008.


I'm curious what you mean? How can you re-use a name in a function? I've seen this problem even in Java code.


It’s called variable shadowing and it’s super common these days. The Rust book introduces it in like the 2nd chapter.


Ok. Shadowing has been a thing for a very long time. If you are in languages that use block scoping, yes, you could just build a new block and call it a day. No, that isn't ergonomic, though.


Not all variable shadowing requires block scoping. Like I said about rust, you can literally just redeclare the variable, even one that’s not mutable.

About ergonomics, I think it’s a matter of opinion. To me though it is very ergonomic. As an example, let’s say you get the user’s input, trim the length, and parse it as a number. Without variable shadowing you have 3 variables (like inputResult, inputStr, inputNum) and all those variables are still accessible, potentially by accident, throughout your code.

If you just have one input variable that you redefine, then you don’t have to pollute the scope with variables or think of more names and you won’t accidentally use one of the intermediate ones elsewhere. To me that’s more ergonomic.


This is not shadowing. Or at least not any definition of variable shadowing that I've ever seen.

This is just redeclaring or redefining the variable.

Variable shadowing is allowing a variable of the same name as an existing variable, but within a different scope.

The difference in scope is absolutely critical - because the shadowing happens within some scopes, but not within others - there are still two instances of the variable, and both values are still kept, but which one is visible depends on the scope.

Basically, to quote the def - "At the level of identifiers (names, rather than variables), this is known as name masking. This outer variable is said to be shadowed by the inner variable, while the inner identifier is said to mask the outer identifier."


> This is not shadowing. Or at least not any definition of variable shadowing that I've ever seen.

Then this is just a flavor that you haven't seen. https://doc.rust-lang.org/book/ch03-01-variables-and-mutabil...

> Variable shadowing is allowing a variable of the same name as an existing variable, but within a different scope.

By different scope you mean there's an inner scope and then there's an enclosing scope, where the inner scope has access to the outer scope but not vice versa. This (Rust's shadowing) more or less works and feels the same, but the difference is that it additionally allows variable shadowing to be done within the same scope.

> The difference in scope is absolutely critical

I would agree with you if the way that Rust implements shadowing took away the ability to do what you're talking about, but that remains in place. You just have the extra option of redefining an immutable variable within the same scope as well, so it's purely additive.

Personally I don't think there's anything bad with variable shadowing within the same scope, as at times it can be useful and help avoid mistakes. It's like, there are times when you are fine with the original variable never coming back into scope, and you now have the ability to do that. I don't see any problems or dangers from that, especially if normal closure/block scope variable shadowing remains.


I'm not claiming that rust doesn't have shadowing, just that redefining a variable in a single scope is not really what I would call shadowing.

A shadow is obscuring an object that is still very present, just hidden. That object can be seen again by changing scope.

If there's no way to get back to the original (or the only way back is through a different name), it's not a shadow, just a redefinition. That name doesn't have a shadowed copy, it just has a previous value.

Nothing wrong with redefining a variable either, it's perfectly valid. It's just different than shadowing, in my opinion.


Yes it is shadowing.

Every line of code starts a new scope.


And a redefinition changes the lexical scope of a variable in a way that shadowing does not...

Scope is not about lines, it's about visibility.


Reading up on how Rust does it some. The first search result I found is https://maxuuell.com/blog/how-not-to-shadow-variables, and it amuses me to see that you can just redeclare variables in the same scope. That feels like a footgun in itself. The rest of the footguns listed there seem about right.

Is that how other languages are starting to do this?

And agreed that not polluting the scope is good. That is why mutating the variable was the preferred way for a lot of folks.


The follow up is definitely worth reading. In particular, the section where the author notes, "Everything started to flow much more nicely when I went down the road of comical inefficiency." Computers are fast enough today that much of this inefficiency is borderline invisible to the vast majority of us. Always fun to read from perspectives that this is not a true statement.


In Conrad Barski's book, Land of Lisp, you are taught to do so. Check out the website: http://landoflisp.com


I loved this book! It's about much simpler, generally turn based games though


Yeah but to be fair Pac-Man is like a turn based game where the turns are taken 120 times a second (or whatever the frame rate is)


In university I had a mind bending experience writing snake in purely functional javascript, I was so proud of it and showed my friends and the reaction was “oh god”

I can’t say they were wrong


Just to pitch in with a ClojureScript/Quil example, where the game loop takes a function that takes game state, modifies it, and returns it for the next iteration of the game loop.

https://github.com/opyate/minild72/blob/master/src/pong/core...


I’ve heard Edwin Brady, who made Idris (and Whitespace…) use the term “pac-man complete” when discussing languages, particularly functional ones.


I've tried this a lot, and I will say no, it's not realistically possible if you're being a purist.

The simple counterexample, take setpixel, which takes an image buffer, and returns an image buffer with one pixel changed. Now imagine having a large bitmap and using setpixel to draw a sprite to it. For every non-transparent pixel in the sprite you need a whole copy of the whole buffer. Obvs there are lots of improvements and cleverness but compared to the speed and simplicity of an imperative code just modifying an image buffer in-place it's vastly slower, more complicated or both.

However, you can write "most" of your game in a functional way if you're careful about it, and use tricks like copy-on-write to make some things act functional. Writing your game with a functional mindset, and keeping data immutable when you can, has a lot of benefits.
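
One standard escape hatch in Haskell (a sketch, not the only option) is to batch all the writes for a sprite inside ST, where the mutation is locally scoped and unobservable from outside, and expose a pure result:

    import Data.Array.ST (newArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)
    import Data.Word (Word8)

    -- Draw a whole sprite's worth of pixels with in-place writes,
    -- then freeze the buffer into an immutable array.
    blit :: Int -> [((Int, Int), Word8)] -> UArray (Int, Int) Word8
    blit size pixels = runSTUArray $ do
      buf <- newArray ((0, 0), (size - 1, size - 1)) 0
      mapM_ (\(xy, c) -> writeArray buf xy c) pixels
      return buf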


We might be getting into semantics, but if you're willing to agree that IO monad maintains functionality, then there's no reason a Bitmap monad couldn't do the same for pixel writing. In principle and practice, all effects can be pushed to the outside of all applications.


Not every FP language is Haskell.


The benefits of FP are typically described as "the more functional your program is, the better", and Haskell is the most functional Turing complete language you can get.


Which assumes only Haskell defines what FP is all about.


Most other functional languages don't enforce immutability to the degree Haskell does. So if you are trying to show off how pure and side effect free your code is...


They didn't suddenly stop being functional programming languages.


Why doesn’t a state monad work?


There's Monadius, a clone of Gradius written in Haskell: https://github.com/tanakh/monadius



I remember making Flappy Bird in Haskell with FRP. It was a very different but pleasant experience.


I think the most fun and beginner-approachable way to build games with functional programming is with Elm [1].

See a few (small, demo) games built by the community in [2] .

Notice Elm has abandoned the FRP approach in favor of Model-View-Update [3].

[1] https://elm-lang.org/

[2] https://github.com/rofrol/elm-games

[3] https://elm-lang.org/news/farewell-to-frp


TIL that a 6502 didn’t have division and remainder instructions. Yikes.


None of the 8-bit CPUs had that, but you can get pretty far with left- and right-shift (which is the same as multiplication and division by 2), which allows you to compose multiplication and division by constants quite easily (it's a bit more trouble and usually too slow for a universal runtime mul/div, but one simply tried to avoid that, or used precomputed lookup tables instead).
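
E.g. multiplying by 10 the shift-and-add way (illustrated in Haskell rather than 6502 assembly):

    import Data.Bits (shiftL)

    -- 10 = 8 + 2, so: shift left by 3, shift left by 1, add.
    mul10 :: Int -> Int
    mul10 x = (x `shiftL` 3) + (x `shiftL` 1)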


Not division, but the 6809 8-bit processor did have a MUL(tiply) instruction.


I stand corrected and TIL :) That's pretty awesome actually, pity that the 6809 wasn't more popular in the home computer era (but looking at the price compared to the Z80 or even 6502 it's understandable - $37 versus $9 for Z80 and $6 for the 6502 in 1980 according to Wikipedia: https://en.wikipedia.org/wiki/Motorola_6809).


I worked in embedded systems design through this era. I don't remember ever hearing of a design using one 8-bit cpu vs another due to chip cost. Those Wikipedia numbers also seem suspect.


i wasn't aware of the price difference, and at the time i owned a 6809-based system - a Dragon 32 (a sort of UK re-branded version of the Tandy CoCo). i wasn't fussed about the MUL instruction, but i simply loved all of the addressing modes that thing had.


Game consoles like the SNES (65816) had a coprocessor for multiplication, division, etc. You load the values into a certain address, wait a couple cycles, and retrieve the result.

Not pretty, but those CPUs were designed to be really cheap.


I learned a lot from Chuck Moore's designs; for instance he assumes that you know how to build multiply given a multiplication-step instruction, or the full set of logic ops given just AND and XOR (which comes on top of the traditional Forth punning of logical and bitwise booleans).

> Before I studied the art, a binding was just a binding.

After I learned the art, an assignment was an assignment but an application was an application.

Now that I've understood the art, a binding is just a binding.


Zero page addressing was far more inconvenient than calling a library for division on the 6502

16 bit addresses didn’t fit into registers, so you kept pointers in the zero page (bytes 0-255). There was no 16 bit increment either, so to increment a pointer you incremented the low byte and, on overflow, the high byte


It is quite rare to need an actual division instruction; for the majority of cases you would simply precompute the modular inverse and use that with multiplication.

Non-constant modulo can often be computed by checking for roll-over when incrementing (or just computed in a similar way to division)


There are still modern CPUs that don't. Division/remainder is much much harder than all the other basic arithmetic operations.


Part of me questions the old view that functional programming isn’t performant.

If you think about it, 1.) most performance comes down to memory access. 2.) It is widely known that accessing memory on the stack is faster than allocating memory on the heap. 3.) calling a function pushes everything onto the stack, sets up a new stack frame, and when it's done releases everything from the stack.

So my thinking is if by definition functional programming is about organizing your program as a pure composition of functions, doesn’t that also imply an ideal memory access pattern? Can someone tell me where I'm wrong?

I guess the one issue where this comes into doubt is with strings of unknown length, and other data types that must be heap allocated, but even in those cases it seems that problem would equally affect all other paradigms.


Then there's memory locality, and shared memory. If you keep bitbanging the same area, your processor might be very happy. FP will create temporary structures to derive the next computation and it may cost a bit much. I say may because I have no experience in this, just a semi-educated guess. Our processors are well made to hammer the same instructions on the same memory locations.

ps: of course FP shields you from having to manage correct state update seen in imperative programming, which might require a lot, a lot, of reads and writes to ensure things are not messed up.


I believe a lot of this boils down to so many data structures not being kind to state updates. And most ways of making that work out fine are by adding more data and indirection. Even the ways of making things a bit more "performant" typically do so by adding more data to the mix, not less.

This is even harder if you work with data structures like threaded trees, where there are so many links that even the zipper approach basically falls flat. (And the zipper already has performance questions if you do a deep edit, if I am not mistaken. What could be a single value mutation is now a rewiring of every link down to the value that you wanted to change.)


> 3.) calling a function pushes everything onto the stack

This is a very C-centric view of it.

In FP, sometimes they're functions, sometimes they're closures. Sometimes they're fully applied and sometimes they're partially applied.

You're probably on the heap, not the stack for two reasons: tail recursion, and variables very often outliving the return.


It is both possible and relatively easy. I wrote a game similar to Pac-Man using Elixir, only I used characters from Firefly instead. I used it in several job interviews to prove I can code.


A game as tiny as pacman is trivial to make run fast enough even with the most naive functional style like rebuilding the entire game state every frame.


Should add (2008) to the title.


Yeah, because asking the question in 2023 would make you sound ridiculous.


That's not why we add dates to the titles.


It’s one possible reason of many. The usefulness of context depends on context.


… would it be presumptive of me to think that there are no games which could not be written in a functional language?

I mean like… what’s stopping you?


I mean... I wrote OOP tetris in PowerShell, so I'd say yes? github.com/aask42/particleTetris


I want to know if it can be done such that it performs just as well as the original on the original hardware.


Awesome write up. Great seeing a talk on functional programming with more technical details and examples.


Pretty sure I remember an FRP pacman being written in response to those posts, way back in 2008.


I enjoyed this article. Is there source code to accompany it?



