> There’s nothing wrong with imperative code confined to the scope of a function
That is a fine point which I rarely see discussed in connection with FP. (Pure) FP is commonly understood (?) to mean that there can be no mutable data. You can "bind" a value to a variable, but only once. Right?
But within a function you can have local variables. I don't see why it would be bad to assign multiple different values to the same local variable, which only exists during the execution of the function. The variable only exists in the "stack" not in the "heap" and is gone as soon as the function returns.
So if I run a while-loop and repeatedly assign a new value to the local variable "i" for instance, would that be unacceptable according to "FP"? Why? I can't see any ill effects from modifying a variable whose value can not "linger" past the execution of the function.
What am I missing? Why is it (supposed to be) bad to rebind any number of different values to a local variable?
I don’t know about good or bad, but I have a practical example of where rebinding variables necessitates rethinking the language a little. Erlang uses pattern matching for all variable assignments: unbound variables get assigned, while already-bound variables are used for pattern matching, with an error raised if the match fails. In that particular context, allowing a variable to be rebound changes the language. Elixir behaves like Erlang but supports rebinding, so variables you wish to match against (rather than rebind) need to be annotated with a "pin" character.
I prefer the Erlang way: I find it solves the problem fairly nicely, with the only downside being that I can’t rebind variables, which in turn encourages me to write smaller functions. The language also uses fewer tokens, which is nice if you ever want to parse it for some reason.
Pure functional programming is really about tracking and controlling effects by making them first-class, not about avoiding them altogether. This has numerous benefits for reasoning, correctness and optimisation. Haskell allows mutable local variables via 'ST' references, which can be used to build functions that are observably pure from the outside. The Haskell type checker will guarantee that no mutable state escapes. This is true "state encapsulation", and it is completely acceptable in pure FP.
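For the curious, a minimal sketch of what that looks like (the function name is mine, just for illustration): the body mutates a local reference, but the type is that of an ordinary pure function, and runST is what seals the state in.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Observably pure from the outside: same list in, same Int out.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- local mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- imperative-style loop
  readSTRef acc                            -- the value gets out; the STRef cannot
```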
So mutable state is not bad, as long as it is "encapsulated"?
I thought (from what I've read so far) that "no mutable state" is the definition of "pure FP".
So should the definition of pure FP instead be "no un-encapsulated mutable state", i.e. that mutable state must always be "encapsulated"? Where can I read more on this? Thanks
It’s not that it’s good or bad - mutation can only happen in a monad (IO or ST or similar). The monad is the encapsulation and that encapsulation is enforced by the type checker.
It’s actually quite liberating not having to worry about what’s “right or wrong” and instead just doing whatever you like that compiles.
I'm looking at it not from the perspective of "which language is the best?" but from this point of view: I'm working in a specific language now; what are the principles of "Pure FP" that I could benefit from when using that language?
How should I "encapsulate state" in JavaScript for instance?
If the principles of FP are great, surely they must be applicable in any Turing-complete programming language. The (Haskell etc.) compiler is no doubt a great tool, great at type-checking. But I see using any specific compiler or type-checker as a tooling issue, not a foundational insight about FP.
Ah yes, indeed there are benefits that come from programming in Haskell that can be applied to JavaScript, but I think that without the compiler it’s very hard to guarantee success.
Somewhere down towards the transistors, the change has to happen. The registers of the machine are explicitly designed to be mutable. I don't know how functional languages are compiled, and maybe somebody who does can chime in, but I would not be surprised if the IR were already imperative. So it may be easy to give some access to that in a language.
I'd say "pure" is if the calculation can be done with a function without side effects and it is a deliberate choice to restrict a language to (almost) only that. No side effects at all would mean no interaction with "the outside world" and no access to input data though.
Haskell compiles down to C-- (Cmm) first, which is basically an even less expressive version of C, so you are right there.
But if you go low enough, FP concepts pop up again: out-of-order execution is pretty functional, and to a degree so are vector instructions. And this is no surprise; Turing machines and lambda calculus (and recursive functions and anything Turing-complete) are equivalent. If you think about it, a Turing machine is just as side-effect-free in itself; side effects are special to real computers.
Deep down it must be, as the data is conveyed past fixed compute engines (which can be seen as pure functions), right? FPGA programming is pretty much functional as well; RAM is capturing the state there.
We can see FP as a tool for us to express a functional thought so that the computer can understand it and translate it into something the computer itself can run.
Purity is a poor word. It refers to referential transparency: if I give you an input to your foo function, “A”, and I get an output, “a”, then I can always substitute foo “A” for ”a” in my program and it will not change the meaning.
It’s poor because it invites other meanings: some people think purity means immutability for example; but that’s only part of it.
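A tiny Haskell sketch of that substitution property (toy definitions, just for illustration):

```haskell
import Data.Char (toLower)

-- foo is referentially transparent: foo "A" always means "a".
foo :: String -> String
foo = map toLower

-- So these two definitions have exactly the same meaning:
direct, substituted :: String
direct      = foo "A" ++ foo "A"
substituted = "a" ++ "a"
```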
There is no definition of what FP is that is agreed upon by everyone, but the key is the word "functional" in FP.
This comes from a mathematical function, which is a mapping of input values to output values.
You could write a definition of a function f(x) to be such that the value of f(x) is the result of the execution of some imperative-looking code (including loops that mutate variables).
Mathematicians would be satisfied with such a definition and would confirm that f(x) is a function just as valid as any other.
And so functional programmers should also accept such a function. In fact, Haskell has specific syntactic support ("do" notation) to allow you to write functions using imperative-looking code, including mutation (via the ST monad, or other State monads).
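For example (a sketch using the State monad from the mtl package): the body reads like a loop with an updatable accumulator, yet sumTo is a mathematical function in the above sense.

```haskell
import Control.Monad.State (State, evalState, get, put)

-- Imperative-looking, but still a function: same n, same result.
sumTo :: Int -> Int
sumTo n = evalState (go 1) 0
  where
    go :: Int -> State Int Int
    go i
      | i > n     = get          -- return the accumulator
      | otherwise = do
          acc <- get             -- read the "mutable" accumulator
          put (acc + i)          -- acc := acc + i
          go (i + 1)             -- next loop iteration
```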
A "function" is a "function" if it always returns the same result for the same arguments, right?
Now I'm trying to wrap my head around this: how can I call a function which takes no arguments but which returns a value the user entered on the keyboard? How can I write a function which returns the current mouse position on the screen, while still abiding by the functional principle of same result for same arguments, always?
Can it be done? If not can we still call it "functional"?
Well, in a non-side-effecting world these could still be functions of the state of the computer: mousePos(initialState) could, for example, be used to get the mouse position in a given state. (Functional reactive programming may also be interesting for you.)
Another approach would be to divide the mutating, side-effecting world from the pure one by introducing a wrapper around the former. Let’s call that wrapper IO and create a few helper functions that can sequence these IO wrappers one after the other. The important part is that such a wrapper can provide a value, but that value can only ever be read/seen by the next element in the sequence. Any mutating program can then be described as such a sequence and executed at the end, while the program you wrote stays perfectly pure. This is basically what a monad is, and a Haskell function for reading the mouse position would look like mousePos() -> IO(Pos(int, int)) (in some hybrid type notation)
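In real Haskell notation, here is a sketch of that "value visible only to the next element" sequencing (getLine and putStrLn are real; greet is my name for the example):

```haskell
-- getLine :: IO String is a description of an effect, not a
-- function from () to String. Its result can only be seen by
-- the next element in the sequence:
greet :: IO ()
greet =
  getLine >>= \name ->           -- run getLine, hand its value on
  putStrLn ("Hello, " ++ name)   -- the next (and last) element
```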
> A "function" is a "function" if it always returns the same result for the same arguments, right?
Yes. This matches the definition of a mathematical function that I gave above.
> how can I call a function which takes no arguments but which returns a value the user entered on the keyboard?
You can't. If your language allows you to write such a function then I would argue you are no longer doing FP (at least in this part of the code).
I believe that this is why many people like languages like OCaml, F#, and Scala. These are hybrid functional-OOP: you can use FP for the parts of the program that it's suitable for, and OOP/mutable-imperative for the other parts ("the best of both worlds").
The purist FP programmers might claim that reading input from a keyboard is "uninteresting" and that programming is really about logic, algorithms, and computation, and therefore FP is enough for them (...in their ivory tower).
I personally am a fan of Haskell, but I don't like calling it FP, because it does allow you to write "code" that reads input from the keyboard. Haskellers still claim to be doing pure FP by using a loophole and saying that "getLine" is not a "function" but rather an "IO action". But to me it doesn't matter what you call it: you are still effectively allowed to write code that does I/O and mutates external state, whether you call it a "function" or something else.
Of course the big advantage that Haskellers claim is that the language enforces the separation of effectful code from "pure" functions. In the other hybrid FP languages it is possible (and happens often) that you have a deep call chain of "pure" functions and then realize you need some small side effect somewhere, so you just add it. Haskell does not allow this at the language level and forces you to refactor your code. In my opinion this does lead to cleaner architecture and more maintainable code in the long term.
But to me Haskell is really interesting for a different reason. I don't like to call it an FP language because, like I said, a lot of the code you write isn't really "functional". A better name would be "mathematically oriented language". And by math I mean that the code is built using logical systems with mathematical rules that can be reasoned about.
Going back to I/O (reading input from the keyboard): what the Haskell people realized is that you can actually capture this behavior of user input in a mathematical model. This unlocks a type of thinking that allows you as the developer to go beyond telling the computer: "do this, then do this, then do this, if this happens then do this, ...".
An example: the mouse position on the screen. There are Haskell libraries for FRP (functional reactive programming) that model the mouse position as a "signal function" (a well-defined term they coined): a 2D coordinate that changes over time. You can then combine this signal function with others to create behavior such as an image that follows the mouse. You can build complete GUI applications using this approach, and the code looks nothing like imperative programming and more like an electric circuit of components connected together.
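A flavour of it, sketched in the style of the Yampa library (SF and arr are Yampa's; treating the mouse position as an input signal supplied by the backend is my assumption here):

```haskell
import FRP.Yampa (SF, arr)

type Pos = (Double, Double)

-- "Image follows the mouse" as a pure transformation of a signal:
-- whatever the mouse position is over time, the image's position
-- is that plus a fixed offset. No polling, no callbacks.
followMouse :: SF Pos Pos
followMouse = arr (\(x, y) -> (x + 10, y + 10))
```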
And that is just one example. The Haskell community has also come up with mathematical models for streams (think Unix pipes) that allow you to write streaming code that is much more powerful and cleaner than what you see in other languages.
So my summary: Yes, functional programming is limited, but if we go one level deeper, we could say that the essence of FP is really mathematical thinking. And if we embrace that and run with it then we can open up entirely new possibilities.
> you realize you need some small side effect somewhere, so you just add it. Haskell does not allow this at the language level and forces you to refactor your code. In my opinion this does lead to cleaner architecture and more maintainable code in the long term.
I can see how it leads to more maintainable code, but there is a cost associated with that. In a non-Haskell language I can just make a small change to a single "function" to add the side effect and be done with it. In Haskell I need to redesign my whole function pipeline somehow.
That is a good example. Is there some example code you know of that would show how in fact you take a function pipeline of say 5 Haskell functions and then transform it so that one of the functions "simulates" some side-effect?
A common case is when you need to add logging to a function.
There are 2 ways:
1. You cheat and use unsafePerformIO. This is discouraged and I won't go into the details.
2. You convert your inner function from being a "pure" function to being a monadic function over some abstract MonadLogger m. Then you can still call your function in a "pure" context by using a Writer monad for your MonadLogger, but you can also call it from the IO monad and get "better" logging (with real-time output and timing info).
Note that you still do need to convert your entire call chain from being pure to use "MonadLogger".
But I would argue that even in non-Haskell languages you should also do this (meaning you should thread a "Logger" object through all functions). Why? Let's say you don't, and now you want to do logging from some inner function. How do you configure the logging? To which file do you save it? What verbosity level? You will need to use a global/Singleton Logger object. But what if you want to call your function from two different places in your code and use a different verbosity setting for each? You can't. So I argue you are always just better off doing the refactor in all languages. The fact that Haskell forces this means that developers don't have the choice and are forced to do the right thing.
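To make option 2 concrete, here's a minimal sketch with a hand-rolled Log class standing in for MonadLogger (the real monad-logger package is richer, but the shape is the same):

```haskell
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.Writer (Writer, runWriter, tell)

class Monad m => Log m where
  logMsg :: String -> m ()

instance Log (Writer [String]) where
  logMsg s = tell [s]       -- "pure" logging: collect the lines

instance Log IO where
  logMsg = putStrLn         -- "real" logging: print immediately

-- The business logic is written once, against the abstraction:
step :: Log m => Int -> m Int
step x = do
  logMsg ("step got " ++ show x)
  pure (x * 2)

-- runWriter (step 21)  ==  (42, ["step got 21"])   -- pure context
-- step 21 :: IO Int        -- prints the line, returns 42
```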
> I thought (from what I've read so far) that "no mutable state" is the definition of "pure FP".
It's much more nuanced than that: certainly arbitrary, unchecked mutable state is out, but there are ways of achieving mutable state whilst maintaining referential transparency.
Yes, but the point is that you can fully isolate any part of the program from having side effects. I.e., the caller is responsible for executing its callee's side effects.
Right. Therefore it would seem that it is OK to mutate local variables inside a function any way you want, because that cannot have side effects outside of the function. Right?
This is why it's common for languages to be multi-paradigm. For instance, in OCaml, it's perfectly fine to use while loops and mutable variables, and more generally mutable data structures (typically records with mutable fields, or arrays).
It's mostly a matter of preference. I think experienced programmers will favour the pure approach unless there's a good reason to do otherwise, and know well how to solve common problems without relying on mutable structures (typically, it's very rare that you need a while loop in OCaml). But sometimes it makes more sense, and is less verbose, to use mutable data structures.
> This is why it's common for languages to be multi-paradigm
I can agree with that. Yet I somehow perceive an opinion that it is not FP unless it is Pure FP (?). Like even a small amount of impurity can poison the whole well.
> Erlang […] whose internal functions are 'pure'.
Why is that a good thing? You can easily reason about mutable code in a small, local scope; the really hard part will be the interaction of all those messages. Sure, Erlang has plenty of safeguards for those as well, but I don’t see what would be lost if local mutability were allowed.
I think it's also true that in a small local scope it is easy to write a function that takes the old state as an argument and returns the new state as its result. I think that in some sense makes it easier to reason about the "evolution" of the actor.
I'm not sure I can explain why I think it would be simpler that way, but perhaps it is that conceptually you can then "package" the logic of how the actor-state changes over time into a single (local) function.
You want to understand and describe how the actor-state evolves over time? Read and understand the logic of this single function, whose result depends only on its arguments. Note that such a function does not have any internal state itself; it calculates the new state for the actor, always the same way, based on the arguments the system gives it.
I think that's the key to understandability here: the programmer does not need to know and keep track of the state, where it is stored, and how to modify it. The system/runtime gives the current state to them as an argument. It is not hidden in some variable who knows where; it comes as an argument to a function local to the actor. It is part of the definition of that actor.
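Something like this, I imagine (a Haskell sketch; Msg and St are made-up types):

```haskell
data Msg = Incr | Reset

type St = Int

-- The whole evolution of the actor's state is one pure function
-- of the old state and the incoming message:
stepActor :: St -> Msg -> St
stepActor s Incr  = s + 1
stepActor _ Reset = 0

-- Conceptually, the runtime just folds it over the mailbox:
run :: St -> [Msg] -> St
run = foldl stepActor
```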
This is how Erlang works, right? Is there a specific name they use for such a state-evolution-function?
Joe motivated this by the 'let it crash' philosophy.
Processes are supposed to crash when something that isn't supposed to happen happens. Inside OTP, this presumes that the reason for the crash was an invalid message, as processes should be black boxes that only communicate via messages (Alan Kay's definition of OOP).
Statefulness inside functions works against this goal. AFAIK, Smalltalk had similar idioms of returning new copies of data to be sent to objects.
In all other languages, I agree with you. I can read and understand 100 lines of someones else's Go code faster than 20 lines of my own Clojure. I write Clojure for a living.
I think some people who discovered functional programming after "pure" OOP, where everything is an object, fell in love with it and got a bit too high on their own supply. But I don't think it's the majority.
Also, functional programming is harder than pure imperative or imperative+objects imho, which helps gatekeeping, and it is more impressive (again, only my opinion). There are also a lot of good ideas that came from / were invented by / were introduced with FP and are now integrated into older languages. This is a perfect setup for cult-like groups to appear, even if the person they revere isn't as "into it" as the group generally is. Imho it's mostly young devs who are/were pretty good for their age group (and thus have a bit of an ego[0]), a bit noisy, and who will grow up and be more welcoming in the future.
[0] which is understandable and reasonable, this is absolutely not a shot; I was one of you (a lisp/CLOS fanatic instead of an FP fanatic, but still) until I worked with people 100 times better than me. The "grow up" is a small shot though, but don't take it too seriously :)
I am suspicious of claims that procedural, imperative programming is “easier” than functional programming: in terms of how quickly one can learn it, how accurately one can predict what a program does by reading it, and how accurately one can make a modification. Assuming two developers both of equal experience in their preferred paradigms and languages I expect the functional programmer will score higher in these cases.
The cost of functional programming is that it does take some training to write programs in this style that many people are not familiar with. And I think it’s that lack of familiarity that people are referring to when they say that it is more “difficult.”
Personally I can’t disagree that it is harder to learn if you are already familiar with procedural programming. It took me quite a few months of dedicated work to get productive with Haskell. It was painful, as I recall, because I had to throw out all of the accumulated experience I had with a handful of other languages.
Now that I am productive with it though I don’t find it any more “difficult” than programming in C or JavaScript or Python. In fact I find it easier: I can rely on the type system and reason about my programs and how they compose instead of stepping through their execution. Functional programming demands that I debug my thinking rather than my program most of the time. The former takes less time in my experience and becomes less frequent the more experienced you become.
Functional programming is routinely taught to complete beginners. Languages like Scheme or (core) ML are very simple and you can do a lot with surprisingly few constructs.
However, Haskell is in a different league in my opinion. If you want to do anything meaningful, you need monads which add some layers of complexity.
I consider myself a very experienced OCaml programmer, and I've been working casually for more than a year with Haskell, but I'm still much slower in Haskell. It still happens to me that I need one hour to figure out how to use some monad transformers, or specific API, to do something that would take me 10 minutes in another language. Well, probably I still need more practice, but the learning curve is certainly much higher for Haskell than Python.
Sure the learning curve demands more but the payoff is worth it. Monads, despite all the tutorials out there, are not that difficult a concept... once you learn it. It's frustrating to teach because of this. Like most things in mathematics it takes some effort to learn and pays off in spades. There are a ton of abstractions you get with learning monads and the rules of how they compose are consistent. It goes a long, long way. I'm better for having gotten through it.
Most abstractions in other languages though? Ad-hoc mish-mash of half-baked ideas with no laws and poor specifications. Each one you learn is absolutely useless when moving to the next program.
Monads are the same everywhere you go. Just like algebra.
> That is a fine point which I rarely see discussed in connection with FP. (Pure) FP is commonly understood (?) to mean that there can be no mutable data.
Martin Odersky was discussing this recently: how they found examples of overly complicated "functional programming" in their codebase (erroneous, and hard to understand) where an imperative function would be way easier, more performant, and correct.
The takeaway? The imperative version was still pure. Mutability was limited to the scope of the function.
It might be "overly complicated" in Scala, but it works very well in Haskell and allows effects to be controlled, checked and reasoned about. In fact, monads were originally proposed to give formal semantics to imperative programming. Scala is really an object-oriented language, first and foremost, so please do not judge functional programming based on how it looks in Scala.
The point of my comment was not to diss FP, but to explain that imperative code with some local mutability is not necessarily in direct opposition to pure FP.
I have never tried Haskell, so I cannot really hold any truly fair opinion there, but I find Odersky's "values" or approach to programming language design much more in line with my own. Scala 3 seems like a very well designed language (pragmatic, expressive, conceptually solid).
`ST` isn't `State` (though the interface is very similar); it's a restricted variant of `IO`. An `STRef` is an actual mutable reference, backed by actual memory.
> I find Odersky's "values" or approach to programming language design much more in line with my own.
Odersky's "values" primarily appear to be whatever will appease the masses, which is why Scala started with curly braces and OOP. So much of his intellectual capital has been spent on trying to combine objects, subtyping and FP, that I can't help but feel it could have been better spent elsewhere. Folks are moving on from OOP. Scala is in danger of building what people once wanted, but don't want any more. IMHO there are better ways of bringing first-class modules to FP (e.g. see the 1ML paper).
OOP is absolutely a must-have for plenty of code bases though. How would you represent, for example, some other system’s entity that holds state, let’s say the connection between your video card and your monitor as known by the Linux subsystem?
"Using this definition, Rust is object-oriented: structs and enums have data, and impl blocks provide methods on structs and enums. Even though structs and enums with methods aren’t called objects, they provide the same functionality, according to the Gang of Four’s definition of objects."
Haskell also has traits (type classes) and "methods" on them; I don't think anyone would seriously claim it's object-oriented. The Rust website is clearly just a piece of marketing. Most languages that are not "OO first" will likely provide alternative mechanisms to encode OO if you want it.
You think the likes of GTK and Qt give a better UI than VS Code? I'll tell you why there are no mature bindings in Rust for these toolkits and it has nothing to do with OOP. It's because there's no demand for it.
Qt? Definitely. But I have also seen some very great GTK apps. Electron apps always feel “non-native” and laggy. Honestly, try out the Telegram Qt app; it is lightning fast compared to any Electron app.
> Odersky's "values" primarily appear to be whatever will appease the masses, which is why Scala started with curly braces and OOP.
I would agree that Odersky wants Scala to be popular and wants to accommodate even programming novices. For example, Python (and to a lesser extent Haskell and F#) proved significant whitespace to be popular, so Scala 3 now has it. At the height of XML's popularity, on Wadler's suggestion, XML literals were added to Scala. Today, AFAIK, XML-like literals are popular in many frontend/browser frameworks and languages (although XML literals have been dropped from Scala 3).
But that doesn't mean that Scala design is unprincipled. On the contrary, there are 3 pillars that make Scala what it is, all working together in a unified and coherent manner:
* Functional programming (immutability, ADTs, higher-order functions)
* Modularity (objects, interfaces, etc, sometimes called OOP)
* Meta-programming (Scala 3 features a powerful macro system, AFAIK inspired by MetaOCaml)
> Folks are moving on from OOP. Scala is in danger of building what people once wanted, but don't want any more.
More and more languages are becoming more and more like Scala:
* Rich type system instead of unityped
* type inference vs having to manually annotate everything
* higher order functions instead of not having them
* preferring immutability instead of mutation everywhere
* ADTs, pattern matching instead of "OOP style" modelling
* module system (sometimes also called OOP) instead of non-modularity
* strict instead of lazy
* ...
It's interesting to see such convergence. New languages are more like Scala now than before. And already-existing languages, even though they started out more different, are getting closer to Scala by adding more features. That is to say, Scala is not magical or prophetic, not even original (most, if not all, individual features were done before in other languages); it's just ahead of the others in combining these features.
In the meantime, I will continue enjoying Scala that allows me to do Pure FP in a modular fashion while getting paid for it because of its huge (relatively speaking) job market :)
> A quick research suggest that 1ML can't model objects and classes (in the OOP lingo), as per slide 6.
But what does he mean by "Objects" and "Classes"? Everything good about objects that a pure functional programmer would want is captured by first-class modules. The internal mutable state and subtyping/inheritance relationships are much less useful. In fact, even OOP thought leaders are advising not to use them.
> In the meantime, I will continue enjoying Scala that allows me to do Pure FP in a modular fashion while getting paid for it because of its huge (relatively speaking) job market :)
That's a very good reason to use Scala! But please do take a good look at Haskell too, if only just for interest. Many leading Scala programmers ended up moving to Haskell, in order to get deeper into FP.
> But what does he mean by "Objects" and "Classes"?
To be honest, I'm not sure.
> Everything good about objects that a pure functional programmer would want is captured by first-class modules.
I just know that in the talk I linked above, Odersky mentions 1ML, which converts modules into System F (an extended lambda calculus, but you probably already know that), and that he feels this attempt is force-fitting, where some (important?) things get lost in the conversion.
From 2:56 to 4:55
> The internal mutable state
That is very strongly discouraged in the Scala community.
> subtyping
Yep, that is embraced quite a bit.
> inheritance
Used to some extent, but still generally discouraged, though not as strongly as mutable state.
> ... relationships are much less useful
How can you say that subtyping is not useful? In Scala it is one of the key ingredients for modularity. You have signatures (aka interfaces or traits), and then you have multiple implementations (modules, objects, classes as partial abstractions) which you can swap for one another. That is possible because they form these subtyping relationships. A side note: Scala code which embraces subtyping has better type inference.
> But please do take a good look at Haskell too, if only just for interest.
I don't want to give the impression that I want to offend the Haskell community. Haskell is where I learned FP, and the language is a great success in many regards. Over Scala it has many advantages: better syntax, better type inference, better control over memory layout, a better optimizer that turns even advanced idioms into efficient code, etc.
But it also has some downsides compared to Scala. It is lazy, which makes debugging more complicated. It doesn't have any satisfactory modularity story (last time I was around there was Backpack, but it hasn't caught on AFAIK). And Scala has all the advantages that come with the JVM and Java interoperability: great debugging, profiling, introspection, telemetry, great IDEs, a huge gallery of libraries, etc. I'm a software engineer and all these things are _very_ important to me. Compiling to efficient JavaScript with Scala.js is also nice.
> Many leading Scala programmers ended up moving to Haskell, in order to get deeper into FP.
Sure, if you want to write Haskell programs, you'll have much better time in Haskell than with Scala. There were people like that, and so they eventually left. But note that you can be deep in Pure FP in other languages, like Scala or PureScript, Haskell is not the only game in town, each comes with its pros and cons.
That being said, with Haskell, Pure FP is built right into the language. With Scala, while easily possible, Pure FP is built on top of the language with library(-ies) and that comes with some disadvantages.
Well, I said less useful. The applications you list can be solved without the nominal subtyping that Scala has invested in. Sometimes Scala feels as permissive as a dynamic language: for example, an array of floats with an erroneous integer in it is still a valid array; it's just inferred as an array of "Any".
> Scala is really an object-oriented language, first and foremost
You're technically correct, which is the best kind of correct. Scala is not based on the Lambda calculus. The DOT calculus (which is what Scala is theoretically built upon) is based on objects.
Yet, at the same time
> please do not judge functional programming based on how it looks in Scala
based on my research into the labour market, I would guess that Scala is the programming language where most of the Functional Programming takes place, both in the general sense and also in the Pure FP sense. It is more widely used than Haskell, F#, Clojure, Elixir, Erlang, OCaml, Racket, etc.
And it works rather nicely in my opinion. There are still areas that I'd like to see improved, of course, like the for-comprehension, but I'm already quite content.
> effects to be controlled, checked and reasoned about
Definitely! We have so-called Functional Effect System libraries for that in Scala. Not just one, but two: Cats Effect and ZIO. Check them out. I would recommend that anybody using Scala use one of those.
The problem is that by rebinding the same variable you introduce temporal dependencies. The statement "i = i + 1" writes to the variable i and therefore must be executed before any subsequent reads of i. If the statement is in a loop nest, the subsequent reads might even precede the assignment statement in program order. This makes it harder for the compiler to rearrange expressions and in general runs counter to FP's principle of being as "loose" as possible. And not requiring a program order is more "loose" than requiring one. Sophisticated optimizing compilers can often figure out and remove redundant/accidental temporal dependencies, but not always.
Is that really much different from one function calling another? The called function must be executed before the caller can use its result, so there would seem to be a "temporal dependency" there as well, and the compiler cannot rearrange the execution order of said functions.
Yes, it is. What is the value of str(5/i)? If i can't be redefined, you can always replace i with its definition. If i can be redefined, what string str(5/i) will evaluate to may be impossible to deduce a priori. With a single binding, both the programmer and the compiler can replace i with its definition wherever they see the expression "str(5/i)". That freedom is lost when the language allows rebinding.
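To make the contrast concrete in Haskell (an IORef standing in for a rebindable variable; the function names are mine):

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- Single binding: i can be replaced by its definition anywhere.
immutable :: String
immutable = let i = 2 in show (5 `div` i)   -- always "2"

-- Rebinding: what 5/i means now depends on program order.
rebound :: IO String
rebound = do
  i <- newIORef 2
  writeIORef i 4            -- a later write changes...
  v <- readIORef i
  pure (show (5 `div` v))   -- ..."1", not "2"
```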