
What's the longest recipe you've seen? 12 steps? 32? Are they nested? Do they guarantee the ability to share the stove, shelves, ...? Do they guarantee anything at all?

I wouldn't use the recipe metaphor except to explain what a computer does in less than 60 seconds to someone afraid of electronics.




Of course... how many programs do you have running right now? Do they do anything to guarantee they can each use the main processor in isolation from the others? What about the RAM? The keyboard?

That is, the recipe for a single dish could be seen as a subroutine in the entire process pool for preparing a meal. Considered this way, what is the largest dining establishment you have seen? What is a menu, but a declarative listing of the strategies that an establishment can use to satisfy a customer's desire for dinner?

More directly, all of the metaphors of programming have almost direct analogies into the real world. "Worker" threads. "Queues" of "tasks."

As I've said elsewhere, I don't think I'm being particularly grand here. Just find it curious that, though we commonly have recipes and other directions that are mixes of declarative and imperative, the current trend in programming is to be purely declarative/functional. I wish I understood why that was.


The simplicity of recipes (they're blurry; they're not chemistry) and the constant human oversight (your brain takes care of many things not written down) make imperative instructions feasible for small tasks like a meal. It won't scale, but since it doesn't have to, it survives.


But why won't it scale up? This is essentially my question. And it is not like there are not explorations of cooking in non-imperative forms.

When your metaphor is "with these ingredients and devices, perform these transformations on them," an imperative form works really well. This is just as true for any algorithm you will want to write. Consider: write down any algorithm for how to do something. I'd wager that the key to your algorithm will be the idea of transforming something, often in a "destructive" way.

An example: with the concept of a browser that has state, the following algorithm will get you logged in to your bank.

    browser = new browser
    browser.location = "www.yourbank.com"
    browser.focusFieldLabeled("username")
    browser.type("your name")
    browser.focusFieldLabeled("password")
    browser.type("your password")
    browser.focusButtonLabeled("login").or("submit")
    browser.click()
Implementing something that does this would be trivial, all things told. Now, do so without the metaphor of a "stateful" browser. Just trying to come up with "pure functional" implementations of each of these is not something that sounds pleasant to me. (I look forward to being surprised. :) )


About the scaling, my take is that the problem is implicit dependencies expressed by the order of statements. That's fragile, not composable, not abstractable.

Your browser example is completely imperative; it's a sequence of mutations. You just don't think that way in FP: you describe the new value.

    (def url "http://...")

    ;; url -> (html, session)
    (get url)

    ;; (url, form, session) -> (html, session)
    (let [form {:username ...
                :password ...}]
      (post url form))
You can thread these calls to pass the HTTP state along for you.
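For example, here is a self-contained sketch with made-up helper names, where each step takes a session map and returns an updated one (nothing is mutated):

    ;; hypothetical helpers, just so the sketch runs
    (defn get-page  [session url]  (assoc session :page url))
    (defn fill-form [session form] (assoc session :form form))
    (defn submit    [session]      (assoc session :logged-in? true))

    (-> {}
        (get-page "https://www.yourbank.com")
        (fill-form {:username "your name" :password "your password"})
        (submit))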


Right, my argument is that the imperative form is much more easily reasoned about in this example. It is not as easily decomposed, but the question comes down to how much it needs to be.

To wit, I don't know how I would actually fill out the implementations of what you described such that it really worked, without just manipulating the state of some browser thing.


Have you seen Rich Hickey's talks? He mentions the notion of ease in one, and the notion of mutation in many others. An HTML browser right now is a large stateful tree, so imperative programming is easy to think about and write code with, while an FP orientation would need to rethink things: less easy, but probably simpler. In Clojure, data structures are immutable; you "only" compute new values. I think that's how they designed web apps in http://pedestal.io/ (a Clojure[Script]-based system). If you really care about accepting fine-grained input, you'd probably write some read-eval/validate-print loop on the client side before then sending a main request as an HTTP JSON patch.
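For instance, "computing new" instead of mutating:

    (def v [1 2 3])
    (conj v 4)  ;; => [1 2 3 4], a brand-new vector
    v           ;; => [1 2 3], the original is untouched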


Right, but my assertion here is that this really only works because Hickey has basically put forth "mathematics" as the core metaphor for the things he is building. That is, he effectively wants an identity equation for every moment of time in the application.

This idea that "state" is somehow hard to reason about is one that I am growing to reject. Sure, if it is not part of the metaphor that one has built, it can be tough. But this is also true for what many of us mean when we say "mathematics." What is the equation for how you get home every day? How do you calculate what represents you yesterday?

Look at some of the classic algorithms. Is quicksort really that tough to reason about because it is in place (and thus relies on mutation)? Bubble sort? (Any sort...)

Is cleaning your room or office truly difficult to reason about because it involves modifying the room? Fixing dinner, since it involves modifying the ingredients?

This is what I loved about that paper linked upthread. When you build your abstractions around the ways in which things interact by mutation, the code gets much clearer. This is not because we wall off the mutations, but because they are given names and reasoned about. It is not that we constantly have new instances of a list or collection with the old one hanging around, as that can be just as problematic as anything else.

Consider: sure, this is an obvious error to a casual programmer in Java:

    String x = ...
    x.toUpperCase();
    doSomething(x);
But why is this a mistake? Wouldn't it be MUCH more intuitive if, when you wanted to keep both values, the code looked like:

    String x = ...
    String original = x;
    x.toUpperCase();
    doSomething(x);
Just from a communication perspective. That there really isn't a "focused" object in languages often strikes me as something that could be addressed. (And, I believe this is the key idea of stack-based languages, where code actually would read like I indicated.)
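For what it's worth, Clojure's doto macro gives a taste of this: the first expression becomes the implicit "focus" of each following call.

    ;; the StringBuilder is the implicit focus of each call,
    ;; and doto returns it at the end
    (doto (StringBuilder. "abc")
      (.append "def")
      (.reverse))
    ;; => the same (mutated) StringBuilder, now "fedcba"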


I think state-centric systems have been the main metaphor for as long as data has been processed through machines. Maybe Hickey and others are living on one side of the universe, but he did use other systems before creating Clojure, and he does not reject all state, just accidental/unnecessary state.

Sorts are not a good example here, I think. Quicksort's correctness is uncoupled from its in-place property [1]. And sorts are encapsulated processes; nobody is supposed to see the intermediate internal state. This black-box state is not the problem being addressed [2]. All your examples are "single threaded." Two blind people moving chairs around will have a problem when one tries to sit down where a chair used to be. Mutation plus sharing is the problem; otherwise nobody cares. That's why temporary variables are nasty in imperative-friendly languages: any variable becomes an opportunity to mess up data needed by someone else.

It's also a logical waste: as in the first Java example, you don't need to compute the uppercase variant of a string unless it's needed down the road, and if it is required, then you pass it as an argument to doSomething; that's what functions are for. I tried to, but I'm not sure I understand the point of the second code example, because Java references mean x and original are the same object; if you had an in-place uppercase method, then both x and original would now be uppercase letters, but you want to keep both values... (Not really communication friendly.)

The "focus" object would be a way to denote a place where all methods would accumulate state change ? I never saw this in the light of stack based language, it's interesting.

[1] The FP expression for quicksort is obscure the first time, but once it clicks you can't "unsee" it.

[2] Many Lisp books explicitly say this is acceptable because it doesn't leak out of the functional interface; it's referentially transparent as far as calling functions are concerned.
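For reference, here is roughly what that expression looks like in Clojure (a sketch: it assumes comparable, non-nil elements, and it is not in-place):

    (defn qsort [[pivot & xs]]
      (when pivot
        (concat (qsort (filter #(< % pivot) xs))
                [pivot]
                (qsort (remove #(< % pivot) xs)))))

    (qsort [3 1 4 1 5 9 2 6])  ;; => (1 1 2 3 4 5 6 9)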


The logical waste, to me, is that after I've forced something to uppercase, I probably don't care about the non-transformed version anymore. Making use of it is as likely to be a mistake as otherwise. Consider all of the times you see people modify a list in LISP and accidentally continue to use the previous version.

Yeah, the focus object, to me, follows naturally from the way we do imperatives in natural language. Though, I guess there we have prepositions and such, too. (Are there programming languages with prepositions? I've seen creative use of them in testing libraries.)

And, ultimately, what I want is a system where a reference essentially tracks the state of what it references? Functions on objects, then, would be tracked not just in terms of input/output, but in terms of whether they affect the item they are operated on? (Does this make sense?)

That is, few people would expect the following "program" to work:

    rocket = new rocket()
    rocket.launch()
    rocket.addFuel(fuel)
I have seen fancy usage of generics to have it such that launch could not be called on a Rocket<NoFuel> instance. This makes it possible to do chained calls that work as expected, but chaining quickly gets cumbersome, especially if you want to do some things unrelated to the state of the rocket in there at some point.

Do you know of any such systems?

Also, apologies for not addressing the QS stuff. I wrote all of the above and realized I didn't touch it. :( I simply meant that most of the "cute" (and hence readable) QS implementations you see in functional languages are actually not the same algorithm, since they require a ton more memory.

And, yes, all of this is a shared state problem. Though, again, I think if objects had ways of indicating what functions change state and what "state" is required of an object, much of that confusion would go away.

Consider: if you have a team of cooks working, there is no confusion about when the onion that must already have been diced is added to the pot. The same goes for pretty much any assembly-line work area with many workers. A worker can do a spot check that the item they have is in the appropriate state for them to do their job.
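Something like this rough Clojure sketch (all names made up), where launch spot-checks the state it requires via a precondition:

    (defn add-fuel [rocket kg]
      (assoc rocket :fuel kg))

    (defn launch [rocket]
      {:pre [(pos? (:fuel rocket 0))]}  ;; refuse an unfueled rocket
      (assoc rocket :status :launched))

    (launch (add-fuel {:name "rocket"} 100))  ;; ok
    ;; (launch {:name "rocket"})              ;; throws AssertionError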

(Apologies, I would like to clean up this post, but know I will have a hard time getting back to it. Hopefully it is at least mostly intelligible.)


The first paragraph is a non-issue: either you care about the original string and thus you have a variable for it (1), or you don't (2), or you compute it from another one (3):

    (1) String focus = "aaaa"
    (2) String focus = "BBBB"
    (3) String focus = oldString.toUpperCase();
I very rarely see list mutation in Lisp, I'd need an example.

The issue with a focus object is that it's cool to thread over one "object," but if you deal with more than one, you'll need distinguishable names, and you're back to square one. What kind of prepositions did you see? Is it like using 'it' as a generic variable name in closures/macros?

The first part of your state-tracking idea really reminds me of the Ref/Atom constructs in Clojure.

The generic "subclass" trick to encode state categories as types is very nice. I toyed with it in college, but I then lost interest in OOP (people were too crazy about GoF patterns), so I don't know if it's used in production.

The smart cook is akin to Prolog goal-based programming; here we're quite far from state mutation. I wish there were more systems built on this.

PS: don't worry, my ideas are often blurry, as is my English.


I've seen many procedures that are essentially:

    public Blah foo(String in) {
        in = in.toUpperCase();
        // ... the rest of the method uses in ...
    }
In shops where there is an insistence on final parameters, this gets turned into:

    public Blah foo(String in) {
        String upperIn = in.toUpperCase();
        // ... both in and upperIn are now in scope ...
    }
And then the mistake is to accidentally use the "in" variable later on. Or, in the first form, simply to forget to "reassign" in to itself.

As far as list mutations, I'm mainly referring to intro work. When working with someone who is new to Scala/Lisp, it is common for people to not understand why appending to a list does not "work."
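For instance, in Clojure a newcomer writes something like this and expects xs itself to change:

    (def xs (list 1 2 3))
    (concat xs [4])  ;; => (1 2 3 4), but this new list is discarded
    xs               ;; => (1 2 3), unchanged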

I'm embarrassed to admit I cannot find the test library I meant. I believe it was "specs," basically. It looked something like:

    Foo foo = ...
    lookingAt(foo)
    it.shouldEqual(bar)
    it.shouldNotThrow(Exception).onCallofFoo
    ...
Where "it" returns some context style object with basic assertion ideas on it, and some return a "foo" to do calls on it.

Thinking of this in terms of a Prolog style machine did occur to me. Basically define your "transformations" and let the system search for the ones that could be used to go from the ingredients given to the outcome desired.


You do realize that these mistakes are made by people learning the immutable approach? For people who started with functional programming or similar systems (the first computational systems I used were lazy reactive computer graphics), there is close to zero confusion from the beginning.

Besides, if you want to thread modifications in a cuter way, Clojure's ->> macro allows

    (upperCase (first-three-letters string))

to be written as

    (->> string
         first-three-letters
         upperCase)
More here http://clojuredocs.org/clojure_core/clojure.core/-%3E%3E.

There was experimentation with the 'it' style, but it's fragile in most code; think of 'with' statements:

  - http://stackoverflow.com/questions/1931186/with-keyword-in-javascript
  - http://msdn.microsoft.com/en-us/library/wc500chb.aspx
They had non-intuitive side effects that are hard to notice, so nobody uses them anymore.

They're also present in so-called anaphoric macros (popularized by Paul Graham's On Lisp, and also Doug Hoyte's Let Over Lambda); see:

    https://github.com/magnars/dash.el#-map-fn-list
    https://github.com/rolandwalker/anaphora
Here the context is limited to single-argument pure functions, so an 'it' pronoun won't cause much weirdness.


I have heard claims that if you start people with the "functional style" they are less confused by these sorts of mistakes. My scepticism goes back to just how many imperative "mutable" instructions people will have learned outside of programming. (That is to say, I'm curious how much research there is into this question. Most of the people claiming one way or the other seem to have something to sell.)

Even the example you give, graphics, is somewhat interesting to me. It goes back to my thought that the "metaphor" of what you are doing there is already more math-based. In those cases, I fully expect a more rigorous "functional" approach to make sense.

And yeah, I actually thought this was sort of like the "with" syntax as I wrote my example. I couldn't remember the actual drawbacks of the with syntax right off. Similarly, I've seen a few cases where you can use an import in Scala to similar effect. It looks like:

    def foo(x:Bar) {
        import x._
        ...
    }
I thought this could make reading some code a little less cluttered. Haven't seen it in wide use, though. (And... I would expect it should be used with care.)

I can't help but linger on the thought that this is dynamic typing where the "type" of a variable encodes the state. It seems that if the compiler could track the state and provide indications as to whether or not the desired state could be achieved, then the code could be clearer.

Consider the directions for driving cross country. The required "actor" of the system is a vehicle with enough gas. If at any time the vehicle does not have enough gas, the system performs the steps to return the vehicle back to the state of having enough gas, then continues the trip.
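As a toy Clojure sketch of those directions (all names made up; each step returns the new state of the car):

    (defn refuel [car] (assoc car :gas 100))

    (defn ensure-gas [car]
      (if (< (:gas car) 30) (refuel car) car))

    (defn drive-to [car city]
      (assoc car :gas (- (:gas car) 30) :at city))

    (defn leg [car city]
      (-> car ensure-gas (drive-to city)))

    (reduce leg {:gas 100 :at "home"} ["Denver" "Kansas City" "St. Louis"])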

That make sense?


I'm selling my frustration. I experienced generic, uncoupled UIs for lazy reactive dataflow around the time I entered college, and I then suffered through C and Java, where things quickly become dark magic. The clean thinking process of dataflow looked like a long-forgotten dream that was weirdly close to what your code ends up doing anyway: streaming data through transformations. But in imperative code it's obscured. Sure, some people have the right view and the right brain (I believe more people before the 80s had this skill on average) and they can compile the idea into algorithmically clean imperative code where things don't step on each other (just like the algorithms in Cormen), but I didn't meet them often.

Graphics and other domains (sound, ...) had an easier time finding an algebra to combine atoms and filters in arbitrary fashion to create complex results. I'm betting all high-end (meaning Hollywood) graphics software is built as a lazy dataflow system. But if you look at Photoshop, it's still imperative, and people are required to get skilled at managing dozens of layers to store intermediate results in case they want to change something. This is pure madness. Look at tutorials for Houdini, where the artist builds the dependency graph as he sees fit, and if something needs to be modified he just tweaks the right upstream node and everything recomputes on demand.

Is software different? Not so much. Bret Victor demonstrated how you could develop a platform game in such a fashion: lazy reactive source code with different dimensions/views. You can see the output of a trajectory function, change the code, and observe the computed difference on the fly.

A lot of projects are trying to shorten the loop between idea / source / result: DevOps with testing/deploy VMs (even storage systems are approaching functional idioms, like btrfs, zfs, logfs), JS client frameworks with data binding... all the same. It's not everything, though; a friend pointed out to me that sometimes you need to research an algorithm, and it won't be large, so this won't help you understand the problem faster.

The type-encoded state is not imperative programming anymore; as I said, to me it's akin to Prolog.


> As I've said elsewhere, I don't think I'm being particularly grand here. Just find it curious that, though we commonly have recipes and other directions that are mixes of declarative and imperative, the current trend in programming is to be purely declarative/functional. I wish I understood why that was.

Ever read a recipe designed for concurrency and/or parallelism? Functional idioms are coming into vogue at the same time as people are hitting walls with single-threaded applications.


Are you kidding me? Of course I have. Hell, the example I gave was. It begins with "begin heating oven..." This is not a task you do and wait for it to finish before you do the next step. Then there is the entire concept of fixing not just a dish, but an entire meal. If you have ever seen a recipe with "while the onions are cooking..." you have seen a recipe with parallelism.

Seriously, sometimes the blindness of our field to just how much we share with other fields is very troublesome.


Touché.


Rereading this thread, apologies for my tone on that post. I do hold my assertion that recipes have both concurrency and parallelism, but there was no reason for me to be so glib about it.


Often, reference books (on thread-safe C or Java) end up suggesting practices that amount to functional programming.



