Functional Thinking after OO (squirrel.pl)
97 points by fogus on March 15, 2011 | 62 comments



Very interesting article. As a primarily OO programmer that's been playing more and more with functional concepts due to javascript, I definitely see the theoretical advantages mentioned in the article.

However, I also see that there are very hard parts to purely functional programming as well. For instance, to be entirely side-effect free your language has to be able to enforce immutability (which JS lacks). It's also hard (or at least not obvious) how to do certain types of logic flows in a purely functional way.

For small domain spaces I can visualize how purely functional bottom-up programming effectively causes you to write a DSL for your problem.

However I don't have the experience/wisdom to understand if this holds true as the size of the domain increases. Basically, I have concerns that purely functional solutions don't scale (not from a performance perspective, but from an organizational/architectural one).

My experience with complex business logic makes me worried that complex new business rules would be difficult to implement in purely functional way and would cause frequent, painful refactoring.

I would love to hear from someone that's done a purely functional application with a large surface area that has been subjected to a real-world business environment to comment on the subject.


> I would love to hear from someone that's done a purely functional application with a large surface area that has been subjected to a real-world business environment to comment on the subject.

I worked at a betting exchange where the backend was written entirely in erlang. There were a lot of areas in the code where we had to deal with incredibly complicated business rules.

For example, we were not allowed to let customers gamble money they didn't have in their account, but we did want to allow them to sell out of a position. This meant keeping track of their minimum possible gains from all their current positions to figure out how much money they could safely be allowed to use. Money laundering regulations meant we also had to keep track of where different pools of money came from (eg different bank accounts, paypal etc) and these pools could not be allowed to mix (eg you can't deposit money into the site using paypal and then withdraw it into your bank account). Users are allowed to pick their own odds so we always had to be very careful about whether money gets rounded up or down. And then there were promotions giving extra money to new members or for large deposits, and any bets had to be taken from bonus pools before other money pools.

Functional programming is a very overloaded term so I will talk about specific things that made this easier. I personally find it much easier to construct complex data structures and maintain invariants on them when using algebraic types + pattern matching rather than objects. The use of immutable data structures made it much easier to verify that no operation ever changed the total amount of money in the system and to safely implement highly concurrent systems. First class functions were used throughout the code but we probably could have replaced most uses by objects with little change to the overall structure.
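A minimal Python sketch of why immutability helps with that kind of invariant (the Account shape and all names here are invented for illustration, not the exchange's actual code): every operation returns a new value, so the total money before and after an operation can be compared directly.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # immutable: operations return new values
class Account:
    cash: int                    # amounts in pennies, to sidestep rounding
    bonus: int                   # promotional pool, spent before cash

def place_bet(acct: Account, stake: int) -> Account:
    """Take the stake from the bonus pool first, then from cash."""
    from_bonus = min(stake, acct.bonus)
    from_cash = stake - from_bonus
    if from_cash > acct.cash:
        raise ValueError("insufficient funds")
    return replace(acct, bonus=acct.bonus - from_bonus,
                         cash=acct.cash - from_cash)

before = Account(cash=1000, bonus=500)
after = place_bet(before, 700)
# the old value still exists, so the invariant is a one-line check
assert (before.cash + before.bonus) - (after.cash + after.bonus) == 700
```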

I think that in general OO conflates the management of code and the management of runtime state. In erlang modules are the unit of code management and processes the unit of state management. I would also point out that an entirely purely functional program is impractical with current software and probably not desirable in general.

Bear in mind that most of the advocacy for functional programming tends to come from people who have only just encountered the idea and are extremely excited. There are much more nuanced viewpoints available if you dig a little, eg Rich Hickey's ideas on the separation of state and identity: http://wiki.jvmlangsummit.com/images/a/ab/HickeyJVMSummit200...


I would love to read more about that system and its implementation. I think it would make an excellent blog post - although I know you may not be allowed to talk more about it.


I don't work there any more so it's pretty hard to write about it without the code in front of me. They were intending to open-source some of the more generic systems if they ever got time. Maybe if you gave them a nudge they would at least write about some of it: https://smarkets.com/about/contact/

I did write some vaguely related stuff about the benefits of their declarative web framework here: http://scattered-thoughts.net/one/1280/511009/453845. I may also ask if I can open source the transactional actors library I wrote there. That would probably be worth writing about.


> I would love to read more about that system and its implementation.

http://news.ycombinator.com/item?id=2332633


Functional programming does scale very well conceptually, and you've been using it for quite some time. It's just had a different name:

Unix Pipes

I believe using them to compose several small programs together to get a new solution has been fairly well documented :)


Unix pipes aren't functional programming (which emphasizes immutability and referential transparency) - they're more like the actor model, which creates a system out of lots of small programs communicating via message passing. And yes, they do scale very well, conceptually.

Erlang does both, maybe that's where the confusion is coming from. They're very complementary, though: having separate, concurrent processes seems to be an excellent pressure release for difficulties that can build up doing pure functional programming.

Also, Erlang has a novel (and extremely effective!) error-handling system for the actors, while error handling for Unix pipelines (or async/event-loop systems!) can be tricky.


>Unix pipes aren't functional programming (which emphasizes immutability and referential transparency) - they're more like the actor model, which creates a system out of lots of small programs communicating via message passing.

Unix pipes are lazy lists.

Operations on lazy lists allow you to use a Mealy machine [1] built from pure functions. A Mealy machine uses a pure function that transforms the input and previous state into an output and the next state, and that state is fed back in on the next cycle.

And then you suddenly have Turing completeness (because of arbitrary state).

My argument is supported by the fact that Unix pipes cannot change the topology of the computation, and neither can lazy lists, while actors can.

[1]: http://en.wikipedia.org/wiki/Mealy_machine
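A small Python sketch of that construction, with generators standing in for lazy lists (the names here are invented for illustration):

```python
def mealy(step, state, inputs):
    """Drive a Mealy machine lazily: step is a pure function
    (state, x) -> (next_state, output); the state feeds back each cycle."""
    for x in inputs:
        state, out = step(state, x)
        yield out

# a running total expressed as a Mealy machine over a stream
def running_sum(total, x):
    return total + x, total + x

print(list(mealy(running_sum, 0, [1, 2, 3])))  # [1, 3, 6]
```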


That makes sense.


And they're point-free, to boot! :)


Finger slip vote on phone, sorry.


Fixed :) But really, isn't it time for a mobile Hacker News stylesheet?


My process is something like this:

1. Start with code centered around pure functions. This is a really good default for your project. It gives you flexibility, completely DRY code, and an easy way to think about concurrency.

2. Store large datasets or shallow objects inside maps. They can be efficient and work really well with functions, especially in languages like Python and Clojure. They're even more useful when, for example, they have a literal dot notation (Javascript) for keys.

3. Sometimes, though, it's just easier to structure data inside of objects. Having a map of maps of maps is fine, but projects can quickly become confusing--at least in my experience. The problem is especially bad if multiple people are coding around maps without any enforced structure or validation on creation. In these cases, objects can make things simpler and (often) more unified. When I use objects, though, it's good to avoid mutable state. I think of an object as a handy, unified interface to a map, with the following extras:

  - A constructor, potentially with sensible defaults
  - Validations and sensible fallbacks
  - Data-specific error messages
  - Methods that are closely related to the data it 
    holds (and always return the object itself, so 
    chaining is simple)
From my perspective, building code using functions and organizing it around objects is a nice middle point. It's optimized for both productivity and order, especially if you're careful to avoid state.
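A hedged Python sketch of that middle point (the User example, its fields, and validations are invented for illustration): an object as a thin, validated, chainable interface over a map, with no mutable state.

```python
class User:
    def __init__(self, name, email="unknown@example.com"):
        # constructor with a sensible default and validation on creation
        if not name:
            raise ValueError("User requires a non-empty name")
        self._data = {"name": name, "email": email}

    def rename(self, name):
        # avoid mutation: return a fresh object so calls chain cleanly
        return User(name, self._data["email"])

    def to_dict(self):
        return dict(self._data)

u = User("alice").rename("bob")
print(u.to_dict()["name"])  # bob
```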


I think generally object validation should remain separate from construction. The documentation for Python's FormEncode library explains this well:

> Validation is contextual; what validation you apply is dependent on the source of the data.

> Often in an API we are more restrictive than we may be in a user interface, demanding that everything be specified explicitly. In a UI we may assist the user by filling in values on their behalf. The specifics of this depend on the UI and the objects in question.

> At the same time, we are often more restrictive in a UI. For instance, we may demand that the user enter something that appears to be a valid phone number. But for historical reasons, we may not make that demand for objects that already exist, or we may put in a tight restriction on the UI keeping in mind that it can more easily be relaxed and refined than a restriction in the domain objects or underlying database. Also, we may trust the programmer to use the API in a reasonable way, but we seldom trust user data in the same way.

-- http://formencode.org/Design.html#validation-as-directional-...

Lately I've even played with the idea that maybe Clojure-style strict separation of data and functions is just taking the single responsibility principle of OO design to its ultimate conclusion, as the responsibility of storing a piece of data in memory and the responsibility of deriving new data from it are separated...


This sounds like how I am currently programming, but I consider myself an FP novice. In Python, I avoid writing classes until I need a data structure too complex for a dictionary of tuples. In C# I have to have more classes, but I am still aiming towards a more functional style by preferring passing parameters and returning values over manipulating instance variables. I would like to learn F# and start using it for my .net work, but I haven't had the time yet.


> The problem is especially bad if multiple people are coding around maps without any enforced structure or validation on creation.

Using a functional style doesn't mean you can't use constructors, and constructors don't necessarily mean you have to use objects.


Sure, you can have all of the things I listed in FP with some effort. My point is just that OO seems to be good at bringing them all together in a really useful way. (And that way isn't inconsistent with FP.) I aim to strike a balance, to use each style where it is strongest.


Ah, I see your point. But... whilst I can see that a lot of the functionality objects provide is useful, there is also a lot of functionality one might not necessarily need in all instances.


I'm not quite sure what to make of this article. On the one hand, I completely agree that bottom-up programming is a great approach.

On the other hand, I completely disagree that top-down programming is strongly tied to OO programming. At the very least, it certainly is possible to do bottom up programming in OO languages. That's my normal working mode...


I agree. I'm guessing that the author doesn't even know what his database tables look like. Starting top-down doesn't make sense. As an OO programmer, it's not best practice to start a project using concrete classes. We should always code with scalability and flexibility in mind.


Did you forget to put a sarcasm tag at the end of that?


I'm curious about what I view as two contradictory points in the article: If I create my own language/syntax layers as I go through the various layers of my program, how does that make it more maintainable?

This obviously works for the single programmer, who understands the code he's written (assuming he/she hasn't been away from the code too long), but what happens when you have to maintain someone else's code? If I have to jump into a particular spot on the code to solve a problem, how do I know you've defined <+- as some special monad?

As a maintenance programmer, it seems like I'd have to learn a whole new language with each program I have to maintain. (and I, the hypothetical maintenance programmer, am probably not as smart as the original programmer).


"If I create my own language/syntax layers as I go through the various layers of my program, how does that make it more maintainable?"

It is isomorphic to "creating an API". It's really more a perspective on the situation than a literal description of what's going on. No matter what you're programming in, you're building up some sort of language that subsequent programmers will have to understand to work on your code.


From On Lisp by Paul Graham (pages 59-60):

If your code uses a lot of new utilities, some readers may complain that it is hard to understand. People who are not yet very fluent in Lisp will only be used to reading raw Lisp. In fact, they may not be used to the idea of an extensible language at all. When they look at a program which depends heavily on utilities, it may seem to them that the author has, out of pure eccentricity, decided to write the program in some sort of private language.

All these new operators, it might be argued, make the program harder to read. One has to understand them all before being able to read the program. To see why this kind of statement is mistaken, consider the case described on page 41, in which we want to find the nearest bookshops. If you wrote the program using find2, someone could complain that they had to understand the definition of this new utility before they could read your program. Well, suppose you hadn’t used find2. Then, instead of having to understand the definition of find2, the reader would have had to understand the definition of find-books, in which the function of find2 is mixed up with the specific task of finding bookshops. It is no more difficult to understand find2 than find-books. And here we have only used the new utility once. Utilities are meant to be used repeatedly. In a real program, it might be a choice between having to understand find2, and having to understand three or four specialized search routines. Surely the former is easier.

So yes, reading a bottom-up program requires one to understand all the new operators defined by the author. But this will nearly always be less work than having to understand all the code that would have been required without them.

If people complain that using utilities makes your code hard to read, they probably don’t realize what the code would look like if you hadn’t used them. Bottom-up programming makes what would otherwise be a large program look like a small, simple one. This can give the impression that the program doesn’t do much, and should therefore be easy to read. When inexperienced readers look closer and find that this isn’t so, they react with dismay.

We find the same phenomenon in other fields: a well-designed machine may have fewer parts, and yet look more complicated, because it is packed into a smaller space. Bottom-up programs are conceptually denser. It may take an effort to read them, but not as much as it would take if they hadn’t been written that way.


I think bottom-up programming is great, but I have a bone to pick with that On Lisp quote.

"So yes, reading a bottom-up program requires one to understand all the new operators defined by the author. But this will nearly always be less work than having to understand all the code that would have been required without them."

Here's the problem: it's hard to build good abstractions. Every abstraction has a cost and an overhead in learning it, getting used to it, managing the cognitive overhead. The binary black-and-white phrasing of this argument utterly sidesteps any mention of the tradeoffs involved. Most abstractions we encounter in real-world code fail to take into account the cost of using an abstraction. You built it, you're used to it, you can't empathize with others who need to learn it.

Here's my stab at articulating the tradeoffs: http://news.ycombinator.com/item?id=2329613

Common Lisp is a great language, but it's littered with crappy abstractions: too many kinds of equality http://www.nhplace.com/kent/PS/EQUAL.html, an inability to override or extend coerce, redundant control abstractions like keyword args to reverse traversal order (http://dreamsongs.com/Files/PatternsOfSoftware.pdf, pages 28-30), the list goes on and on.


There's even worse than abstractions that are used by their creator only: those that are not used by their creators at all.

I see that in my code all the time: my abstractions tend to suck until I use them myself, at which point I fix them.

The problem is, we often have to build abstractions that others will use before we use them ourselves. At that point we're kinda stuck, because the necessary changes will break code, and that's scary.


That perspective is, I think, most applicable to Lisp. Because the syntax is so simple, language-level constructs in Lisp appear identical to user-defined constructs - it's all just macros or functions. From the perspective of someone using what you've provided, there's no difference between calling your functions and calling Lisp built-in functions. The ideal is that the application logic will be relatively high-level, only calling these utility functions.

Note that this is different from overloading the same name to mean different things depending on context.


"As a maintenance programmer, it seems like I'd have to learn a whole new language with each program I have to maintain."

I used to think the same thing. But I'm starting to realize that you have to make some assumption about what tools future readers of your code have at their disposal, and that these assumptions change the cost and overhead of abstractions.

Here's a degenerate example. We all know that where you draw your function boundaries hugely impacts readability. You don't want one single function for your entire program, and you don't want every function being one-liners either. There tends to be a sweet spot that takes into account how often a code fragment is needed before extracting it into its own function. But if you take into account that a java programmer reading it in his IDE can jump to the call with a keystroke, that tends to move the sweet spot downward. Now I need less reuse to justify method extraction.

A less degenerate example: Ruby has monkey-patching, where you can change a function or class's behavior without touching the file where it's defined. This tends to be viewed as a bad thing because now it's difficult to visualize what a function does. But I'm starting to realize this depends on how hard it is to search for the function. It's painful when you have a load path and have to search multiple disjoint directory trees for a monkey-patch. But it would be fine if everything's in a single flat directory.

I've been working on an Arc-like language, and I find I can be extremely promiscuous with my monkey-patching extensions because the language implementation is designed from the ground up to live in the same directory as your app, and finding monkey-patches takes just a simple grep. (Hopefully that's not just because it's only me hacking on it.)

Here's my implementation: http://github.com/akkartik/wart


Related submission today about overheads imposed by abstraction boundaries: http://news.ycombinator.com/item?id=2330206


pg quote from the article: you don’t just write your program down toward the language, you also build the language up toward your program. As you’re writing a program you may think "I wish Lisp had such-and-such an operator." So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on.

I remember thinking when I first read through pg's essays a few years back and again now that you can do this in declarative languages as well using functions.

For example, my programs often looks something like this:

  main(){
    var data = getData();
    pushData(data);
  }

  function getData(){
    var data = fetchDataFromSource();
    var errors = validateData(data);
    if(errors != null)
      //handle errors
    return processData(data);
  }

  function fetchDataFromSource(){
    ...
  }

  function validateData(var data){
    ...
  }

  ...
Could anyone explain why functional languages are better for this? It seems to me this is the same as "building the language up to your program".


Look at all the duplicated code in this python redis client: https://github.com/andymccurdy/redis-py/blob/13850b1b9ed34ee... . The python version has duplicated 'self' tokens, 'def' tokens, arguments, execute_command, converting keys in each function, ...

Compare to the non-duplicated code in my Lisp redis client: https://github.com/mattsta/er/blob/681e4f91d0601d2997b2b5b7a... . In the Lisp version, each redis-cmd-* call is a macro created by a macro which calls another macro to create the function in a polymorphic way.

You are one step closer to Lisp enlightenment.


To be fair, you could have done much the same thing in Python:

  basic_execute_command = {
      # snip
      ### SET COMMANDS ###
        "sadd": "Add ``value`` to set ``name``"
      , "scard": "Return the number of elements in set ``name``"
      , "sismember": "Return a boolean indicating if ``value`` is a member of set ``name``"
      , "smembers": "Return all members of the set ``name``"
      , "smove": "Move ``value`` from set ``src`` to set ``dst`` atomically"
      , "spop": "Remove and return a random member of set ``name``"
      , "srandmember": "Return a random member of set ``name``"
      , "srem": "Remove ``value`` from set ``name``"
      # snip
  }

  def _basic_execute_command (self, cmd):
      return lambda *args: self.execute_command(cmd.upper(), *args)

  # snip

  def __init__ (self, *args, **kwargs):
      # snip
      for cmd in self.basic_execute_command:
          self.__dict__[cmd] = self._basic_execute_command(cmd)
      # snip

This is pretty memory-inefficient if you have several redis objects, but I'm sure you could come up with something better than what I hacked together in a few minutes.

I know that Lisp ranks above Python on the metaprogramming scale, but simply adding easy object introspection can allow for cool factoring constructs that a Java (or similar) programmer wouldn't even consider. It's possible to make copy-pasta in any language.


Yeah, if you can interpret a data structure at runtime, you don't need to write out the corresponding code at all.

You could have all the redis connection instances share the same command dict, or make it a variable in the package.


There's nothing specific to Lisp in that, though. I did something similar with my Lua redis client, sidereal (http://github.com/silentbicycle/sidereal/blob/master/siderea...):

    ---R: Return the string value of the key
    function Sidereal:get(key) end
    cmd("GET", "k")
    
    ---R: Set a key to a string returning the old value of the key
    function Sidereal:getset(key, value) end
    cmd("GETSET", "kv")
    
    ---R: Multi-get, return the strings values of the keys
    function Sidereal:mget(key_list) end
    cmd("MGET", "K")
(The comment and function lines are only there for luadoc; it could just be the cmd lines.)

FWIW, I need to update it - antirez changed the protocol in a recent version, and I haven't gotten back to it yet. It should remove a good portion of the code, though - he switched everything to using the bulk form.


Your example is too small for the differences to become clear. There isn't any OO in your code, for example.


I can't edit my comment anymore, but I wanted to add some more detail.

Building larger programs is about clean abstractions. What you're doing only needs basic functions for abstraction, but when you try to do something larger you'll begin to need/want more ways to abstract your code. Then the differences will become clear.

But you punted by not actually handling the errors. As other comments show, your example may need more abstraction tools once you try to handle the errors :)


for example, getData() could be written something like:

  getData = fetchDataFromSource . validateData . ProcessData
See http://www.haskell.org/haskellwiki/Function_composition


Can you just use a callchain?

  function getData() {
    return processData(validateData(fetchDataFromSource()));
  }


It's not quite equivalent due to differences in the semantics; a functional language can far more easily do loop fusion: http://en.wikipedia.org/wiki/Loop_fusion . An imperative language is likely to not even try in that situation, or in the case of a good C compiler, end up failing to do the optimization at the slightest hint of pointer use. In Haskell it's robust and can be composed many times.
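For contrast, here's a rough Python approximation of the same pipeline using generators (stage names invented): the composed stages run in a single pass with no intermediate lists materialized, although unlike GHC's fusion this buys you memory behavior, not the removal of per-stage call overhead.

```python
def fetch_data_from_source():
    yield from [3, -1, 2]             # stand-in for real input

def validate_data(xs):
    return (x for x in xs if x >= 0)  # drop invalid records lazily

def process_data(xs):
    return (x * 10 for x in xs)

# one pass over the data; nothing is materialized until list()
result = list(process_data(validate_data(fetch_data_from_source())))
print(result)  # [30, 20]
```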


How would you handle validation errors in this case?


It would be specifically handled by the return type from validateData and the argument to ProcessData. It sounds clunky, but with an algebraic data type it would be fairly clean - just another pattern to match in ProcessData.

For a simple example like this, that might be the cleanest solution. For anything more complicated (like recursive descent parsing), a processing chain like this is an ideal application for a monad, where the binding between functions would be customized to avoid calling ProcessData altogether.


Two ways: (1) (avoid) something similar to exceptions; (2) (much better) your datatype has an 'invalid' value. See Maybe in haskell, or Option in Scala (I think that last one's right...).
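A rough Python sketch of the "invalid value" approach, with None standing in for Haskell's Nothing (the data shapes and function names here are invented):

```python
def validate_data(data):
    # return the data unchanged, or None to mean "invalid"
    return data if data.get("amount", 0) > 0 else None

def process_data(data):
    return {"amount": data["amount"] * 2}

def get_data(raw):
    checked = validate_data(raw)
    if checked is None:        # the Nothing case short-circuits the chain
        return None
    return process_data(checked)

print(get_data({"amount": 5}))   # {'amount': 10}
print(get_data({"amount": -1}))  # None
```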


With error handling in Haskell your code would look like

   getData = processData =<< validateData =<< fetchDataFromSource
By the way, the original getData with (.) is backwards. It should be:

  getData = processData . validateData . fetchDataFromSource


No doubt functional programming has its place and is very powerful for many types of problems. Just curious, how do you maintain your lovely side-effect free structure while programming a GUI or other heavily event-driven environment?

I have a genuine interest in the answers to this as it is one of the things putting me off learning an otherwise very interesting paradigm.


GUIs themselves aren't the problem; the problem is that all GUI toolkits are written imperatively and since you don't want to rewrite your GUI from scratch you have to end up interfacing your functional program to a huge hunk of imperative code. But it is important to be clear that the problem is actually just the impedance mismatch between the functional code and any huge hunk of imperative code, not special to the problem of GUIs. Functional has some interesting and even good answers to the problem of GUIs, but they all look way klunkier in practice than they "actually are" in some sense because they still have to work through those imperative toolkit bindings.

For a clean answer to functional + event-driven, see Erlang, which will blow pretty much any other attempt to solve the problems Erlang solves out of the water. Functional doesn't have a problem with event-based; functional blows imperative and OO out of the water at event-based. (Partially because it's a lot easier to write a sophisticated runtime like Haskell's or Erlang's that can transparently deal with sync issues that drive imperative programs crazy.) Not enough people are good enough at both OO and functional for this to be commonly understood.

You'll know functional languages have entered the mainstream to stay when someone writes a useful functional GUI.


This is all true, but, in any case, whatever happened to MVC? After all, you can (and arguably should) have your business logic happen in functional and reduce the GUI or whatever (which might or might not be OO) to more or less a thin layer of piping which ships data between the user and the pure parts of the program.


You might want to look into functional reactive programming, and (unrelated) have a look at how xmonad does the IO-heavy work of a window manager, which is all about affecting external state, in a pure language like Haskell.


I don't think anyone (sensible) is advocating that the entire program must be free of side effects, just that side effects have a cost in terms of clarity and maintainability and so should be avoided when possible.

> programming a GUI or other heavily event-driven environment

Nitrogen is a nice event-based GUI library written in erlang. Have a look at the demos here: http://nitrogenproject.com/demos


> I don't think anyone (sensible) is advocating that the entire program must be free of side effects, just that side effects have a cost in terms of clarity and maintainability and so should be avoided when possible.

Indeed. Haskell for example, just `tags' all side-effecting code in the type system. You are free to use lots of side-effects in Haskell. (Even though that would miss the point of the language.)


Functional programming is a good conceptual fit for data binding in UIs, while functional reactive programming is its application to event streams. I've found the former more practical than the latter, so far.


GUI isn't really the place to do functional programming imho.

But check out node.js for the rest of your pain. It is very event driven and can easily be coded completely side-effect free due to closures.


I think that lightweight dynamic OO languages with first-class functions (JavaScript, Lua) are an interesting middle point. They avoid some rigid features of structural programming inherent to most other OO languages, such as encapsulation and class definitions. Instead, objects serve the role of generic data structures, that can also carry code (behaviour) with them.

I feel that a language that would allow both styles to mix even further (immutable objects, pattern matching, tags (for ADTs), multimethods, concurrency) would be really cool to use.


Ruby belongs in the list you gave as well, I think, but they all have tradeoffs, just in terms of the convenience of using first-class functions in them. Ruby's block syntax (|x,y| x * y) is a lot more convenient than the more verbose "function" syntax of JS and Lua for expressing inline functions (Lua: function (x,y) return x*y end), pretty comparable to something like Haskell. However, Ruby requires you to use either coroutine-style yield syntax or the call() method to actually invoke a function value, while JavaScript and Lua let you write parentheses after the value's name, in the same way that you would call any other function.

For first-class functions to be convenient, you minimally need a terse syntax for in-line functions, the ability to call function values as you would normal functions, and closures (the last of which, fortunately, all of the above support). I'd argue that language support for currying is also extremely valuable and difficult to live without, because it greatly increases the situations in which a given function is useful. It's frustrating to me that mainstream "scripting" languages like JS, Lua, and Ruby can't get all of these aspects right - I hope for pattern matching and ADTs in a mainstream language, too, but I'm still waiting for a good implementation of first-class functions to arrive!
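As a point of comparison, Python at least approximates currying with functools.partial, though it isn't automatic the way it is in Haskell (a small sketch):

```python
from functools import partial

def add(x, y):
    return x + y

inc = partial(add, 1)   # specialize the first argument
print(list(map(inc, [1, 2, 3])))  # [2, 3, 4]
```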


FWIW, I wrote a pattern-matching library for Lua, Tamale (http://github.com/silentbicycle/tamale/).

It was an interesting project, because almost everything I could find about implementing pattern-matching assumed the patterns could be analyzed at compile time, rather than building some kind of dispatch tree at runtime. After trying some more elaborate methods (e.g. building a decision tree via CPS), I figured out that just indexing (on the first field, by default) and then doing linear search within the index's patterns got 90% of the benefit with very little runtime analysis.
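A toy Python sketch of that indexing strategy (invented names, not Tamale's actual API): bucket the rules by their first field at registration time, then do a linear scan only within the matching bucket at dispatch time.

```python
rules = [
    (("add", "VAR", "VAR"), lambda a, b: a + b),
    (("mul", "VAR", "VAR"), lambda a, b: a * b),
]

# index on the first field, as in the default described above
index = {}
for pattern, action in rules:
    index.setdefault(pattern[0], []).append((pattern, action))

def dispatch(msg):
    for pattern, action in index.get(msg[0], []):
        if len(pattern) == len(msg):   # crude linear match inside the bucket
            return action(*msg[1:])
    raise LookupError("no matching pattern: %r" % (msg,))

print(dispatch(("add", 2, 3)))  # 5
```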


Very cool!


so Common Lisp? :)

The problem with putting everything in a language and supporting everything equally is that people tend to do the wrong thing (like in JS up until the last few years) and languages get too big (Scala, D, CL).

I'd prefer a language that tends toward FP (and makes it idiomatic) but allows other styles, while making them a bit harder to use. (In a Lisp you can always make it look good again if you really know that that is the right thing to do.)


This is off topic. Is there a language that automatically generates code when missing functions are referenced? It would be good for top-down and test-driven styles of programming.

E.g. I start with a high level flow of a program.

  int main() {
    outputData( processData( getData() ) );
  }
When compiled, the compiler realizes all the called functions are missing and generates provisional stub code for them, with sensible defaults for parameters and types.

  int @getData() {
     return 0;
  }

  int @processData(int param1) {
     return 0;
  }

  void @outputData(int param1) {
  }
The program can compile and run. I then go ahead to fill out the content of the functions. The provisional functions are marked as such (the @ marker) where the compiler can run in a validation mode to flag all functions not implemented yet. When a function is done, the provisional marker can be removed.

As more function calls are added, each compilation generates more provisional functions, which can be filled out. Also as the completed function's parameters and types become concrete, the compiler can update the provisional caller or callee functions.

This kind of help from a compiler can really make test-driven development easy. I can write the tests first that make calls to yet-to-exist functions, and the compiler generates the provisional versions for me. And the tests run right away.


In statically typed languages, that's usually avoided, because a typo in a function name can escalate to subtle bugs.

Also, working with partial information complicates the type inference; in your example, at best, it could infer () -> `a for getData, `a -> `b for processData, and `b -> () for outputData.

In some languages (such as Smalltalk, Lua, and Ruby), there is an explicit "message not understood" hook that is called when a nonexistent function is referenced, and you can define stub behavior there.
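In Ruby, for instance, that hook is method_missing; a toy stub object in its spirit (the method names below just mirror the example above) lets the top-down program run before anything is implemented:

```ruby
class ExploratoryStub
  # Called whenever an undefined method is invoked on this object.
  def method_missing(name, *args)
    warn "provisional: #{name}(#{args.inspect}) not implemented yet"
    0  # sensible default, like the generated stubs above
  end

  # Advertise that we accept any message (keeps respond_to? honest).
  def respond_to_missing?(name, include_private = false)
    true
  end
end

app = ExploratoryStub.new
app.output_data(app.process_data(app.get_data))  # runs, warning three times
```

The difference from generated source stubs is that these exist only at runtime; there is no file to fill in afterwards.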


Good point about the function name typo in statically typed languages. Provisional and complete functions are supposed to be different. The compiler can have a validation mode to flag all provisional functions, which would catch a wrong function name intended for a complete function.

The "message not understood" hook in dynamic languages is good for runtime handling of missing functions, but I actually want the compiler to generate the source code for the provisional functions, to make exploratory development easier.


IIRC, Haskell has some explicit marker for adding stub functions ("doThatStuff x y z = undefined" or something like that). You still write the names for the function and any arguments, though.

I tend to write code bottom-up rather than top-down, testing it in a REPL and/or with tests, but have Emacs functions to generate boilerplate for languages that need it.


If you use an IDE like eclipse and a method isn't defined, one of the auto-correct options will be to create a placeholder method stub for it, and it will flag it with a //TODO stub to keep track of autogenerated ones. Not quite as automated, but it's pretty convenient.


Functional programming naturally reduces the complexity of the system being modeled through the discovery of common patterns in the data and the algorithms applied.

The foundation of the OO approach is modeling the system as it is, with all its complexity. This approach is cheaper, i.e. easier on the brain, yet it results in a much more complex and bloated system than necessary. Structurally identical code is duplicated throughout the system because it is bound to different data. Please spare me lectures about how this is an incorrect approach - you'd insult hundreds, if not thousands, of smart people whose source code I've seen over the last almost 20 years. You can see for yourself in any open-source OO project or at your workplace.



