The Evolution of a Haskell Programmer (2001) (willamette.edu)
137 points by teriiehina on Feb 8, 2016 | 109 comments



Phase 1: There are no bugs because strong typing, no side effects, functional nature.

Phase 2: OK, but those bugs are my own programmer errors.

Phase 3: I admit it, I have no idea what I'm doing.

Phase 4: OK, I can't even figure out what the Haskell I wrote two years ago was trying to do. I was smarter then.


As Rich Hickey often says, there are two things that are true of all bugs:

1) They passed unit tests. 2) They passed a type checker.


Well no, a lot of bugs didn't pass a type checker because a lot of languages don't have them. And some projects don't have unit tests.


> And some projects don't have unit tests.

If a bug did not cause a unit test to fail, then it passed the unit test suite. (Less cryptically, an empty set is still a set)


If we're getting that pedantic, the quote was "they passed unit tests". If the bug passed a 0-size set of unit tests, then it didn't actually pass any unit tests.


They didn't pass any, but they passed all.

In any case, Rich Hickey probably had more context for his words.


We should take this discussion and publish it in the world-famous (and extremely popular) "Journal of the empty set." You will find most of my work published there, too. http://pic.blog.plover.com/math/major-screwups-4/emptyset.pn...


Still though, 0% test coverage.


Division by zero..


How so?

10 (say) lines of code, 0 lines covered by tests, test coverage is 0/10. No division by zero, nothing undefined happening.


Meh, that's an odd statement since it unifies the classes of bugs which pass varying qualities of (1) and (2). Certainly, "what's true of all bugs (seen in production) is that they passed the unit tests and type checkers levied against them" is nearly tautological, but leaving out those classifiers makes it into a much less meaningful statement.


Which tells us something about the nature of the bugs we will find in our programs.

The type-checker prevents certain classes of bugs, inconsistencies in types, lots of silly typos and mistakes. But it doesn't fix errors in our logic or algorithms. The type-checker doesn't know you should have added instead of multiplied, nor does it fix conversion errors at the borders of our programs where we, say, write a Haskell data structure into a JSON file.
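For example (a contrived sketch, names mine): nothing in the types below says that computing an area means multiplying, so the checker happily accepts the wrong operator.

    area :: Double -> Double -> Double
    area w h = w + h  -- typechecks fine; should be w * h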


"The type-checker doesn't know you should have added instead of multiplied"

Dimensional analysis is a type system that knows exactly that.
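A crude sketch of the idea with plain newtypes (invented names; a real dimensional-analysis library also tracks units through multiplication and division):

    newtype Metres  = Metres  Double deriving Show
    newtype Seconds = Seconds Double deriving Show

    addLengths :: Metres -> Metres -> Metres
    addLengths (Metres a) (Metres b) = Metres (a + b)

    -- addLengths (Metres 3) (Seconds 4)  -- rejected at compile time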


Well yeah, of course, but what conclusion are you drawing from this?


Good thing type checkers stop many bugs from happening in the first place. I do agree with him that conceptually simple dynamic (Clojure) > complex typed (Java).


He's assuming that the language the code was written in was strongly typed, and that unit tests were actually written. That is not always the case.


He's saying that a strongly typed language with unit tests is not enough to prevent bugs. And he's right.


Strongly typed languages, and unit testing, do prevent or detect many kinds of bugs less expensively than the alternatives. That is why they're worth using.

Similarly, not all diseases are prevented by vaccination, but that's not considered a good argument to stop vaccinating people against those diseases that are prevented by vaccination.


I agree with you, but you're being argumentative. No one here is saying that we should stop unit tests because they can't find all the bugs.

Instead, the focus should be on what else we can do to improve the bug detection rate. This is an area that needs further research (and I would start with the observation that the quality of the unit test varies dramatically depending on who writes them).


The debate on dynamic vs strong is more important than testing vs no testing.


Why? Explain yourself.


Yes, this was the point of his quote. Strange how this point was missed in the other comments. Of course, Hickey is aware that not all languages are strongly typed.


There might still be far fewer bugs.


There's always at least one type checker, the one running in your head. Some languages provide another type checker to help offload some of this cognitive load.


I see you've never worked in vanilla JS.


Apparently this

  s f g x = f x (g x)
  k x y   = x
  b f g x = f (g x)
  c f g x = f x g
  y f     = f (y f)
  cond p f g x = if p x then f x else g x
  fac  = y (b (cond ((==) 0) (k 1)) (b (s (*)) (c b pred)))
is faster than this

  facAcc a 0 = a
  facAcc a n = facAcc (n*a) (n-1)
  fac = facAcc 1
and these are the first and second fastest.

> Interestingly, this is the fastest of all of the implementations, perhaps reflecting the underlying graph reduction mechanisms used in the implementation.

Could anyone elaborate on this a bit more?


The former is equivalent to

    fac' f 0 = 1
    fac' f n = n * f (n-1)
    fix f    = f (fix f)
    fac = fix fac'
so if the former is indeed faster, then explicit fixpoints beat tail recursion.


I am not sure these performance results are true any more.


Can someone explain to the non-Haskell folks how that first example works?


The author had (too much) fun using a combination of the SKI combinator calculus [1] and the "B, C, K, W" system [2].

[1] https://en.wikipedia.org/wiki/SKI_combinator_calculus [2] https://en.wikipedia.org/wiki/B,_C,_K,_W_system


Isn't Haskell internally using a variant of the SKI combinator calculus to represent all functions as data? This would mean less reduction for an already reduced program.


That would be true for a toy Haskell compiler. “The standard” Haskell compiler, GHC, can do a whole lot of optimisations and produces really good machine code nowadays.


Other people have provided good answers, but I'm just gonna add another link explaining the Y combinator: "The Why of Y" by Matthias Felleisen [1], which is so far my favorite explanation.

[1] https://xivilization.net/~marek/binaries/Y.pdf


S, K, B, C and Y are famous combinators that should be familiar to lambda-calculus nerds.

https://en.wikipedia.org/wiki/SKI_combinator_calculus

https://en.wikipedia.org/wiki/Fixed-point_combinator

The Y combinator (which is behind the name for this website) lets you define recursive functions without actually using recursion (the only recursive definition is Y itself, which is often available as a language primitive)

A less obfuscated version of the factorial using the y combinator would be this one:

    fac' f n = if n == 0 then 1 else n * f (n - 1)
    fac = y fac'
note how fac' is not recursive and instead calls its "f" parameter when it wants to do a recursive call. The y combinator then ties the knot by feeding `y fac'` back in as that "f" argument.

The other obfuscating factor in that program is the S, K, B and C combinators. The purpose of these is to let you write a program without any variable definitions or lambdas. Notice that when that program finally defines `fact`, it has the form

    fact = y {some big expression}
where the big expression contains only function applications and references to the previously-defined combinators. There are no references to the "f" and "n" variables that we had before. The neat thing about the SK family of combinators is that you can use them to build any function, not just the factorial:

https://en.wikipedia.org/wiki/Combinatory_logic#Completeness...

As for the speed comment, one way to implement lazy evaluation, as found in Haskell, is to compile programs down to those SK combinators and then use graph reduction to evaluate the result. I would expect a reasonable Haskell implementation to be able to automatically generate the unreadable combinator version of the program from the usual definition of factorial that we all know and love but maybe there was a Haskell implementation back in 2001 that wasn't as reasonable?

https://en.wikipedia.org/wiki/Graph_reduction

https://en.wikibooks.org/wiki/Haskell/Graph_reduction

https://wiki.haskell.org/Super_combinator

That said, dunno how the more recent Haskell compilers handle things. Hopefully some other helpful HN-er can enlighten us here.
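To make the compile-to-combinators idea concrete, here is a naive sketch of bracket abstraction, the classic translation from lambda terms to S/K/I (all names mine; real implementations use much smarter schemes):

    data Term = Var String | App Term Term | Lam String Term
              | S | K | I
              deriving Show

    -- Eliminate inner lambdas first, then abstract each binder away.
    compile :: Term -> Term
    compile (Lam x body) = abstract x (compile body)
    compile (App f a)    = App (compile f) (compile a)
    compile t            = t

    -- [x]x = I;  [x](M N) = S ([x]M) ([x]N);  [x]M = K M otherwise
    abstract :: String -> Term -> Term
    abstract x (Var y) | x == y = I
    abstract x (App f a)        = App (App S (abstract x f)) (abstract x a)
    abstract _ t                = App K t

The output mentions no variables at all, which is exactly the shape of the obfuscated factorial above.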


To be clear, Y itself doesn't have to be recursive either. You can express it as an ordinary lambda, and write a recursive program without any named values or functions.


The problem with Haskell is that you can't express Y as an ordinary lambda in a typed language unless your type system allows recursive datatypes (at which point you are also pushing the recursion back into a language primitive)
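Concretely, with a recursive newtype doing the dirty work (a sketch, names mine):

    newtype Rec a = In { out :: Rec a -> a }

    -- y itself is now an ordinary, non-recursive definition
    y :: (a -> a) -> a
    y f = w (In w) where w r = f (out r r)

    fac :: Integer -> Integer
    fac = y (\f n -> if n == 0 then 1 else n * f (n - 1))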


Every time I see this a get further down the list of examples that I understand. Unfortunately, I still don't make it very far before I'm lost.


Church of the No Fixed Point programmer:

    succChurch n f = n f . f

    facChurch n f = n g (const f) id where
            g f n = n (f (succChurch n))

    toChurch 0 = const id
    toChurch k = succChurch (toChurch (k-1))

    fromChurch n = n succ 0

    fac = fromChurch . facChurch . toChurch


Are there actually any use cases where Haskell would be recommended over other languages? The sheer level of thinking, and the multiple different ways to do what seem like trivial tasks, is mind-bending... perhaps I just never did it enough to reach a level where that would not be the case.


I mean, the multiple-ways-to-do-things bit is possible in any language. Even Python, with its guiding principle that 'there's only one way to do it', would allow you to do it in different ways:

    def factorial(n):
        if n == 0:
            return 1
        else:
            return n * factorial(n-1)

    def factorial(n):
        return 1 if n == 0 else n * factorial(n-1)

    def factorial(n):
        return eval('*'.join(map(str, range(1, n+1))))

    from functools import reduce  # a builtin in Python 2; moved here in Python 3

    def factorial(n):
        return reduce(lambda x, y: x * y, range(1, n+1))

    from itertools import islice

    def factorial(n):
        def facts():
            n, r = 1, 1
            while True:
                r, n = n*r, n+1
                yield r
        return next(islice(facts(), n-1, None))
The difference is: historically, Haskell was an experimental language, a test-bed for trying out new and weird ideas, which meant that experimentation with style has been a pretty normal thing. Haskell "culture" encouraged being clever, because that's how you found new, interesting ideas. Python, on the other hand, doesn't have that legacy: most of the implementations above are non-idiomatic because they're too clever.

So yes, you can write things lots of ways in Haskell. But, if you had production code in Haskell, you wouldn't try to be clever, in the same way that you wouldn't (or shouldn't) try to be clever in production code in Python. You'd write simple, straightforward code, just like

    fact n = product [1..n]


The slogan "one obvious way to do it" applies to the design of the language itself. It means that in terms of language constructs, no value is placed on coming up with a creative profusion of operators and whatnot (as in Perl's "there's more than one way to do it.")

It doesn't mean that the language design is going to somehow ensure there's only one way (even only one idiomatic way) for users to implement any particular algorithm. It doesn't mean everyone has to use the same web framework. It's really not prescriptive, except for Python itself.


More like are there any use cases where it wouldn't be recommended? (Answer: yes, cases where you need very consistent runtime performance, or cases where you absolutely need manual memory management - both a lot rarer than people think they are).

You can overcomplicate things in any language. Haskell gives you better tools for dealing with complexity - which some people use as a reason to push the complexity to the limit. But if you stick to abstractions that are actually providing value (e.g. by letting you reduce code duplication) you'll find it's a very good language.


> yes, cases where you need very consistent runtime performance

Though, if you need hard real time, even C can be tricky.

Haskell can be written for soft real time things, but it's also tricky. Rust might be a better bet?


If I really needed to do hard real time, I might use Haskell... to generate C: https://hackage.haskell.org/package/atom


Yes, Haskell makes for a great compiler and language manipulator. Not too surprising, given its ML ancestry.


> yes, cases where you need very consistent runtime performance

A lazy language is probably not the answer to this particular problem.


I write Haskell in production. One of the things we've built is a unit testing harness that denies application-level code any access to unrestricted IO. If a proper mock is not in place, type checking fails before the tests even begin to run.

All tests are thus 100% reliable and parallelizable.

This capability of isolating and controlling side effects makes Haskell very uniquely well-suited to applications that need to withstand a lot of maintenance by a lot of people for a very long time.


Could you elaborate more or point to some sample code?


Sure. I've previously written up a short description of the approach here: http://andyfriesen.com/2015/06/17/testable-io-in-haskell.htm...

The general high-level approach is that your business logic functions run in some monad, constrained by the set of capabilities it requires. Each capability is represented as a typeclass.

Your type signatures say that your business actions work in every monad that supports those capabilities, so the type checker rejects any code that tries to use "unblessed" side effects.
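A minimal sketch of that style (class and function names here are invented for illustration, not our actual code):

    -- each capability is a typeclass...
    class Monad m => MonadLog m where
      logLine :: String -> m ()

    class Monad m => MonadUserStore m where
      lookupUser :: Int -> m (Maybe String)

    -- ...and business logic is polymorphic in m, so it can only use
    -- the capabilities it declares; no sneaking in raw IO
    greet :: (MonadLog m, MonadUserStore m) => Int -> m String
    greet uid = do
      mu <- lookupUser uid
      let msg = maybe "unknown user" ("hello, " ++) mu
      logLine msg
      return msg

Production code instantiates these classes over IO; tests instantiate them over a pure monad, so an unmocked effect is a type error.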


Thanks, that's a cool approach! I'm wondering if a free monad approach would have any advantages over this.


It's similar, and there's endless debate comparing the various effect handling systems. I like the one the OP recommends because it's fast and lightweight, but free monads work just the same. Free is really only necessary if you're going to do lots of introspection of the monad values; even then you'd probably still write your functions against such a typeclass stack and then translate the values out into free transformer stacks.
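For anyone following along, the appeal of free monads is that the program becomes an ordinary data structure you can inspect before interpreting it. The standard definition, sketched:

    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)  = Pure (g a)
      fmap g (Free fa) = Free (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g  <*> x = fmap g x
      Free fg <*> x = Free (fmap (<*> x) fg)

    instance Functor f => Monad (Free f) where
      Pure a  >>= k = k a
      Free fa >>= k = Free (fmap (>>= k) fa)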


I think the way to interpret this page is not "these are all the ways to do this task in Haskell" but rather "Haskell allows me to express this problem in these ways".

The vast majority of examples on the page are curiosities which appeal to some people and arise from all the academic attention both given to Haskell and which Haskell permits. You can be a competent and highly-effective Haskell programmer without understanding the majority of these.


Yeah, that's true... although in non-functional languages there still seem to be fewer ways... well, it feels like with Haskell there are multiple ways to think about the solution to the problem, whereas in other languages there are just multiple ways to write it down.

"Maybe" still blew my mind away first time I saw it...MAYBE?? MAYBE?? :D. Sure it would all be beautiful if I spent time learning it.


Did you read the "But Seriously" section of that page? It explains how the different examples actually highlight some advanced functional programming techniques.

In practice, there are some of the examples that are more important than the others:

- The "accumulating" version might use less stack-space than the naive recursion so its a very important technique to learn.

- The foldl version is a more structured version of the accumulating version, kind of like how a for loop if is more structured than a goto.

- The memoizing (aka caching) version is useful if your funcion is very expensive to compute.

- And the "tenured professor" obviously makes the best use of the existing syntactic suger :)

The rest of the entries are a bit more overkill for a factorial program.
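Concretely, the first three look something like this (modulo naming):

    -- accumulating: multiply before recursing, so the stack stays flat
    facAcc :: Integer -> Integer -> Integer
    facAcc a 0 = a
    facAcc a n = facAcc (a * n) (n - 1)

    -- foldl: the same loop, with the recursion packaged up for you
    facFold :: Integer -> Integer
    facFold n = foldl (*) 1 [1..n]

    -- memoizing: a lazy list caches every factorial computed so far
    facs :: [Integer]
    facs = scanl (*) 1 [1..]

    facMemo :: Integer -> Integer
    facMemo n = facs !! fromIntegral n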


I also write Haskell in production. Haskell's strongest strength, in my experience, is the confidence I have when refactoring. I've never experienced that kind of confidence making large changes to working code in any other language, even with fairly exhaustive test suites; and without the overhead of maintaining a fairly exhaustive test suite.

In light of that, I would definitely recommend Haskell over other languages for any use cases where you expect you're going to be wrong, repeatedly, about how best to structure your program. Especially in domains that aren't known very well or when the demands on the program are likely to be highly dynamic.

Though really, at this point I would tend toward using Haskell when there is not much reason to pick something else.


I implemented https://threegoodthings.xyz/ in Haskell (code: https://github.com/thedufer/threegoodhaskells). It took longer than my first implementation (although less than I expected), and the type system has cut down on bugs by a lot when I add new features. The lack of libraries for things that I'd expect to be easy has been a bit of a pain occasionally, though.


You can always make something more abstract in Haskell. The challenging part is creating a good abstraction, but not going so abstract that only a few people in the world can understand your code.


Every year since I've started to read about programming (maybe 10 years), Haskell (and pure functional programming in general) has been the Next Big Thing. I don't have anything in particular against Haskell, but its failure to "arrive" makes me think the answer to your question is no.


Evolution is slow. Java, imho, brought garbage collection to the mainstream; Lisp had been doing that for over 30 years by then.

I don't think Haskell itself will ever be the language of choice; it'll probably fall into (remain in?) a Smalltalk-like life.

That said, functional idioms seem to be creeping into lots of mainstream languages, so it's not like never. Rust seems pretty heavily influenced by ML, and somewhat by Haskell.


Only on Hacker News is Rust considered a mainstream language, but I see your point :).


Ha! Totally agree. I do think Rust has a lot of potential to take a big bite out of C/C++'s current share. But that's 10 years away, because evolution is slow.


Well, functional programming has a lot of good ideas in it; it's just that people generally want the functional stuff in addition to, or at least only partially replacing, the imperative stuff. Laziness has gone out of style. Object orientation isn't going anywhere.

Functional programming is arriving right now, but not as Haskell.

As an aside, I think of Haskell as a sensei/guru saying, "Young grasshopper, if you want access to the glory that is my wisdom you will have to wash my dishes and scrape off my calluses for the next three years", to which my personal reply is "Hell no". It's unfriendly and it doesn't respect my time. Is there wisdom in Haskell? Probably, but I don't have the patience to deal with the bullshit.


I don't think Haskell will ever become mainstream but concepts that originated in research languages like Haskell are increasingly common in more pragmatic mainstream languages. Swift owes a lot to the ML & Haskell family of languages, for example, and even C++ and Java have lambdas now.


... which Lisp had in 1960.


When I look at some of the fundamental differences between Lisp and the mainstream languages (like everything-as-expressions and explicit scoping of variables and functions), I realize that Lisp is centuries ahead of its time. How long is it going to take mainstream languages to implement explicit scoping like let and flet? 75 years after Lisp was born? How long for everything-as-expressions (which gives code a logical structure and eliminates the need for throwaway variables)? 100 years? 200 years? Will we even have programming languages by then?


Ruby counts as mainstream, right? It is expression based.


> explicit scoping like let and flet?

I'm not sure how this is different than introducing a block in C.


I haven't programmed in C, but my view comes from this video: https://www.youtube.com/watch?v=QM1iUe6IofM&feature=youtu.be...

What would be the C equivalent of:

    (let ((last-response))
      (defun get-response ()
        (print last-response)
        (setf last-response (read)))) 
    
    (print (get-response))
    (print (get-response))
Notice that no other function can use last-response. It only exists for get-response, but it exists outside the scope of get-response. Does C have this?


As used here, there are two ways you can do this in C.

Since the variable in question is only used in one function, you could declare the variable static to that function. It takes some care to make this reentrant, but that's possibly the case in lisp as well.

To share between multiple top-level functions, you can only limit scope to the individual file.

The bigger thing C can't do that lisp can here is defining new functions in arbitrary places. In C variants where you can define local functions, you can indeed capture variables in local scope. And within a function, you can limit scope by creating a new block:

    int test() {
        int foo = 7;

        {
            int foo = 9;
        }

        return foo; // returns 7
    }

Even in those C variants, you can't place a new function in global scope from within a function.


I did not know C had static variables at the function level (thank you for pointing this out), but static variables are not the same thing as what I'm thinking of when I say "explicit scoping". It would basically be like writing curly braces for variable declarations the same way one does with function definitions. Because Lisp has this, it's trivial to extend the scope of a variable to cover multiple functions:

    (let ((x))
      (defun func1 ())
      (defun func2 ()))

This is what I mean when I say explicit scoping. And it means that the reader/debugger/maintainer knows that this variable only exists for these functions. A static variable's scope is implied by the scope of whatever it's defined in (not sure if that's limited to functions in C); here X exists outside the functions and has its own scope, not just its own extent.


"Because lisp has this it's trivial to extend the scope of the variable to cover multiple functions:"

Yes, I mentioned that static variables in a function would not extend to multiple functions.

You can share a variable between a restricted set of top level functions by grouping those functions in a single object file that does not expose the variable in question.

I think my only objection is that your phrasing implied what's different is the explicitness of the scoping, when it's actually the flexibility of how functions can be defined (unsurprising, from a lisp).


I used the word explicit because I was thinking something along the lines of: the scope in Lisp is explicitly delimited with parentheses (). Whereas something like a static variable (or pretty much any variable that's defined in Algol-like languages) has implicit scoping: it's implied that it shares the scope it's defined in (or some other implication, like C#'s public). I'll have to find a better way to word it, because you're right. Thank you for the discussion.


Many, perhaps most, programmers from mainstream languages (i.e. Algol derivatives) find it difficult to get their heads around Haskell in particular and functional programming in general (including Lisp). This is not a problem with Haskell.


This is the excuse we've been hearing for the past 20 years. "It's not a problem with the language/paradigm, it's a problem with you/humanity".


I learned Haskell in school. Then I worked with it in a web startup. I've done fun hobby projects with it. I really enjoy it a lot. I'm not super smart and I don't know any advanced math. I'm not sure what the problem is at all. The language works fine, has a vibrant community, it's growing, and it's inspiring lots of programmers and language designers too.


I don't think I'll ever be a great concert pianist, but I just accept this as one of my limitations. I have never put in the necessary effort, and even if I did, I doubt I would have the required amount of talent. It would never have occurred to me to blame it on pianos.


And if the piano were an instrument almost nobody used and on which almost no popular or well-regarded music were played or composed, I'd agree with this snarky reply. This isn't a particularly apt analogy because concert pianists are judged by their output (i.e. piano music). The argument from the Haskell community seems to be that almost nobody is interested in using the tool (Haskell) for its intended purpose (writing software) because the people themselves are deficient. Haskell is a character-building exercise to write blog posts about, even after all these years, rather than something to be adopted in production.

And before you think I'm just hating on Haskell, the same is essentially true of Lisp as well.


My position is that learning functional programming lies within the grasp of many if not most programmers if they put in the necessary effort, and is well worthwhile, as it will make them better programmers: they will see ways of solving problems that hitherto wouldn't have occurred to them, or that are painful to implement in other languages. I wouldn't be put off a language (whether Haskell or Lisp) simply because some of its programmers come across as a bit condescending.


Usually, in any other area, an interface made intentionally to be unintelligible to the general public as well as a professional target audience would be derided as bad design.

In the case of Haskell, it is taken as a badge of virtue, and it is a matter of faith that the problem is just that all other programming languages have mis-educated everyone who doesn't "get religion" about Haskell.

If Haskell's interface is hard to understand, that is a problem with Haskell.


Haskell's interface isn't intentionally unintelligible. In fact, it's not unintelligible at all (don't confuse playful exploratory code with standard code, just like you wouldn't think obfuscated C is standard). It's just that you're unfamiliar with it. Most ideas behind Haskell are quite logical and systematic.


Haskell's "badge of virtue" is that it's unpopularity has freed the language creators to improve the language, to get closer to solving the extremely sophisticated problem it is trying to solve (completely safe, efficient, correct programs), without breaking anyone; not that others are miseducated


...and yet, some people really like programming in Haskell, and use it for all sorts of problems. Do you think they're all just pretending to have fun, or that they're obliviously unaware of the merits of mainstream languages like Java, Python, or Javascript?


> Usually, in any other area, an interface made intentionally to be unintelligible to the general public as well as a professional target audience would be derided as bad design.

Evidence that Haskell was deliberately made difficult to understand? And supposing this were true, why hasn't anyone made the interface more intelligible?

> If Haskell's interface is hard to understand, that is a problem with Haskell.

That's a bit like saying that if the mathematics used in the General Theory of Relativity is hard to understand, it is a problem with the General Theory of Relativity.


> That's a bit like saying that if the mathematics used in the General Theory of Relativity is hard to understand, it is a problem with the General Theory of Relativity.

Well, but that's true; if there were easier-to-understand mathematics with equal utility, it would be superior. The math being hard to understand is a problem, but (as far as anyone can tell) it's a necessary cost with General Relativity.

So, for Haskell -- granting, for the sake of argument, that it is particularly hard to understand (which I'm not sure is really the case) -- the question is: is the complexity a necessary price for some benefit that is worth the cost?


The more functional programming, immutability, strong typing, etc. that gets into mainstream languages, the less compelling Haskell becomes.


Do note that Haskell is moving forward all the time, and at a rate faster than mainstream is adopting these ideas.

Mainstream languages have lambdas, more immutability and stronger types? That's great (non-sarcastically!). Haskell now has GADTs, rank-n types, polymorphic kinds, type families (poor name for type-level functions), and more.

These things make Haskell still quite compelling over other more mainstream languages, if you're willing to learn.
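To give one concrete taste: GADTs let each constructor pick its own result type, which is what makes the textbook typed interpreter work.

    {-# LANGUAGE GADTs #-}

    data Expr a where
      I   :: Int  -> Expr Int
      B   :: Bool -> Expr Bool
      Add :: Expr Int  -> Expr Int -> Expr Int
      If  :: Expr Bool -> Expr a   -> Expr a -> Expr a

    -- ill-typed terms like `If (I 3) ...` are rejected at compile
    -- time, so eval needs no runtime type checks at all
    eval :: Expr a -> a
    eval (I n)      = n
    eval (B b)      = b
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e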


> Do note that Haskell is moving forward all the time, and at a rate faster than mainstream is adopting these ideas.

In some sense, yes. On the other hand, there are diminishing returns. As an example, the increase in software quality from a Java-like language adopting any of algebraic data types, parametric polymorphism or first class higher order functions is probably much bigger than Haskell moving to full on dependent types.


I have yet to find a mainstream language which implements any of those features well. Show me one which didn't have and now has proper ADTs.


Depends on how high you set your standards. E.g., Python mostly has higher-order functions that work, even though (mostly for syntax reasons) they interact badly with mutating variables; a bit of sugar would help a lot.

In any case, your point stands and complements mine.

(Apropos ADTs, did Scala always have them? From what I can tell their syntax is pretty awkward, though.

Google's Protobuf, which in some sense denotes a type system, even if not a programming language, did get something like ADTs: https://developers.google.com/protocol-buffers/docs/proto3#o...

I am pretty lenient, and would go for anything that resembles a compiler-enforced tagged union. (Think C-style union.))


> I am pretty lenient, and would go for anything that resembles a compiler-enforced tagged union

http://www.boost.org/doc/libs/1_60_0/doc/html/variant.html ??? (and I think it's supposed to be coming in C++17)


That's interesting, thanks! It seems this one discriminates the different possibilities by type, and not by some extra tag (like the constructor in Haskell's ADTs). I wonder how that works, if you want to write something like the `either' function (or even just write down its type in C++):

    either :: (a -> c) -> (b -> c) -> Either a b -> c
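For comparison, the Haskell version just dispatches on the constructor tag, so the "extra tag" is Left/Right itself:

    either :: (a -> c) -> (b -> c) -> Either a b -> c
    either f _ (Left  x) = f x
    either _ g (Right y) = g y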


Haskell is a nice language and you can build regular stuff with it. I wish people shared more prosaic Haskell rather than shock-value postdoc thesis material. I know this article is a joke, but it plays into that stereotype.

There's this helpful site that suggests libraries for various things: http://haskelliseasy.com Someone should make a companion site listing snippets of code for common tasks showing plain ways of doing things.


Another post convincing me that writing versions of the factorial function is the primary focus of Haskell. I keep hearing about Haskell in production, I would like to see examples of the language actually doing something useful.


Here's a screencast of me making a blog application with the Yesod web framework: https://www.youtube.com/watch?v=SadfV-qbVg8

While I would also like more production Haskell examples, I wouldn't read much into this webpage; it's basically just a 15-year-old joke.


For our last 2 clients (for both, offering a "data team as a service"): the reporting tools were written in Haskell, using Servant [1], which itself got started at Zalora doing the same thing, to set up a customer service management tool. The motivation was that it made painful stuff much simpler and faster to write (performance was a bonus).

It's also used for a bunch of tricky data manipulation problems that were harder to solve using pure SQL. Example: identify a unique customer based on attributes (email, phone, address, etc.) one or more of which can change when a customer tries to create a "new" profile to grab the $10 new customer signup voucher. This ran recursively on the entire customer dataset (12 countries, some customers had created as many as 250 profiles) in about 4 seconds on a m3.medium instance.

We don't/didn't write blog posts or tweets about it though. I think the set of people who write blog posts (i.e. both have free time to do so, the talent, and the motivation) is relatively small, and the number of blog posts and other visible stuff being published is proportional to the size of the community, so you'll see many more Node.js/RoR posts than you will on using set theory to reduce a 5,000-table flawed data model systematically or the kind of everyday stuff Haskellers are doing everywhere. Also, personally at least, I don't feel like I know enough to have much to offer by writing a blog, so I stick to making these semi-anonymous HN comments.

Interviewing Haskellers, I found a lot of examples of similar work - CRUD tasks, web services, etc. built in Haskell within a larger organisation and running quietly in the background. Something small and modular that can be tacked on quietly.

One of the creators of Servant moved on to Tweag [2], a French big data company, where he's working on PB-size distributed machine learning projects for large corporate clients, which I think is where Haskell really shines today. I suspect NDAs will stop them from talking much about it... I've always wanted to do similar work for clients but so far, no dataset was "big" enough to justify moving away from well established R libraries.

[1] https://github.com/haskell-servant/servant

[2] http://www.tweag.io/


This is such a trite criticism. This post isn't evangelizing the language; it's describing the coding styles of different Haskell engineers, using a simple toy problem (that happens to be a canonical algorithm for teaching recursion and FP) to make it somewhat comprehensible to a layman.

Every time I read a comment like this it reminds me how much fashion and an unsubstantiated cult of "viable for production" publicity steers the industry instead of curiosity and experimentation.

LMGHTFY: https://github.com/search?l=&o=desc&q=language%3AHaskell&ref...


First google result: https://wiki.haskell.org/Haskell_in_industry

You'll probably recognize some companies, such as Facebook, Google, Microsoft.


> I would like to see examples of the language actually doing something useful.

A Haskell program I wrote to find and match subtitle files for a particular release of the movie I have.

https://hackage.haskell.org/package/hastily


Sounds like confirmation bias. You can find lots of code written in Haskell on GitHub. Try searching something there while adding "language:haskell".

Anecdata: Haskell pays my bills and I've never coded a factorial function at work.


Won't product [1..n] lead to a space leak?


[1..n] is not an array but a lazily generated linked list. So you'd get a space leak only if it loaded the entire list into memory, or if it kept all of the intermediate computations (called thunks) in memory. The question is: does it do that, or can it use tail-call optimisation, which would effectively compile to a loop with an accumulating variable, discarding the thunks?

I admit I am now stuck looking at http://hackage.haskell.org/package/base-4.8.2.0/docs/src/Dat... as it uses foldMap, which I am not used to. According to https://wiki.haskell.org/Fold foldl can make use of TCO. When I get some time I will see if I can produce the thunks, as I am very interested in this!
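For what it's worth, the usual way to rule out the thunk build-up by hand is the strict left fold (a sketch):

    import Data.List (foldl')

    -- foldl' forces the accumulator at each step, so no chain of (*)
    -- thunks builds up and the list is consumed as it is generated
    fac :: Integer -> Integer
    fac n = foldl' (*) 1 [1..n]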


There are various ways to view exactly what GHC produces (I have one in my comment below).


With a decent compiler (I think GHC is up to the job, but I've not checked), stream fusion should ensure that the intermediate list never actually exists.


The Prelude implementation is foldMap on lists. As I understand it, a cons cell will be created with the head pointer pointing at 1, and the tail at a lazily evaluated thunk. (*) tries to evaluate, which creates a cons cell pointing at 2, and a thunk. 1 and 2 are multiplied. So we've got two cells hanging around, but no references to the first one, so the GC can grab it whenever. It'll just kind of chug through the list.

It's kinda wasteful to keep making the cells, but it's easy to read. Shrug.

However, GHC is very smart. It might be clever enough to optimize away the cells. It might also use immediate ints, rather than pointers to '1'; I'm pretty hazy on when there's a machine int and when there's a bigint.


Here is the output of

    main = print $ product [1..100]

    ghc-core-html --ghc-option=-ddump-simpl --ghc-option=-dsuppress-all fact.hs
https://rawgit.com/jonschoning/c4ad2129aa7d258e65f9/raw/f8ed...

You can see

    main_go (plusInteger x_a3Sq main4) (timesInteger eta_B1 x_a3Sq);
where it compiles to a recursive loop with an accumulator

Typically the [1..100] gets compiled to (enumFromTo 1 100), and it can be desugared further.

So, yeah, foldMap compiles to foldr, which may compile down further.


foldr usually gets deforested in GHC, so the intermediate list probably doesn't exist.

GHC rewrites most of the common built-in functions to eliminate intermediate trees and lists. You can see a list https://downloads.haskell.org/~ghc/7.0.1/docs/html/users_gui... (section 7.14.4; ctrl-f "good producer").
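Those rewrites are expressed as RULES pragmas; the classic example from the GHC docs is map fusion:

    {-# RULES
      "map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
      #-}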


It could, depending on the size of n.


Any updates since then, with FTP (the Foldable/Traversable in Prelude change)?


The last 5 or 6 examples clearly capture what is wrong with programming in J^Hgeneral.

The tendency to pile up useless crap^Wabstractions is the cause of suffering.

Imagine that your doctor and financial adviser are doing this. Hey, wait a minute...



