
"Reason 1: Haskell shines at domain specific languages.

"[...] if you can embed a domain specific language into the programming language you’re using, things that looked very difficult can start to appear doable. Haskell, of course, excels at this. [...]"

Says who? Just because I can write `f(x)` as `f x` with full-featured operator precedence and associativity tables doesn't mean I can introduce new syntax into the language. I can't even find a moral equivalent of ELisp's rx package for Haskell.

"[...] Lisp is defined by this idea, and it dominates the language design, but sometimes at the cost of having a clean combinatorial approach to putting different notation together."

This is either weird nonsense or the old variable-capture argument. You can write crappy functions in any language too; that doesn't mean we should take away the programmer's ability to write new ones.

"Reason 2: Haskell shines at naming ideas. If you watch people tackle difficult tasks in many other areas of life, you’ll notice this common thread. You can’t talk about something until you have a name for it."

I applaud the willingness to appeal to natural language, but this isn't even close to true.

This blog post seems like an argument for Haskell's value based on specific times the author has enjoyed programming in Haskell. I call post hoc ergo propter hoc shenanigans.




"Just because I can write `f(x)` as `f x` with full featured operator precedence and associativity tables doesn't mean I can introduce new syntax into the language."

There are two definitions of "DSL", and I find mixing them to be a bad idea. I prefer to confine "DSL" to a fully-fledged actual language, with its own parser and evaluator. Haskell is just about the only language I know that makes this easy enough to actually consider as the solution to a problem.
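
For instance, a minimal sketch of that kind of DSL, assuming the parsec package (the grammar and names are my own invention): a tiny standalone arithmetic language with its own parser and evaluator fits in a handful of lines:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- parse and evaluate in one pass: each parser returns the value of what it parsed
    expr, term, factor :: Parser Integer
    expr   = term   `chainl1` (char '+' >> return (+))
    term   = factor `chainl1` (char '*' >> return (*))
    factor = (read <$> many1 digit) <|> between (char '(') (char ')') expr

    -- parse expr "<dsl>" "2+3*4"  ==>  Right 14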

Then there's "a glorified API made to look like a language", a definition I don't like but which is clearly in current use, and the one used in this post. Haskell is pretty decent at that, but of course you can't escape the fact that you're using Haskell. Then again, people pretty freely use the term "DSL" in Ruby, where you can't escape the fact that you're in Ruby. In either case, "adding new syntax" is not actually part of the general definition of DSL nowadays.

The "adding of new syntax" is an illusion, and I guess that's part of why I don't like this definition of DSL: there's an element of deception in it, and in practice I've found that people who aren't fully comfortable with the base language the DSL is embedded in often end up deceived and less effective.

Either way, if you really, truly want new syntax, Template Haskell can mostly (though not entirely) keep up with Lisp. There may not be, at this moment, a precise match for the rx package on Hackage, but I believe Haskell has all the necessary functionality to create one, up to and including compile-time regular expression compilation. Though given the way Haskell works, compile-time RE compilation is far less useful, because it's easy to make the compilation happen only once even at run time. As is so frequently the case, what takes a macro in Lisp does not require a macro in Haskell.
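
To make that last point concrete, here's a minimal sketch, assuming the regex-tdfa package (the pattern and names are made up): a top-level binding is evaluated at most once, so the regex is compiled a single time at run time, without any macro:

    import Text.Regex.TDFA (Regex, makeRegex, matchTest)

    -- a top-level binding like this is shared and forced at most once,
    -- so the pattern is compiled exactly one time at run time
    datePattern :: Regex
    datePattern = makeRegex "[0-9]{4}-[0-9]{2}-[0-9]{2}"

    isDate :: String -> Bool
    isDate = matchTest datePattern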

"This is either weird nonsense or the old variable capture argument."

No, it's a functional purity argument. Lisp in practice has the same problems with mutation and state as pretty much any other language. Insert debate about the importance of functional purity here. (That is, I'm not making the argument now; I'm just letting you know that's what was being referenced.)


>No, it's a functional purity argument.

I'm pretty sure it's a reference to the fact that combining macros can be risky; one of the risks is variable capture.


That's a subset of the functional purity problem. It's a relatively commonly understood one, but Haskell has taken the argument far further than most, to the point where variable capture is now only a subset of a larger argument about state and mutation. The argument is not merely that you've got some state being mutated in a bad way; the argument is that the fact you've got state mutating at all is a bad thing. In this case, it's state in the compiler phase, but it's ultimately the same argument. The argument would be that combining any two things that mutate state is inferior to combining two things that don't, at any level.

I'm personally still skeptical of this general argument, but I mean that in the "true" sense of skeptical. I think in a lot of ways the benefits have panned out as promised but there's some costs still swept under the rug. Rather a lot of the development ongoing even now in the core Haskell community can be viewed as various ways of lowering or eliminating the costs associated with their approach.

(For instance, the still-ongoing, but rapidly converging, changes associated with trying to create a decent string library that does not treat strings as linked lists of integers. On the one hand you can see it as normal feature work, but on the other it's a way of trying to go from having an unusually bad string story induced by functional purity to potentially having an exceptionally good one. It's this sort of work that actually keeps me interested in Haskell; I don't know of any other community doing so many actually new things to prove out their philosophy, instead of rehashing mutation-based OO in yet another way.)
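
To make the string point concrete, a minimal sketch assuming the text package (the names are my own):

    import qualified Data.Text as T

    -- the built-in String type is literally a lazy linked list of characters
    greeting :: String          -- i.e. [Char]
    greeting = "hello"

    -- Data.Text stores the same content as a packed representation instead
    shouted :: T.Text
    shouted = T.toUpper (T.pack greeting)   -- "HELLO"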


Imagine Haskell lacked an "if then else" expression. Then:

   my_if :: Bool -> a -> a -> a
   my_if True  ifTrue _       = ifTrue
   my_if False _      ifFalse = ifFalse
Usage (those 2 lines are equivalent):

   my_if (x > 42) (foo x) (bar x)
   if (x > 42) then foo x else bar x
That's the basis of what we call combinator libraries. Application syntax and infix operators, used wisely, actually feel like ad-hoc syntax. Combined with lazy evaluation, they make fully fledged macros much, much less useful.
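
For example, a minimal sketch reusing my_if from above: because the arguments are lazy, the branch that isn't selected is never evaluated, just as with the built-in if:

    safeDiv :: Int -> Int -> Int
    safeDiv n d = my_if (d == 0) 0 (n `div` d)   -- the division is only forced when d /= 0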


Right, but I can add a sub-language with lazy evaluation, infix operators, and application syntax to Lisp...

I fully recognize Haskell can do a very useful subset of the features of real dynamic syntax. It's still a proper subset though.


What do you mean by "dynamic"? Syntax happens at compile time, right? Different syntax can apply to different parts of the source code, but that's still static. Did I miss something?


He probably means that the syntax can vary at compile time. An example:

Actual code I wrote (for https://github.com/stucchio/Mp3FS ):

    type Mp3fsM a = ReaderT Mp3fsInternalData IO a

    -- one wrapper per arity: apply the arguments, then run the resulting Mp3fsM action with the internal data r
    runMp3fsM  f r = runReaderT f r
    runMp3fsM1 f r = \x -> runReaderT (f x) r
    runMp3fsM2 f r = \x -> \y -> runReaderT (f x y) r
    runMp3fsM3 f r = \x -> \y -> \z -> runReaderT (f x y z) r
    runMp3fsM4 f r = \x -> \y -> \z -> \t -> runReaderT (f x y z t) r
In lisp, this sort of thing could be handled by a macro.


    In lisp, this sort of thing could be handled by a macro.
So why not apples-to-apples and use Template Haskell to do it?


Because it doesn't fit my original argument, that Haskell's combinatorial approach can do much (though not all) of what Lisp macros do.


I get it. `my_if` doesn't require separate recompilation of the functions that use it. Well played.

I still see a subset of the functionality of a language with macros over sexprs as the syntax. It will take me some time, however, to come up with a good terse reply to that issue that fits in these tiny text boxes. The gist would probably be: if one is going to really change the syntax of a language, the programs that use the changed bits have to be re-parsed at some point. That's somewhat tautological.


I actually agree. If you want, for instance, some weird scoping rule like

   ; example taken from PG's On Lisp
   ; "it" refers to the big-long-expression
   ; "aif" stands for "anamorphic if"
   (aif big-long-expression
     (foo it)
     (bar it))
then combinator libraries won't cut it. Template Haskell might, however (but it's not standard, and beside the point here).
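
The closest you get with plain functions is to make the binding of "it" explicit. A minimal sketch, using Maybe to stand in for Lisp's nil-or-value truthiness (the usage names are made up):

    -- "aif" as a plain function: the caller names "it" themselves via a lambda
    aif :: Maybe a -> (a -> b) -> b -> b
    aif (Just it) whenTrue _         = whenTrue it
    aif Nothing   _        whenFalse = whenFalse

    -- usage: aif (lookup key table) (\it -> foo it) (bar fallback)
    -- (this is just a reshuffled `maybe`; the implicit capture is what needs a macro)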

Now my point is that the subset we speak of is easier to achieve through functions with lazy evaluation. Macros are more capable, but harder to deal with. This is magnified by Haskell's paranoid type system: functions are checked in ways macros aren't.

So my guess is that neither system dominates the other.



