Why Lisp is a Big Hack (And Haskell is Doomed to Succeed) (axisofeval.blogspot.com)
110 points by yewweitan on Jan 3, 2011 | 87 comments



Let me see if I understand this: The argument is that a hypothetical language descending from Haskell will be (a) superior and (b) more popular than Lisp as we know it today and to any Lisp that might evolve from it.

I think there's too much hand-waving going on to agree or disagree with the premise. Language popularity is notoriously fickle and driven by considerations that don't even earn a footnote in the OP.

For example, one of the most popular languages in the world today is Javascript, a language that owes a little of its heritage to Lisp and none at all to Haskell. Why is it eating Lisp's lunch and Haskell's lunch today? Because it ships as the default runtime for the most popular application in the world, the web browser.

Another language that has the same breadth of reach is PHP. Again, its popularity is due to the fact that it is a default runtime for most of the world's hosted web server platforms.

If we are to guess which language is going to gorge itself on everyone else's lunches in the future, I would suggest we spend our time thinking about which platform is going to be insanely popular and what new language might be its default runtime.

I'm not providing any fresh insight, of course:

Let's start by acknowledging one external factor that does affect the popularity of a programming language. To become popular, a programming language has to be the scripting language of a popular system. Fortran and Cobol were the scripting languages of early IBM mainframes. C was the scripting language of Unix, and so, later, was Perl. Tcl is the scripting language of Tk. Java and Javascript are intended to be the scripting languages of web browsers.

http://www.paulgraham.com/popular.html


I agree with your handwaving point.

But there is one point worth underlining a bit, which is that you can't start with (traditional) Lisp and then build in something that can only be built on total, guaranteed restriction, because Lisp always lets you out. Haskell's STM is probably the best example; so far I don't think anyone has built a successful imperative STM, because if you let people out even a little bit, the STM abstraction tends to break. You have to be restricted for that to work in practice, even if in theory you could load all of that onto the programmer; the constraints are too hard for a programmer to just implement, let alone a team.
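For concreteness, here is a minimal sketch using GHC's stm package: the STM type lets a transaction read and write TVars, and retry, but not perform arbitrary IO, so the runtime is free to re-execute it (the account setup is invented for illustration):

    import Control.Concurrent.STM

    -- Transfer between two accounts. The STM type forbids arbitrary IO
    -- inside the transaction, which is exactly the restriction at issue.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)        -- retries until funds suffice
      writeTVar from (balance - amount)
      toBalance <- readTVar to
      writeTVar to (toBalance + amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      readTVarIO a >>= print           -- 60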

You can write macros in Lisp to implement these restrictions, of course, but they will be hard to make water-tight, and then you will be left unable to use anybody else's libraries because they will not honor these restrictions.

If there is anything Lisp can't do, any language in moderately common use that it is not strictly superior to, it is Haskell. You cannot just turn Lisp into Haskell with a few well-chosen macros, or even a lot of macros, without making it Not Lisp Anymore.

It may have taken a while, but we finally have a modestly practical language that is not merely Lisp warmed over or Lisp horrifically subsetted and bastardized; it's genuinely something different.


I was only talking about (a), the language being "superior" in a very "idealistic" and fundamental sense - i.e. being "more expressive".

I do believe that the son-of-FP I hand-waved about will also attain (b), being the most popular language, but that's much further out - say 50 years.


It could happen sooner: If there are enough posts like this, someone may be developing a programmable platform of some sort and think "I know! I'll borrow some ideas from Haskell for the DSL!!"

That's all it takes.


This is the first time I feel PG's `The Hundred-Year Language' may have an answer different from Lisp. On the other hand, `a witty blogpost proves nothing', to misquote somebody.


"A witty quote proves nothing" -- Voltaire


I wonder if Voltaire expected people to be quoting this ...


The great thing is: if and when Haskell ever evolves as far as described in the post, we'll have Lisp again - inside "Haskell". ;)


Looking at it that way is ignoring all the benefits that Haskell can offer over Lisp. If all you can think in is Lisp then you're doomed to write Lisp in any language, to turn a phrase.


No. Haskell has type safety which was never intended to be included in Lisp. On the other hand, Lisp has the s-expression syntax which allows for usable compile-time macros. Neither language is a subset of the other.

How useful type checking and macros are in creating real-world software is not a question I've seen answered well. Smalltalk-style languages (Python, Ruby) don't really have either.


"Haskell has type safety which was never intended to be included in Lisp."

Racket has a dialect with static types. http://docs.racket-lang.org/ts-guide/


Haskell has Template Haskell, which is (almost) as usable as Lisp macros.
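For example, here is a sketch of the classic demo: a splice that builds an n-ary tuple selector at compile time (the splice must be used from a different module than where sel is defined):

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH

    -- sel i n builds the lambda \(_, ..., x, ..., _) -> x that picks
    -- the i-th component of an n-tuple.
    sel :: Int -> Int -> Q Exp
    sel i n = do
      x <- newName "x"
      let pats = [ if j == i then varP x else wildP | j <- [1 .. n] ]
      lamE [tupP pats] (varE x)

    -- Elsewhere:  $(sel 2 3) ('a', 'b', 'c')  ==  'b'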


I think that without macros, any language claiming to "replace lisp" is only doing so superficially.

EDIT: Just to clarify, I don't think the Haskell designers are claiming that it "replaces lisp," just that this blog entry does.


I used to think so too, until I learnt about Haskell's very lazy evaluation. It's worth reading up on.


As a Haskell programmer I never feel the need to add any form of dynamic typing. In practice dynamic typing makes it harder to build programs, not easier.

I'm the first one to admit that Haskell's type system has a somewhat steep learning curve. But after a while, when you internalize it and learn to pattern match on GHC's type errors, day-to-day programming will become easier.


> In practice dynamic typing makes it harder to build programs, not easier.

OTOH, static typing could make writing some kinds of programs impossible. In an interview, Joe Armstrong of Erlang fame said that they tried hard to put a type-system on top of Erlang, but they have not been able to.

IMO, static typing should be optional; that is, you should be able to run a static-type checker on your programs without being constrained by it at every step.


No, JA said that they planned to add a type system, but their users didn't feel they needed it.

And, actually, they added types to Erlang: Dialyzer.

As for the optionality of static types... What is your experience, precisely? Mine tells me that type systems are, actually, your partners in program development. Quite intelligent partners - the Haskell type system, the Agda2 type system...


> No, JA said that they planned to add a type system, but their users didn't feel they needed it.

I know a different story.

http://learnyousomeerlang.com/types-or-lack-thereof

Begin reading from "Through the years, there were some attempts to build type systems on top of Erlang..."

> And, actually, they added types to Erlang: Dialyzer.

Yes, and they did it right: type-checking should be a task performed by a different tool.

> As for the optionality of static types... What is your experience, precisely? Mine tells me that type systems are, actually, your partners in program development. Quite intelligent partners - the Haskell type system, the Agda2 type system...

I do agree that static type systems are very helpful. What I don't like about them is that most incarnations of them force your program to be type-correct all the time. This is not the case when you are sketching a solution, or sometimes while fixing/improving code. Sometimes I was working on one part of a piece of software, breaking the static typing of the program as a whole along the way, and the compiler kept complaining until I had fixed or commented out some other (currently not impacted) part of code. I've found that to be very annoying. I would have very much preferred such type errors to be just warnings until I was ready to finalize my fixes.


>What I don't like about them is that most incarnations of them force your program to be type-correct all the time.

I think that they require your program to be correct at the time of checking.

>compiler kept complaining until I had fixed or commented out some other (currently not impacted) part of code

Looks much like Visual Studio. Am I right?


> I think that they require your program to be correct at the time of checking.

I don't understand: static typing requires all the source code - and I stress "all the source code" here, which encompasses unreachable code too, rather than just the code you are working on - to be type-correct; otherwise your program will neither compile nor run.

> Looks much like Visual Studio. Am I right?

Yes, but aren't all statically typed languages the same? I've found that to be the case with C++, Java, and Haskell.


> OTOH, static typing could make writing some kinds of programs impossible.

Often, these turn out to be incorrect programs or programs without a good structure/architecture. It is my experience that static typing makes it slightly harder to write the first version of a program, but subsequent alterations become an order of magnitude easier. For me, this is worth it, since alteration and maintenance are the largest part of most programs' life cycles.


> Often, these turn out to be incorrect programs or programs without a good structure/architecture.

Then this critique would apply to most - all? - Erlang programs too ;-)


There are times when you wish you had dynamic typing, but this is usually a fault of the type system and there are better answers than ripping the type system out entirely, once you ascend to a mathematical way of looking at it.

Suppose, for example, that you wanted an existentially quantified list, something like forall a. Ord a => [a], and to be able to mix variables of different concrete types in that list. Well, it's a slight pain to do that right now.
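For the record, the usual workaround looks like this - a sketch using ExistentialQuantification, with Show standing in for Ord so the mixed elements can actually be consumed afterwards:

    {-# LANGUAGE ExistentialQuantification #-}

    -- Each element is wrapped, so the list can mix concrete types as
    -- long as they all support Show.
    data Showable = forall a. Show a => Showable a

    heteroList :: [Showable]
    heteroList = [Showable (42 :: Int), Showable "hello", Showable True]

    main :: IO ()
    main = mapM_ (\(Showable x) -> print x) heteroList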

This goes back to the classic adage, the answer to broken maths is... MOAR MATHS!

Anyway, I would like a switch that would let you do that, even if it means adding a type sig. A better switch might be a LANGUAGE pragma, because this is one of those things you either really don't want at all, or you want it really badly.


> There are times when you wish you had dynamic typing, but this is usually a fault of the type system and there are better answers than ripping the type system out entirely, once you ascend to a mathematical way of looking at it.

Indeed. Having a statically typed language means that you are tied to a single static-typing system. OTOH, I guess you could apply different type checkers to the same dynamically-typed program.


Last time I looked Haskell and Lisp were totally different languages, with different communities, different technologies, different application areas, ...

Why should this change?


If he favors type safety and switching between modules of different type systems, Racket already has that, doesn't it?


He argues that some features - e.g., dependent typing (http://axisofeval.blogspot.com/2010/10/notes-on-dependent-ty...) - won't emerge from Lisp, because they approach the problem from the wrong angle: moving from no safety towards more safety, vs. Haskell's (and others') reverse approach.


Good point - I forgot to consider this approach. I'll have to come back to it at another point.

Generally though, I think Haskell's approach of re-adding freedoms is more fruitful than trying to add bondage to Lisp. And "the Haskell movement" certainly has a different kind of manpower behind it.


>And "the Haskell movement" certainly has a different kind of manpower behind it.

Can you clarify what you mean by the words "movement" and "different kind of manpower"?

How is progress in languages like Racket different from Haskell?

Why do you think that adding "interesting" type systems in Lisp would be "heroic" and less "fruitful"?


There are probably hundreds, if not thousands of researchers, who use Haskell (and related PLs) as their vehicle for type system research. People working on "gradual" type systems like Typed Racket are probably around a dozen, which means that progress in Haskell happens much faster and with more breadth.


While I think you make valid points, I do not agree with the conclusion.

First, I don't think tying innovation to the number of researchers is a safe predictor, since the relationship is clearly not linear.

Second, the fact that Haskell is a testbed for researchers doesn't make it the ideal language for "real world programming" in my opinion. It could be that it is, but again the correlation is less than clear.

And third - the most important point, in my opinion - the ideas developed in Haskell are fully available to other language implementors once they exist, and so is the researchers' experience. It means that a language like Typed Racket, or any other, could implement the best/most useful of those ideas in much less time than was needed to first discover them.


>Second, the fact that Haskell is a testbed for researchers doesn't make it the ideal language for "real world programming" in my opinion.

Once upon a time Lisp was a language for researchers.


It still is.


And what about Qi (http://en.wikipedia.org/wiki/Qi_%28programming_language%29, via @fogus)? Choosing Haskell as the only way to programming nirvana could eliminate other, different points of view. In fact, choose whatever will do.


When my post started a heated conversation in the ru_lisp LiveJournal community, someone there told of his personal experience with Qi: he managed to convince Qi that 42 is of type String.

That's the price of freedom of Lisp.


   Of course you have to subscribe to the idea that this
   safety and verification is something that's good and
   superior. I do.
Why is it superior? I've always believed that a superior language is one that is as close to natural language as possible. Natural language doesn't have strict type checking, and a great many things have been said with 'incorrect' grammar.


Take Epigram as an example (it's also mentioned in the blog post). Epigram is not Turing complete; as such, the language can be made strongly terminating. This means all programs are guaranteed to terminate (no Halting problem). Programs are also strongly (and statically) typed, meaning that when your program compiles it will be guaranteed to behave correctly (barring hardware malfunction). Essentially this means we have made the large majority of bugs impossible to make in this language.

Of course creating a trivial language that is strongly terminating and statically typed is pretty easy. What Epigram is attempting to do is trying to add enough flexibility to create all programs we care about. Not being Turing complete we can't, by definition, create all programs. Of course if we can make all programs we want to make, the inability to create all programs is irrelevant.
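To make "strongly terminating" concrete: such languages accept only recursion that is structurally decreasing. In Haskell syntax (GHC itself does not check termination, but this is the shape a totality checker wants):

    data Nat = Z | S Nat

    -- Every recursive call peels one constructor off the first argument,
    -- so a totality checker can see this terminates on all inputs.
    plus :: Nat -> Nat -> Nat
    plus Z     m = m
    plus (S n) m = S (plus n m)

    -- General recursion with no structural decrease would be rejected:
    -- loop :: Nat -> Nat
    -- loop n = loop n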

Now you might not think these benefits make a language superior to other languages, but I (and the blog post's writer) do. As he mentions in the post, if we add the ability to lessen these guarantees (i.e. make the language more dynamic on request), then we have all upsides (fewer bugs!) and no downsides, since we can adjust our guarantees down to be less strict and provide more freedom as requested. Lisp can't do the same in reverse, as adding the same static guarantees to Lisp is a beyond-Herculean task.


> Essentially this means we have made the large majority of bugs impossible to make in this language

Oh, please. Such a statement ignores the actual kinds of bugs that plague real complex systems. The abilities of static type systems to prevent bugs is dramatically overstated. Consider this:

http://hackage.haskell.org/trac/ghc/query?status=new&sta...

Now, we all have our bugs, and this is not to beat up on Haskell for having them, but presumably they use their own tools to the greatest extent possible to ensure correctness. What type system is remotely close to being able to help prevent the kinds of bugs in this list? None, and none will be any time soon. Because the biggest problems in software are problems of misconception - we've (usually correctly) implemented an incomplete or incorrect idea!

Type systems focus on solving the problems we can solve. Unfortunately, they're not the problems we have.


I spent 2 days last week hunting a bizarre bug that turned out to be ultimately due to a type ambiguity in how JDBC maps Java types to Oracle types. Admittedly this is not a language issue (well, it is on a deeper level), but it's the same sort of problem.

I don't know how much time I've lost in Java to checking, catching, and debugging stupid RuntimeExceptions that would easily be fixed by having a good type system, or writing idiotic boilerplate design patterns that you simply would not have to write if you had a good type system (or were using Clojure).

No language in the world will save you from having to think about your program's desired behavior, but that doesn't mean we should make it hard on ourselves to even implement the behavior we think we want.


> No language in the world will save you from having to think about your program's desired behavior, but that doesn't mean we should make it hard on ourselves to even implement the behavior we think we want.

Therein lies the rub. Is there no cost to static typing? Is it not complex? Is it sufficiently expressive? Does it never make it harder to implement the behavior we think we want?

Should everyone write everything in every program in as provable a way as possible? If not, why not?

As the example here shows, there is a cost in effort expended and complexity:

http://news.ycombinator.com/item?id=2062910

I'm quite glad people are working on these things, and they have genuine benefits. But to overstate the benefits and ignore the costs (as did the post to which I replied) is not productive.


>Does it never make it harder to implement the behavior we think we want?

I should ask why you're aiming at "never". I prefer to quantify "how often does it make writing harder"; that is more useful.

I think that types can make writing the behaviour we want even easier. For example, you could use "session types" and be sure that your routers always understand each other. You can write a complex algorithm while establishing its complex invariants in types and be sure you're correct.


Getting back to the original blog post, this is where an adjustable type-strictness system would come in handy.


I agree that this won't solve all bugs, maybe not even the majority. But I am convinced it eliminates a significant number of bugs. Personally I find that an expressive type system can aid significantly in discovering your own misconceptions and clarifying what you intend to do.

When coding Haskell I tend to start by envisioning the type I need my program to have and then fill the code from there.


> meaning that when your program compiles it will be guaranteed to behave correctly

For very weak notions of "correctly".

If my task is to build a factorial function, Epigram will not prevent me from writing a Fibonacci function.


If Epigram is not Turing complete, then what kind of programs cannot be expressed in it? Is the aim of the developers to add TC at some point, or is it a deliberate design choice to remain non-TC?


By the way, I am currently doing a literature study on type theory for my studies, and I was annoyed by the lack of introductory material for programmers wanting to learn more about it. I've been thinking of writing a "Type theory from first principles for dummies", but I was unsure how much interest there'd be in something like that. If you (or other people here on HN) would be interested, let me know...


I think that would be a very valuable article. Right now people are basically stuck with TAPL [1], which (while being a fantastic book) is pretty meaty and not exactly cheap. I've been examining the type theory literature myself over the last couple of months (admittedly more from a mathematical angle than a programming one), and there is definitely a dearth of good introductory material. Girard's Proofs and Types is a great book, but it's not easy going and doesn't have a huge amount of direct relevance to programmers. In case anyone else is interested (merijnv, I assume you've glanced at all of these) I have a small bibliography of type theory books [2].

[1] http://www.cis.upenn.edu/~bcpierce/tapl/

[2] https://gist.github.com/716834


I'm also currently interested in this area, and I would recommend Interactive Theorem Proving and Program Development. It is at once practical and theoretical.

http://www.labri.fr/perso/casteran/CoqArt/index.html


Benjamin Pierce has a course on the mathematical theory of programming languages, using the Coq Proof Assistant:

http://www.cis.upenn.edu/~bcpierce/sf/toc.html

Adam Chlipala's 'Certified Programming With Dependent Types' is a textbook about practical engineering with the Coq proof assistant. The focus is on building programs with proofs of correctness, using dependent types and scripted proof automation:

http://adam.chlipala.net/cpdt/



> If Epigram is not Turing complete, then what kind of programs cannot be expressed in it?

I think one quick answer for questions like this is "an interpreter for $LANGUAGE". The reason is that, for a language in which all programs terminate, a type-checked program is a proof that the algorithm terminates. Thus, an interpreter for $SOMEPL in $LANGUAGE is a proof that there is an interpreter for $SOMEPL that terminates on all input.

If $SOMEPL is $LANGUAGE, you get a Goedel/Halting-problem problem.

This is all, of course, assuming that "interpreter" means "function taking as input a valid program and outputting some finite data containing its normalized form". If you define an interpreter to just be a function that performs one step of evaluation, you can get away with it.
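To illustrate: a single-step reducer for a toy expression language is easy to define totally, since each call does at most one rewrite (a sketch; for a Turing-complete object language only this stepper, not the run-to-normal-form function, would be definable):

    -- One reduction step; Nothing means "already in normal form".
    data Expr = Lit Int | Add Expr Expr

    step :: Expr -> Maybe Expr
    step (Lit _)               = Nothing
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
    step (Add a b)             = case step a of
                                   Just a' -> Just (Add a' b)
                                   Nothing -> fmap (Add a) (step b)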


EDIT: HN ate my asterisks

Disclaimer: This post contains broken pseudo-syntax and pseudo type theory; feel free to make corrections if you are smarter than me.

Epigram (as of yet) is not truly aimed at being a practical language in the hacker sense. It is a research vehicle trying to establish a practical type theory to program in (i.e. trying to find the sweet spot of how much freedom we require to write useful programs). My knowledge of the underlying theory is still somewhat limited, so I don't dare say which programs cannot be written in it.

I would say the lack of Turing completeness is a deliberate design choice. Turing completeness and strong termination are mutually exclusive. Introducing Turing completeness automatically means introducing the Halting problem.

As mentioned in the blog post, (one of) the theories behind Epigram is "dependent typing". I agree with the writer that this will be the most exciting thing in programming since... well, ever. What does dependent typing mean? It means the type of a function can depend on the values of its input. Too abstract? Let's see if I can clarify:

Haskell currently supports types depending on types (note: a lot of this won't be actual valid Haskell, but pseudo-Haskell syntax), a classical example being the polymorphic list:

    data List a = Nil | Cons a (List a)
This is a type which depends on another type. We have "List a" for any given type a. Canonically the type of a type is called * . That is, 1:Int means "1 has type Int" and Int:* means "Int has type * ". Then what is the type of List? It is of course "forall a : * . * ", which you could interpret as a function taking a type and returning a new type: List :: * -> * .
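You can check this in GHCi:

    Prelude> :kind []
    [] :: * -> *
    Prelude> :kind [Int]
    [Int] :: *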

However, this is not flexible enough to describe some things we want to be able to describe. For example, in Haskell we have the function "head", returning the first item in a list:

    head :: [a] -> a
    head (x:xs) = x
Now, this function has a problem. What do we do when we call head with the empty list as its argument? Haskell right now throws an exception, but the compiler should know how long a given list is, and it should know that we cannot call head on a list of length 0.

Whatever shall we do? We could painstakingly verify by hand that we never call head on a list of length 0. But we programmers are lazy and hate doing these things. We could waste our lives writing TDD tests with 100% coverage to ensure we never accidentally call head with an empty list, but as lazy programmers we are too lazy to do this as well. If only we could make the type of a list depend on its value, thus encoding in the type system whether calling "head" with any given list is safe or not.

Dependent types to the rescue! As mentioned earlier, with dependent types the type of a list can depend on a value. Haskell as yet does not support this (a partial lie: Haskell has some limited dependent-typing support). In Haskell I could not say:

    data List a n = Empty | Cons a (List a n-1)
However, in a dependently typed language such a definition is in fact possible. The type of List would then be something like List :: * -> Int -> * . We can then redefine head to have a type that only accepts lists of type "List a n" where n > 0. This means that passing a list which potentially has n == 0 is a type error.

The compiler will verify at compile time that your code can never pass a list of length 0 to head. If it is possible, this is a type error and your program will not compile; if it is not possible, we have now effectively eliminated the possibility of an empty-list exception from our program, without any tedious manual testing. Yay for science!
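Haskell's GADTs can already approximate the length-indexed list - a sketch using the same type-level Z/S trick as the red-black tree code elsewhere in this thread; taking the head of a possibly-empty vector simply doesn't type-check:

    {-# LANGUAGE GADTs, EmptyDataDecls #-}

    data Z
    data S n

    -- The length of the vector is part of its type.
    data Vec n a where
      VNil  :: Vec Z a
      VCons :: a -> Vec n a -> Vec (S n) a

    -- vhead only accepts vectors of length (S n), i.e. at least one.
    vhead :: Vec (S n) a -> a
    vhead (VCons x _) = x

    ok :: Int
    ok = vhead (VCons 1 VNil)    -- type checks

    -- bad = vhead VNil          -- rejected at compile time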


Thought 1 - but doesn't that mean you immediately run up against the halting problem? Surely we could create a type representing Program, and a function 'halts'. No compiler is going to be able to tell us which programs halt.

Thought 2 - Aha, is this the reason why the language isn't turing complete, so we KNOW that we never run into that difficulty? In other words, the language is constrained somewhat so that we can do these dependent types, because we can then guarantee there's always some calculation the compiler can do ahead of time to ensure all cases will work.


You are correct: the language isn't Turing complete precisely to guarantee that it is strongly terminating. The language being strongly terminating means that 1) there is a Normal Form for the type of an expression and 2) the number of steps to reach Normal Form is finite. All programs definable in Epigram have a normal form (i.e. a result), and this normal form is reachable in finite time. Therefore hanging and infinite loops are impossible.

However, these aspects are unrelated to the fact that Epigram is dependently typed. We could easily implement a Turing complete dependently typed language or implement a non-dependently typed language which is not Turing complete.

I'm not entirely sure what the problem is you are thinking of in thought 1. Could you elaborate?


I started to write out what I was asking and then I realized the flaw... never mind.


It seems to me that it would be possible to create a NonEmptyList type in any language simply by inheriting from the regular List type and making sure the constructor and the mutators (if any) enforce the non-empty constraint. Presumably a language that supports dependent typing would make the definition of such a type easier (I'm not familiar with any language that supports dependent typing).

However, the problem I see with that kind of fine-grained typing is that you would have to convert types in many places, as many lists that have to be non-empty in one place might be naturally empty in other places. If that is the case, the potential defect is just moved to some other part of the code.


For any given Haskell Compiler I would think that I can construct a program where the Haskell Compiler won't be able to determine, in polynomial time, whether the list has length 0 or not.


Have you read the post you are commenting on? Epigram does not even try to be a complete language.


> For any given Haskell Compiler I would think that I can construct a program where the Haskell Compiler won't be able to determine, in polynomial time, whether the list has length 0 or not.

Ok, sure, but in total languages (that is, languages in which all functions are total), compilers generally don't just continue running if they can't figure out that some list you've applied "head" to is non-empty. They generally exit with a type error, complaining that you haven't proven that your list is non-empty.


> when your program compiles it will be guaranteed to behave correctly

This is only true if you give your program a strong enough type, and giving your program a stronger type may make it harder to write.

For instance, here is a 14-line version of insertion in red-black trees, due to Okasaki:

    data Color = Pink | Gray
    data RBT a = Emp | Full Color (RBT a) a (RBT a)
    
    insertRB x s = makeBlack (ins s)
        where ins Emp = Full Pink Emp x Emp
              ins (Full color a y b) | x <  y = balance color (ins a) y b
                                     | x == y = Full color a y b
                                     | x >  y = balance color a y (ins b)
              makeBlack (Full _ a y b) = Full Gray a y b
    
    balance Gray (Full Pink (Full Pink a x b) y c) z d = Full Pink (Full Gray a x b) y (Full Gray c z d)
    balance Gray (Full Pink a x (Full Pink b y c)) z d = Full Pink (Full Gray a x b) y (Full Gray c z d)
    balance Gray a x (Full Pink (Full Pink b y c) z d) = Full Pink (Full Gray a x b) y (Full Gray c z d)
    balance Gray a x (Full Pink b y (Full Pink c z d)) = Full Pink (Full Gray a x b) y (Full Gray c z d)
    balance color a x b = Full color a x b
I munged the names a bit so it can be put in the same module as the code below: the same algorithm, but in a way that ensures that the balance conditions are met. This is 95 lines:

    {-# LANGUAGE GADTs, ExistentialQuantification, EmptyDataDecls, StandaloneDeriving #-}
      
    data Z
    data S n
    data R
    data B
    
    data Tree count color a where
        Nil :: Tree Z B a
        Red :: Tree n B a -> 
               a -> 
               Tree n B a -> 
               Tree n R a
        Black :: Tree n lc a -> 
                 a -> 
                 Tree n rc a -> 
                 Tree (S n) B a
    
    --deriving instance Show a => Show (Tree n c a)
    
    data High n a = forall c . High (Tree n c a)
    data Set a = forall n . Set (Tree n B a)
    
    --deriving instance Show a => Show (Set a)
    
    data T n c a where
        N :: T Z B a
        R :: Tree n cl a -> 
             a -> 
             Tree n cr a ->
             T n R a
        B :: Tree n cl a -> 
             a -> 
             Tree n cr a ->
             T (S n) B a
    
    data H n a = forall c . H (T n c a)
    
    lbbalance :: H n a -> a -> Tree n c a -> High (S n) a
    lbbalance (H (R (Red a b c) d e)) f g = High $ Red (Black a b c) d (Black e f g)
    lbbalance (H (R a b (Red c d e))) f g = High $ Red (Black a b c) d (Black e f g)
    lbbalance (H (R a@Black{} b c@Black{})) d e = High $ Black (Red a b c) d e
    lbbalance (H (R Nil b Nil)) d e = High $ Black (Red Nil b Nil) d e
    lbbalance (H (B a b c)) d e = High $ Black (Black a b c) d e
    lbbalance (H N) a b = High $ Black Nil a b
    
    rbbalance :: Tree n c a -> a -> H n a -> High (S n) a
    rbbalance a b (H (R (Red c d e) f g)) = High $ Red (Black a b c) d (Black e f g)
    rbbalance a b (H (R c d (Red e f g))) = High $ Red (Black a b c) d (Black e f g)
    rbbalance a b (H (R c@Black{} d e@Black{})) = High $ Black a b (Red c d e)
    rbbalance a b (H (R Nil c Nil)) = High $ Black a b (Red Nil c Nil)
    rbbalance a b (H (B c d e)) = High $ Black a b (Black c d e)
    rbbalance a b (H N) = High $ Black a b Nil
    
    lrbalance :: High n a -> a -> Tree n B a -> T n R a
    lrbalance (High (Red a b c)) d e = R (Red a b c) d e
    lrbalance (High (Black a b c)) d e = R (Black a b c) d e
    lrbalance (High Nil) a b = R Nil a b
    
    rrbalance :: Tree n B a -> a -> High n a -> T n R a
    rrbalance a b (High (Red c d e)) = R a b (Red c d e)
    rrbalance a b (High (Black c d e)) = R a b (Black c d e)
    rrbalance a b (High Nil) = R a b Nil
    
    hhigh :: High n a -> H n a
    hhigh (High Nil) = H N
    hhigh (High (Red a b c)) = H (R a b c)
    hhigh (High (Black a b c)) = H (B a b c)
    
    insertTree :: Ord a => a -> Set a -> Set a
    insertTree x (Set y) =
        case insert x y of
          H N -> Set Nil
          H (R a b c) -> Set (Black a b c)
          H (B a b c) -> Set (Black a b c)
    
    insert :: Ord a => a -> Tree n c a -> H n a
    insert x y@Nil = hhigh $ insertBlack x y
    insert x y@Red{} = H $ insertRed x y
    insert x y@Black{} = hhigh $ insertBlack x y
    
    insertRed :: Ord a => a -> Tree n R a -> T n R a
    insertRed x (Red l c r) =
        case compare x c of
          LT -> lrbalance (insertBlack x l) c r
          EQ -> R l c r
          GT -> rrbalance l c (insertBlack x r)
    
    insertBlack :: Ord a => a -> Tree n B a -> High n a
    insertBlack x Nil = High $ Red Nil x Nil
    insertBlack x (Black l y r) =
        case compare x y of
          LT -> lbbalance (insert x l) y r
          EQ -> High $ Black l x r
          GT -> rbbalance l y (insert x r)


I agree that this is only true when the program is given a strong enough type, and that giving such a type may be too cumbersome. In fact, languages that can do most of these things already exist (Coq and Agda, for example), but they are indeed too cumbersome.

That is why we need people experimenting with new languages like Epigram, attempting to find a way of writing these programs in such a way that these strong types don't become too cumbersome.


These strong types will always be somewhat cumbersome, because they are the proof that your program is correct in certain respects, and non-trivial proofs are not free. When you hear someone use the phrase "correct by construction," you should ask what you do when you accidentally construct the wrong thing. I'm afraid the answer is that you "correct by reconstruction."


Natural language has ambiguity. That kills its effectiveness as a programming language.


It would be interesting to see how many Haskell programmers are also Lojban speakers.


Funny you should mention that. There was a survey of favoured web frameworks[1] in the Lojban community, and Haskell with Snap won[2].

1: http://www.surveymonkey.com/s/T6B69LK

2: http://teddyb.org/~rlpowell/public_media/lojban_survey.PNG


Solvable if the compiler can initiate an intelligent conversation. Like, "What you said is ambiguous. Did you mean? ..."


That would be awesome! Perhaps we could have an animated character have that conversation with us... how about an intelligent paperclip or something?!

Wouldn't programming become so cool!?!!


I hope in 20 years we look back and say 'duh, of course'. And yeah, it would be so cool.


I feel like I should introduce you to Agda2: http://wiki.portal.chalmers.se/agda/pmwiki.php

You write something, Agda2 checks it and says that you forgot something, wrote inconsistent code, or wrote ambiguous code (it is possible to hide some parts of the code).


Do you envision search engines becoming Turing complete at some point in time? ;-)


I don't know why you ask that. Are you suggesting that it is difficult to determine whether a statement is ambiguous or not?


It was a half-joke, actually -- but there is a point to it, to some extent.

I was referring to the fact that (web) search engines often display `Did you mean X?' when the query string contains a symbol they don't know. Also, in a way, a search engine does interpret the query as a (simplistic) program of sorts. And, once in a while, it'd be handy to be able to issue queries to search engines in a more powerful query language... So I saw a weak analogy there, between a search engine and a compiler with the `Did you mean X?' feature.


You appear to be confusing "Turing complete" and the "Turing test". These two concepts are related only by the man who proposed them; they are quite separate classes of concept.


Natural language is hardly appropriate for the things we use computers for, primarily because of its inherent ambiguity. It's not a language for expressing algorithms, which is precisely what one uses a computer for. If we expect to code in a language that is close to natural, we should also expect the error rate of natural systems.

But then Dijkstra said it much better: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EW...


I recall a story about some programming language which tried hard to correct errors. The programmer wrote 'Hello World' with a minor syntactic error. The program output '7', because the compiler had 'fixed' the syntax.

If you have ever used LaTeX, you have experienced the pain of a compiler trying to fix your program. The result is a large number of errors with little correspondence to the actual location of the problem.

Additionally, it seems that a common theme in style guides is to write for the person who will be trying to update your code in three years, after the requirements have changed. I would expect that for that you want as much static checking as possible, as it makes for less state to keep in your head.


What we really need is for some people to turbocharge ghcjs (http://news.ycombinator.com/item?id=1818820), so that we can get optional static type checking in Javascript.

Haskell is great at parsing[1] and great at type checking. So how about having it parse the relatively simple JS grammar[2] and do some static checking on JS code?

You could augment existing JS code with an optional accepts/returns syntax to provide type annotations:

  foo = function(x) { return [x+1,x+2]}
  foo.accepts = Numeric
  foo.returns = [Numeric, Numeric]
This is similar to Collin Winter's typecheck for Python (http://pypi.python.org/pypi/typecheck/), which has some nontrivial bugs but which is conceptually interesting.
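To sketch the checker's core (all names here are invented for illustration; a real tool would parse the JS and collect these annotations from the accepts/returns properties):

    -- A hypothetical model of the annotations as plain data.
    data JsType = Numeric | Str | Tuple [JsType]
      deriving (Eq, Show)

    data Annot = Annot { accepts :: [JsType], returns :: JsType }

    -- Check one call site against a function's annotation.
    checkCall :: Annot -> [JsType] -> Either String JsType
    checkCall ann args
      | args == accepts ann = Right (returns ann)
      | otherwise           = Left ("expected " ++ show (accepts ann)
                                    ++ ", got " ++ show args)

    -- checkCall (Annot [Numeric] (Tuple [Numeric, Numeric])) [Numeric]
    --   == Right (Tuple [Numeric,Numeric])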

[1] You could use Matt Might's new derivative parser if you want to have some fun while doing this (he has a Haskell implementation to boot!)

http://matt.might.net/articles/parsing-with-derivatives/

[2] Real World Haskell has a good JSON example that should be easily extensible to the full syntax: http://book.realworldhaskell.org/read/writing-a-library-work...

EDIT:

Doctor JS by Mozilla is a (very) good start here -- maybe it already does everything we need...

http://doctorjs.org


  Greenspun's Tenth Rule of Programming
  Any sufficiently complicated C or Fortran program
  contains an ad-hoc, informally-specified, bug-ridden, slow
  implementation of half of Common Lisp.
Replace C by Lisp and CL by Haskell? Rinse, lather, repeat every few decades?


FWIW,

> "Any sufficiently large test suite will contain an ad-hoc, informally-specified, bug-ridden, slow implementation of half of the Haskell type system" -- Smith's Law


This point has been argued far more eloquently and accurately by Cardelli and Wagner, "On Understanding Types, Data Abstraction, and Polymorphism".

http://lucacardelli.name/Papers/OnUnderstanding.A4.pdf

See Section 1.3 "Kinds of Polymorphism" (there is a typo in the paper and section 1.4 is mislabeled as "1.3" as well.)


Can you please give us some historical examples where this quote could be applied? This would be interesting to know.


AutoCAD and Emacs would be literal examples of "sufficiently advanced" C programs by this definition.

My guess is that it is intended to cover any C or Fortran program that includes an ad-hoc interpreter, an ad-hoc dynamic type system, or an ad-hoc garbage collector. That would include e.g. sendmail (at least an ad-hoc interpreter) and GCC (all three).

My guess is that most C programs (I don't know about Fortran) we would intuitively call "complex" contain one of the three technologies, making it harder to come up with a clear counter-example. Some might avoid the "ad-hoc" part by linking with a generic extension language like Tcl or Lua. Or even Common Lisp.


Well, almost any big traditional application (not really web apps) eventually grows a scripting language for extensions and/or user macros: Basic (in Office), Javascript and Java (in web browsers), Lisp (in the above applications, and probably many more), Lua (in a lot of games), AppleScript, that C thingy in Quake (QuakeC)...

Lisp is relatively easy to implement, powerful, and intuitive to non-programmers (compared to OOP - Python, Perl, and Ruby are also easy but nowhere near as old); so it's a common choice.


I didn't believe this "easy to implement" line, but over Christmas I wrote my own Lisp implementation on top of .NET and it came to about 350 lines of code.

That includes all the list operations, an arbitrary-precision number type, and a host of the usual built-in functions (add, subtract, multiply, divide, and, not, or, etc.).

Eye-opening to say the least!
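For anyone curious, the core of such an implementation is just eval over s-expressions. A few lines of Haskell give the flavor (a toy sketch, nowhere near a complete Lisp):

    -- A tiny s-expression evaluator: the "easy to implement" point
    -- in miniature.
    data SExpr = Num Integer | Atom String | List [SExpr]

    eval :: SExpr -> Integer
    eval (Num n)                  = n
    eval (List (Atom "+" : args)) = sum (map eval args)
    eval (List (Atom "*" : args)) = product (map eval args)
    eval _                        = error "unsupported form"

    -- eval (List [Atom "+", Num 1, Num 2, Num 3])  ==  6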


A real-life example. I worked on the call routing system that is used by large cable companies and a large satellite TV provider. When a call comes in, the computer dynamically routes the call to the best place to handle it. It needs to use all kinds of business rules to make the determination. For example, is there an outage in the customer's area? Are we running a special promotion in that area that this call is likely related to? Is the customer's account on hold with our credit department?

All of these decision points, along with what could happen depending on how those decision points get evaluated, end up being things that are defined by the end user (that is, by the IT department at the call center, rather than by us, the original developers of this software). For example, if there's an outage in their area right now, play this specific outage message, then prompt them to see if they want to do something else-- a dynamic rule that'll only be in place for a few hours till the outage is done. Before you know it, you've essentially implemented a programming language within the programming language.

This pattern ends up presenting itself again and again, in many systems that I've implemented. Other systems that I've done with user-defined rules that become essentially programming languages include fraud detection systems, call center flow guides/scripts, systems that help users customize their products for things like: engraving (for trophies or stationery), building computers, building ten-speed bikes, or configuring options for custom-built caskets.

Regarding the customization of caskets or couches or any other custom-built good, this also becomes very much like a programming language. For example, if the user selects leather, then these fabric protection options become available and cost this much, but if they select fabric, then Scotch guard and some other options become available. If they select this group of options, then the manufacturing will have to be outsourced to such-and-such company because they're the only ones that can do that process, but otherwise we manufacture it over here. And so on. All of these things we want to be set up by the people who purchased the software from us, and are not to be coded directly by us when we write the software.

This pattern of needing to express what amounts to code from within the application data becomes quite common once a system gets beyond a certain level of complexity.


Another example: complicated systems often get to the point where a great deal of their functionality becomes highly configurable - essentially either creating their own scripting language (less common today, but much more common a decade ago), or reading a great deal of data to determine their runtime configuration. Taken to an extreme, you get to where the lines between data and code start to blur. When the code being executed at runtime is really a series of functions dynamically strung together, you're arguably starting to make a Lisp-like system.
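The pattern in miniature: rules live as data and get compiled into a pipeline of functions at runtime (a sketch; the rule names are invented):

    -- User-defined rules arrive as data...
    data Rule = AddFee Double | ApplyDiscount Double

    -- ...and are interpreted into functions, then strung together.
    compile :: Rule -> (Double -> Double)
    compile (AddFee f)        = (+ f)
    compile (ApplyDiscount d) = (* (1 - d))

    runRules :: [Rule] -> Double -> Double
    runRules = foldr (.) id . map compile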


I'm not sure that Haskell could, given time, subsume dynamically typed programming. But even if it is possible, why would it be desirable? Most Haskell people don't think very highly of dynamic typing, and the extensions to the language reflect that. They generally add more typing, e.g. GADTs, multi-parameter type classes, extensible kinds, etc. And dependent typing is certainly not a move towards dynamic typing! Casting it as such shows a poor understanding of both concepts.


There's always a trade-off between the expressiveness of a programming language and its complexity. I think Lisp may (still) have a slight advantage when it comes to that balance, i.e., it is more accessible to novices. And adding more things to an already complex language like Haskell doesn't sound like it changes that balance in the latter's favor.



