Hacker News
Loop: a programming language for the JVM inspired by Haskell and Ruby (looplang.org)
87 points by dhanji on May 29, 2012 | 37 comments



Syntax looks very clean, though it reads more like CoffeeScript mixed with Ruby. "Inspired by Haskell" is going a bit far: you took pattern matching and `where` and none of the semantics.

Object patterns are absolutely awesome. Every OOP/FP hybrid ought to have that. Especially if you're not going to support algebraic data types. That point is worth bringing up, though: are they missing because of the presence of objects? I'm concerned about the interaction between pattern matching, duck typing and Java's static typechecker. Based on a quick glance, it's hard to see what that interaction is going to be like. At the very least, it does raise the question: what makes the pattern matching on lists and other built-in types special? In Haskell, for example, there's nothing magic about the cons operator and lists; it's just regular pattern matching on algebraic types.
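To make that last point concrete, here is a minimal sketch (function names are mine): a hand-rolled list is just an ordinary algebraic data type, and matching on it uses exactly the same machinery as matching on the built-in list.

```haskell
-- A hand-rolled list is an ordinary algebraic data type; the
-- patterns below are plain constructor patterns, with nothing
-- special about "cons" in the matcher.
data List a = Nil | Cons a (List a)

len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs

-- The built-in list works identically: (:) and [] are just its
-- two constructors, matched the same way.
len' :: [a] -> Int
len' []       = 0
len' (_ : xs) = 1 + len' xs
```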

I suspect your magic Nothing type is also necessary to make object pattern matching powerful, and I have some concerns about this kind of null safety. I wonder if programs are going to wind up turning into "gray goo" of Nothing, and what the debug scenario is for that case.

Exception handling looks a bit odd to me. Maybe it all works out in the end, but it looks like you're going to wind up with code like this:

    foo(x) except handler
      where
        handler(e) => 
          IOException : e.stuff
          ...
I think this is going to wind up being a bit gassy for local exception handling, where in Java you would see something like this, which avoids the intermediate name:

    try {
      foo(x);
    }
    catch (IOException e) {
      e.stuff
    }
    ...
Is it going to look better with closures?

    foo(x) except @(e) -> {
      IOException : e.stuff
      ...
    }
It's probably fine, but I dunno, most languages use more syntax in this area.

Overall, this is fairly impressive. It looks a bit like the greatest hits of Ruby and ML. The JVM has been host to a surprising number of interesting languages lately, and this is clearly one of the better recent offerings. Definitely worth a deeper look.


All the major OOP/FP hybrids I know support matching on objects. It is indeed a very powerful feature. See active patterns and extractors.


You must mean Scala and F#, which I haven't used. Haven't seen this feature in OCaml or Clojure.


I take issue with

   Haskell programmers may be familiar with this as a do 
   block, and for Schemers, this is the equivalent of a
   begin sequence. However, unlike Haskell, in Loop, a 
   sequence is guaranteed to execute in order.
A do block is syntactic sugar over >>= (bind) and is monadic in nature. And the IO monad (which is relevant to the example that involved printing things) is a construct which guarantees that the parts will execute in order. That's kinda the point. That he got it wrong makes me not trust the language. He shouldn't claim to have been "Inspired by Haskell" if he doesn't really know it.
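For the record, the desugaring in question looks like this. A small sketch using Maybe (names are mine); in IO it is exactly the same (>>=) plumbing that fixes the execution order:

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do notation...
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b
  safeDiv q c

-- ...is sugar for explicit (>>=); the two are the same program.
calc' :: Int -> Int -> Int -> Maybe Int
calc' a b c = safeDiv a b >>= \q -> safeDiv q c
```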

Also, in Haskell the pattern [x:xs] matches a list with one element, where that element is a list with at least one element, so [[1,2,3]] (== (1:2:3:[]):[]) gets matched with x = 1 and xs = [2,3]. That differs from Loop's pattern matching. But [x] still matches a list with only one element. What does [x:[]] match? If you are going by "[x:xs] matches [1] with x = 1 and xs = []", then surely [x:[]] will match as well.
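Haskell's answers here are checkable in GHCi (function names are mine):

```haskell
-- [x:xs] matches a singleton list whose single element is a
-- non-empty list: [[1,2,3]] gives x = 1, xs = [2,3].
nested :: [[Int]] -> Maybe (Int, [Int])
nested [x:xs] = Just (x, xs)
nested _      = Nothing

-- [x] and (x:[]) are the same pattern: exactly one element.
only :: [Int] -> Maybe Int
only (x:[]) = Just x
only _      = Nothing
```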

Also, if you are making a functional programming language, you know, one that uses recursion instead of for loops, why do you call it Loop? I didn't see one loop in any of the examples. It's just silly.

EDIT: Oh, and if Nothing really is a subtype of everything, then 5 + Nothing should typecheck, right? Not "It is a type error to attempt to compute with Nothing." Nothing is an Integer, since it subtypes everything. I don't understand. Is + special in hating Nothing? Nothing.add(5) would return Nothing, right? So why not Nothing + 5? Oh, but wait, Nothing is also a subtype of String, so Nothing + 5 means string concatenation, right? You did say a + b where a is a string casts b to a string, didn't you? It's inconsistent and I don't think it has the proper semantics.
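For contrast, Haskell's Maybe has no such ambiguity: Nothing is not an Int, so "5 + Nothing" is a compile-time type error, and any lifting over the absent value has to be spelled out. A sketch (addMaybe is my name):

```haskell
-- Nothing :: Maybe a is not an Int, so `5 + Nothing` simply does
-- not typecheck; propagation of the absent value is explicit:
addMaybe :: Maybe Int -> Maybe Int -> Maybe Int
addMaybe (Just a) (Just b) = Just (a + b)
addMaybe _        _        = Nothing
```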


I think it's quite misleading to list Haskell in the title: the only common thing between Loop and Haskell that I've found is pattern matching, and that's not even exclusive to Haskell -- other languages have it. Besides, it's not clear from the intro that Loop's pattern matching is as powerful as Haskell's (especially with all the extensions). Finally, pattern matching is really a syntactic sugar over the case/switch construct present in many, many languages. Although, I liked the type-patterns.


Pattern-matching is not sugar over "case".

Pattern-matching means that the branching primitive not only dispatches to different code based on an input tag, but also that it places different values of different types in scope according to the branch.

Most languages only branch on booleans, without gaining any type information at all. This is actually a big problem and relates to the nullability problem, explained at: http://existentialtype.wordpress.com/2011/03/15/boolean-blin...

For example, in C:

  switch(ptr) {
  case NULL: ... handle null case ...
  default: ... use ptr as if it weren't NULL ...
  }
This is unsafe -- because nothing prevents you from using ptr in the "NULL" case, and the compiler does not give you anything in the non-NULL case.

In Haskell:

  case ptr of
    Nothing -> ... can't use ptr as a value here,
                   it's wrapped with Maybe ...
    Just x -> ... pattern-matching gave us "x" of
                  the correct type.
                  We can now safely use it.


As Tyr42 has so astutely observed, I meant that pattern matches in function declarations, let-bindings and list comprehensions are desugared into the case expression. Also, it would be kind of pointless to compare C switch and Haskell case. In your example, Haskell 'wins' primarily because of a more powerful type system and algebraic data types (and the absence of pointers, hehe). That said, it's not like you can't emulate pattern matching (with placing nested values/subtrees in scope): https://gist.github.com/2832755 (it's in JavaScript, because I felt like it -- but I'm pretty sure you can devise something similar in C). Of course, it's plain ugly and doesn't carry any of the nice static guarantees that the Haskell type system will give you (like warning about incomplete patterns)... but, hey, it works. Anyway, the point of my comment was that pattern matching is hardly a feature characteristic of and exclusive to Haskell (ML/OCaml/F#, Coq, Scheme have it, probably other functional languages too). Although, it's still the one I enjoy every day :)


In a dynamically typed language, whether you do pattern matching or boolean-blind branching doesn't matter that much. You'll catch any error at runtime anyway.

In the presence of static typing, "emulating" pattern matching defeats the purpose, because the purpose of pattern matching is removing boolean blindness. With the "emulated" code you still get runtime-failing boolean-blind code.

By the way, pattern-matching is indeed very much related to sum types (part of "Algebraic data types"), which C lacks. To have pattern matching in a statically typed language, you would need to have sum types as well.


Well, he could be talking about

    reverse [] = []
    reverse (x:xs) = reverse xs ++ [x]
being sugar for

    reverse list = case list of
               []     -> []
               (x:xs) -> reverse xs ++ [x]
Since in that case, pattern matching on arguments is really just sugar for a case statement.


case is pattern-matching, though.

He said pattern-matching was sugar for "case/switch construct present in many, many languages". That implies pattern-matching adds only syntax to the game, and that it doesn't add any useful things beyond the "switch" you find in C or Java, for example.


I was looking for a forgiving interpretation of his statement. It's better to assume someone is right, and a little unclear, than just outright wrong.


Sure, and I'd do the same if he said it was just sugar over "case", but he explicitly said "case/switch found in many languages", which makes it pretty definite IMO.


Thanks a lot for sharing this - particularly the full source code of your implementation. It looks very clean and well written.

I personally like that you wrote your own recursive descent parser and lexer rather than using some tool to generate one. This is my preference too.

The result is that the source code makes for a great reference for anyone thinking of implementing a language on top of the JVM.

Pay no heed to the "bah humbug" comments elsewhere on this story - I think what you've done is great.


Perhaps this isn't the place for the question, but I'll ask anyway. Is there any JVM-based language that refuses to compile any code not directly implemented in that language? For example, one that would deny the use of java.io or java.lang?

The reason I ask is because Clojure and Scala both suffer from the ability to call legacy code. While some think this is a good idea, it makes the whole scalable, safe/functional paradigm completely unstable. Is there any language out there that forces the developer to use only things written in the specific language itself, since that would be the only safe platform?


I haven't heard of one that does this. In fact an oft stated selling point of JVM based languages is the ability to call and interact with existing Java code and libraries.


Haven't had the chance to download and look through it but an example caught my eye.

Was anyone else confused that this,

    # filter list => [ 'ipod', 'ipad', 'iphone' ]
    ('i' + p) for p in ['mac', 'pod', 'pad', 'phone'] if p.startsWith('p')

applies a filter against the list, rather than limit what gets prefixed with an 'i'?

I'm assuming then there's some code to keep the mapping and the entire list as well. Something like this?

    ('i' + p) if p.startsWith('p') for p in ['mac', 'pod', 'pad', 'phone']

It just seemed surprising to me that an if clause can cause a change in the list length.

Edit: Ah this seems to be how filtering works in python
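Haskell's list comprehensions behave the same way: the guard filters the generated elements, so the result can be shorter than the input. A rough Haskell rendering of the site's example (my rewrite, using Data.List.isPrefixOf):

```haskell
import Data.List (isPrefixOf)

-- The guard filters: only elements starting with 'p' survive,
-- so the output list is shorter than the input list.
iProducts :: [String]
iProducts = [ 'i' : p | p <- ["mac", "pod", "pad", "phone"]
                      , "p" `isPrefixOf` p ]
-- ["ipod","ipad","iphone"]
```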


The list comprehension rules seem to be lifted straight from Python, where the "foo for bar" part always comes first.


> Closures, or anonymous functions, are quite useful...

Apparently these days one doesn't need to know the difference between closures and anonymous functions to be a language designer.


The name is unfortunate, especially when the documentation sometimes omits to capitalize "Loop", which introduces even more confusion than there already is.


The name is unfortunate indeed. There were languages like LOOPS (Lisp Object Oriented Programming System), which evolved into CLOS. Common Lisp's loop macro is a kind of special DSL of its own. And the oldest LOOP language may be from a 1967 article by Meyer and Ritchie (the DMR of Unix), a kind of mini language for computing primitive recursive functions and dealing with computational complexity.


Good to know there are newer takes on JVM based languages! I like its minimalist design as opposed to being a kitchen sink language. Good luck!


> Being functional in nature, Loop doesn't have any of the baggage of the host platform (Java)

I don't see much that's functional in this language, but this statement made me laugh. It looks like a lot of Java's baggage is still alive and well in this language, like exceptions (quite un-Haskell-like).


> like exceptions (quite un-Haskell-like)

Haskell has exceptions [1].

[1] http://www.haskell.org/haskellwiki/Exception


It has them, but I don't see them being used very often. More commonly just a Maybe will do. And really, monadic exceptions aren't like java exceptions. I would call them un-Haskell-like.
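The Maybe style in question, sketched (parsePort and its bounds are my invention, using Text.Read.readMaybe):

```haskell
import Text.Read (readMaybe)

-- Failure is an ordinary value the caller must pattern-match on,
-- rather than an exception unwinding the stack:
parsePort :: String -> Maybe Int
parsePort s = case readMaybe s of
  Just n | n > 0 && n < 65536 -> Just n
  _                           -> Nothing
```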


Looks great. I wonder why the author went for a specific syntax for interned strings, as opposed to doing what Java does (automatically interning String literals).


Another cool language from the past, 'nice', had a similar issue that sentenced it to obscurity. A totally ungooglable name :-(


Do you really believe that was the reason? Is C very googlable? Or how about python or ruby?


  Sequences  

  increment(num) ->
    print(num),
    num + 1
This is considered to be one of the worst parts of Erlang syntax. Too bad Loop borrows it.


Now available on Arch Linux via AUR: https://aur.archlinux.org/packages.php?ID=59624


Capital-J Java as in 'Java -version'? Examples should be tested. @ is heavily overloaded. Other than that, worth a second look.


I wish it were ported to the Android platform.


There is nothing like it out there, but no resemblance with Haskell :D


It's awesome!


Hmm, going to have to look into this later.


> Polymorphism in this instance is simply allowing you to call the same function with an integer and string respectively.

Er... no, that's not polymorphism, that's overloading.


In computer science, polymorphism is a programming language feature that allows values of different data types to be handled in a uniform manner.

This may be ad hoc polymorphism (often synonymous with overloading) - or parametric polymorphism (as in ML or Haskell); but it's unclear what they actually do without some language semantics.


And to be more complete, OO-style polymorphism is usually referred to as "subtype polymorphism", and the parametric polymorphism dons talked about is like C++ or Java "generics".
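The two flavors, sketched in Haskell (names are mine):

```haskell
-- Parametric polymorphism: one definition that works uniformly
-- at every type; it cannot inspect its argument.
pairUp :: a -> (a, a)
pairUp x = (x, x)

-- Ad hoc polymorphism (overloading): a type class with per-type
-- implementations, selected by the typechecker.
class Describe a where
  describe :: a -> String

instance Describe Int where
  describe n = "int: " ++ show n

instance Describe Bool where
  describe b = "bool: " ++ show b
```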



