Why Perl 6 is different (perl.org)
152 points by fogus on April 20, 2010 | 99 comments



There's a presentation that Larry Wall gave at Google that has a lot to do with the internals of Perl 6: http://www.youtube.com/watch?v=JzIWdJVP-wo Actually, to be honest, I don't recommend it; I found it surprisingly dull in delivery (sorry, Larry), but the contents were very interesting.

I actually think describing Perl 6 as "having macros" is accurate, but gives the wrong impression. It doesn't "have" macros, it's made of macros. The whole language is made of macros and you are nearly begged to extend it with more grammar. Or you can throw theirs away and write your own for something. Or you can throw theirs away, take some of it as a base, and extend it.
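
The grammar machinery the language itself is built on is exposed to user code, so as a quick, hedged illustration of the flavour being described, here is a minimal user-defined Perl 6 grammar (the grammar and its rule names are invented for illustration):

    # Hypothetical example: a small user-defined grammar
    grammar Greeting {
        token TOP  { 'Hello, ' <name> '!' }
        token name { \w+ }
    }
    say Greeting.parse('Hello, World!')<name>;   # matches and captures "World"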

It's a wild, wacky experiment that needs to be done. Whatever happens, we'll learn a lot. My intuition says "collapses under its own complexity" (especially after trying to add a few grammar-modifying CPAN modules) is the most likely result, but I fully support with no sarcasm someone trying this out in case my intuition is wrong. It's not just Lisp redux, it's something new.


especially after trying to add a few grammar-modifying CPAN modules

How do modules compose with respect to syntax? I remember encountering a big frustration in Common Lisp when I used two libraries that depended on defining '[' as a reader macro for pretty syntax. Whichever library I allowed to use the '[' reader macro, my usage of the other library became verbose and hard to read.

Not to mention that neither altering a module's source nor depending on the sequence of module loading is scalable across many modules.


They are lexically scoped, so as a non-expert, I guess they are closer to Scheme than to CL.


The whole language is made of macros and you are nearly begged to extend it with more grammar. Or you can throw theirs away and write your own for something. Or you can throw theirs away, take some of it as a base, and extend it.

And people wonder why Perl is known as a "write-only" language!


Because it's a powerful language, and during the dot-com boom it was used by a lot of very poor programmers.

Rails is running into the same problem with overuse/abuse of monkey patching, method_missing, etc. that Perl ran into with overuse/abuse of rich syntax.

It's sad. But it's really nothing to do with either language. They just handed a large chunk of power to a developer community that grew too large too fast for most of the membership to have learned how to use that power responsibly.


So, do you think the key to Perl 6's success is to avoid that? That is, do you think it's better for Perl 6 to have a smaller, more seasoned developer community in the beginning (to establish some best-practices before the rest of us join in)?


Unless you manage to pull every novice, dabbler, and new developer into the community and teach those best practices from the start, you won't solve the problem on a large scale. Programming -- especially the realm of informal, ad hoc programming so well served by dynamic languages -- rewards individual exploration. You don't have to write elegant Python or Ruby or PHP or Perl to get your job done, and you don't have to learn a lot of theory to accomplish a task. You may make a mess, but it's not clear that novices care about that. Why should they?

Perl 6 has an advantage in that most of the tutorials and examples aren't full of bad examples and muddled thinking like most of the Perl 5 tutorials and examples are. I don't expect that to last, but hopefully some thoughtful language design will make abusing the language much more difficult.


No, the key to Perl 6's success is to convince all of the Perl 5 developers to move on to the new language. Also, if they (the Perl coders) go on a binge and port all the CPAN libraries over to Perl 6, that will help.


To be fair he is talking about the new Perl 6 language. Not the current Perl that everybody uses.


Well quite. Who wants to take over maintenance of a project that's effectively written in the previous developer's personal language?

Perl's not alone here; you can do the same thing with C++ using operator overloading, but Perl actively encourages this to an order of magnitude greater extent.


I use Python as my primary scripting language, but I spent some time looking at Perl 6 in the context of Parrot and it really has a lot of good ideas that the post mentions:

  - multi-methods
  - rules (builtin parsing capability)
  - macros
  - and a whole bunch of other good stuff.
It really made me want to use Perl 6 and Parrot. That said, the only reason I couldn't is perhaps the same reason I use Python over Perl 5: I somehow couldn't get myself to go with such a _rich_ syntax. But if Perl 5 is your thing then Perl 6 is something that you will definitely like.


The industry as a whole errs too much on the side of syntax as the focus of the community. Syntax should fade out of the way. It should be infrastructure, not features. Libraries are the proper object of focus. Libraries can be modified, forked, debugged without dividing a language community. Doing that to a programming language syntax weakens and divides such communities. In contrast, those actions on libraries actually strengthen programming language communities.


Syntax has two problems: (1) it is the most visible thing in programming languages, and (2) there is only one per language.

I believe we should be able to modify, fork, or debug syntax the same way we do libraries. So what about a syntax meant to be read by computer instead of by humans?

    # Factorial example, machine syntax
    fac
      -> int int
      \
        0 -> 1
        n ->
          *
            n
            - n 1
It could be displayed in human readable form by an IDE or converted back and forth by suitable preprocessors:

    -- Factorial example, Haskell syntax
    fac :: Int -> Int
    fac 0 = 1
    fac n = n * fac (n - 1)

    // Factorial example, C syntax
    int fac(int 0) { return 1; }
    int fac(int n) { return n * fac(n - 1); }
No more quibbling over prefix vs infix, curly braces vs indentation, familiarity vs terseness… That would take a flame war away from language design and put it where it belongs: the IDE.


I've seen some people try this. It has problems once you extend past Ye Olde Factorial Example. Try converting larger chunks of syntax and it rapidly becomes apparent that you are quite obviously just putting different skins on the same semantics, and it also quite rapidly becomes clear that having multiple skins is a disadvantage, not an advantage. All putting a new language in a skin that looks more "familiar" to you does is fool you into programming Old-Language-in-New-Language; you really need the syntactic differences to remind you that you are not in OldLanguage.

Try translating at the deeper semantic level and you also discover that you basically can't. The C-semantic equivalent of the Haskell is closer to a for loop, and even that isn't an exact match. (An exact match probably involves "goto".) Non-trivial examples just explode in complexity, and that's long before you get to useful code.

This is in the class of "things that have been possible for decades but haven't taken off for good reasons", right up there with the purely-visual languages and other classic "good ideas".


The "syntax as infrastructure" idea has been around in many moderately successful incarnations. (Lisp and Smalltalk, to name a couple.) Another way to think of this: shift Syntax into Meta-syntax, so it gets out of the way of the libraries.

No need for "syntax skins" in that case.


> you are quite obviously just putting different skins on the same semantics

That was exactly my intention. Translating "at the deeper semantic level" is reserved for actual compilation. Now, you may be right about syntax being useful for differentiating languages. My stance right now is that we should try (or look at the failed attempts you speak of; do you have any links?).

Testing my idea will require quite a bit of work: I need to write a compiler and an IDE with some kind of disciplined editing. I will also have to bootstrap all this stuff, so it passes the minimum credibility threshold. If I ever get to that point, I will (at last) be able to test my language for actual (f)utility.


What a great comment: deep, suggestive, and a pleasure to read from start to finish. I just read it three times.


I'm pretty sure the C example will not compile with a C compiler.

Which may seem like a nit, but the reason is that the semantics of Haskell and C are extremely different. Furthermore, the semantics strongly influence the design of the language. In Haskell, whitespace denotes function application, because the most common thing to do in Haskell is to apply a function. C has semicolon delimited statements, because imperatively executing statements one after the other is a very common thing to do in C programs. In Haskell, you cannot guarantee the exact order in which things happen without going to great lengths (monads, etc.).

In short, I don't think offering a skinnable syntax buys much; it is likely to just generate greater confusion.


Of course it won't compile with a C compiler. It wasn't meant to. I just invented a C-like syntax for pattern matching. I had to, so it could be translated to the machine version, which will be compiled. (As you may have guessed, such a compiler doesn't exist yet.)

On the benefits of skinnable syntax: look at C++, Java, and JavaScript. They have two things in common: their popularity, and their syntax. Coincidence? I think not. By stealing the syntax of C, they build on its original success.

I painfully know that C syntax isn't suited to functional code with gazillions of nested function calls. However, this is a known syntax. It makes learning far easier (or at least looks that way, so people are actually willing to try).


> C has semicolon delimited statements, because imperatively executing statements one after the other is a very common thing to do in C programs.

Actually, that would be an argument for making newlines delimit statements, like in, say, Python, instead of requiring extra work from the programmer.


Why? "imperatively executing statements one after the other" does not imply that they have to be on different lines.


They don't have to, but they are, most of the time. Knowing that, "newline as delimiter" yields the lightest syntax.


Plus optional semicolons to put some on the same line as in Python.


Congratulations, you have just invented... the compiler. ;-)

More seriously though:

Interestingly, of the two languages that come close to this model, Lisp with minimal syntax and Perl with very flexible syntax, one remained in academic obscurity while the other became wildly popular.


You're comparing apples to oranges when you compare Lisp to Perl based on popularity. Perl came on the scene just as Lisp was beginning to lose ground to Unix. They grew up in different worlds. Even then, Lisp never languished in academic obscurity. It was, for a while, very popular in the marketplace.


Wait, Lisp is a programming language. Unix is an operating system. Category error?


I think he's referring to the old Lisp Machines, which were in direct competition with the UNIX machines for a while. I recall reading some old rant about the new-fangled UNIX machines by a die-hard Lisp Machine hacker a long time ago (can't find it now). There was something in it about how amazingly quickly a UNIX machine could boot, which was good, since it had to do so rather often.

EDIT: it's the preface to the UNIX haters handbook, of course. A copy is up at http://www.art.net/~hopkins/Don/unix-haters/preface.html


Lisp Machines were never that much in direct competition with UNIX machines as such. They were competing with UNIX + Lisp. Research labs that programmed in Lisp bought Unix systems plus a Lisp environment (like Allegro CL), or bought a Lisp system (like Allegro CL) for their existing Unix environment. But Unix systems were used for quite different things than Lisp programming or running Lisp apps.

When Lisp Machines lost in the market, the Lisp users moved to UNIX + Lisp. Companies like Franz and Harlequin/LispWorks that came from that market still exist today.

Lisp was always used on different systems. When Lisp Machines were 'popular' (we are talking about fewer than ten thousand machines ever built and sold), Lisp was used on Windows, Macs, mainframes, Unix and other operating systems.

I should also mention that almost all Lisp Machines were single-user personal workstations - initially aimed mostly at AI programmers.


Sounds like what Charles Simonyi has been trying to do for the last two decades with Intentional Programming.

Of course, I couldn't make heads or tails of the small demos they've given.


Lots of things are human readable. Chinese for example.


Libraries are heavily influenced by the semantics of the language they're written in / designed for. It's quite easy to see this in the Clojure community today, by looking at which Java libraries are good from the Clojure perspective.

The goodness of a Java library has little to do with syntax (because Clojure can easily call any Java library), but with semantics. For instance, Java commonly uses mutable state. A fundamental principle of Clojure is that data should be immutable. Immutability makes testing easier, and it makes parallelization easier. Clojure was built to make pmap easy. If I have a Java library where I have to remember that Foos are thread-safe, but Bars are not, that adds to the incidental complexity of my solution, and reduces my ability to bring Clojure's new tools to bear on the problem.

As another commenter pointed out, syntax of a language is heavily influenced by the semantics of the language. A language starts with a few high level principles, and a favorite hammer or two. The syntax of the language is designed around making those common operations easy. Libraries written in that language will tend to follow those same principles.

Having Java libraries available to Clojure is a huge advantage, but it's not the be-all end-all, because the semantics of those libraries may be incompatible with your new language.

Instead, focus on semantics. Figure out which high level principles are good. What abstractions and design principles result in better, easier to understand solutions? From that follows your libraries and syntax.


While I agree with the general thrust of your focus on infrastructure -- one may argue that a lot of improvements from ignoring language have already occurred. .NET allows for multiple languages on a single infrastructure, I believe the Java VM does as well, and there is Parrot too.

I do think that there is a healthy focus on syntax -- at least my own experience bears it out. It was the syntax of doing things the Python way that allowed me to grok polymorphism and MVC despite programming in an OO language for .NET for years. I find that if I want to learn a concept, the syntax is extremely helpful. I also find that if I have to program quickly, syntax being close to how I think is helpful, and I get frustrated having to make yet another translation of my thought into a syntax that wasn't well thought out. I would also bet that a lot of people find syntaxes intuitive that I find counter-intuitive, and vice versa. And I'm fine with that! Just because there is difference doesn't mean there has to be division!


And Perl5 already has some of those features:

* multi-methods - There are two implementations on CPAN, Class::Multimethods (old) and MooseX::MultiMethods

* rules - Ditto. Perl6::Rules (old) and Regexp::Grammars

* macros - Have a look at Devel::Declare. Not Perl6 macros but still powerful stuff!

* and a whole bunch of other good stuff - Lots have been backported to Perl5 (directly and onto CPAN), e.g. smart match, given/when, Perl6::Junction, Perl6::Gather, and also things like Moose & Class::MOP which are heavily influenced by Perl6 (a couple of the native Perl6 forms are sketched below).
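
For flavour, here is a rough sketch of the native Perl 6 forms of two of those features (junctions and gather/take); illustrative only, not tested against a particular Rakudo build:

    # Sketch: native Perl 6 junctions and gather/take (illustrative)
    my @nums = 1..5;
    say 'found' if any(@nums) == 3;                    # junction: true if any element is 3
    my @squares = gather for @nums { take $_ ** 2 };   # collect values with take inside gather
    say @squares;                                      # 1 4 9 16 25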


For some value of "already":

* multi-methods: MooseX::MultiMethods and MooseX::Method::Signatures add a lot of overhead to subroutine calls (blogged in http://blogs.perl.org/users/aevar_arnfjor_bjarmason/2010/02/...). Sometimes this doesn't matter, but they're certainly not a replacement for having a native type system. All the magic is done at runtime.

* rules: Perl6::Rules and Regexp::Grammars are both depended on by a grand total of 0 CPAN modules. They're interesting experiments but not something in use, and not a replacement for rules, with its "your regex opcodes are a normal method" model.

* macros: Devel::Declare and the B::* modules are usable but they're a long way away from the Lisp model of being able to easily write macros. TryCatch is 800 lines of Perl and C to implement try { ... } catch { ...}. In any Lisp this would be trivially done with a macro in under 50 lines.

As for the other stuff: pretty much anything except smart match (which is now in the Perl 5 core), Class::MOP and Moose falls under "neat but some combination of: underused, slow, unstable, epic hack nobody wants to use".


MooseX::MultiMethods and MooseX::Method::Signatures and MooseX::Declare were apparently slow for the purposes for which you used them.

However I've used them in a number of production applications now and they've never shown up anywhere high on my profiles.

Perl is "very slow" compared to C. That doesn't make it universally a bad idea.


Perl is "very slow" compared to C. That doesn't make it universally a bad idea.

The same could be said of his "epic hack" comment. Everything is an epic hack. Especially Perl.

Devel::Declare is really not that scary.


...list of modules are neat but some combination of: underused, slow, unstable, epic hack nobody wants to use

That's a sweeping statement. Let's have a look...

* My opinion is only just that. The list is informative as to what prior art there is. Always go and test and come to one's own measured conclusion.

* I postfixed "old" to modules that may have issues.

* Everything else I use and have no issues with (as listed elsewhere in thread)

* Haven't used MooseX::MultiMethods much recently, but I do use MooseX::Declare regularly and find this and the whole Moose type system a godsend that I couldn't live without now.

* re: Regexp::Grammars is new because it's dependent upon 5.10 (replaceable RE engines). So it will have no other modules dependent on it yet, but it looks like a worthy successor to the venerable Parse::RecDescent to me.

* re: Devel::Declare - I've already said it's no "Perl6 macro", so comparing it to a Lisp macro is a tad unfair :)

Hopefully that covers my value of "already" :)


Huh, Moose is underused? There are a lot of modules on CPAN using Moose, and a lot of modules extending Moose.

Also, in what way is Moose unstable? Moose, and modules which use Moose, are used in production at a lot of places (if you use a recent Catalyst, you're using Moose).

Edit: doh, totally misread the OP's sentence.


It's not. I was talking about Perl6::Gather, Perl6::Junction etc.: "pretty much anything except smart match [...], Class::MOP and Moose falls under the ... underused".

So Moose is not underused. But some of the other modules that draegtun cited are. It's disingenuous to cite some crazy module Damian wrote that nobody uses as an example of Perl 6 features in Perl 5.


I use MooseX::Declare, Perl6::Gather, Perl6::Junction and Regexp::Grammars in production regularly so on that basis my listing them is totally sincere.

However, stating that I'm disingenuous to list them at all... is, well, a very disingenuous slant directed at me :(


Not what he said.


My understanding is that Perl 6 is the polar opposite of Lisp, in this sense: in Lisp, the code is always and everywhere already in a parse tree, and can be manipulated easily by, e.g., macros; Perl 5, on the other hand, is the most unparsable language possible: see http://www.perlmonks.org/?node_id=663393. What Perl 6 does is own the parsing problem, and respond by incorporating super powerful parsing tools. Unfortunately, I believe that Perl 6 retains its dynamic syntax, so parsing of Perl 6 itself is still an uncertain adventure, and not something one can incorporate unconstrained into one's metaprogramming. I'd love to be wrong about that; am I? If so, can someone recommend a tutorial on Perl 6 metaprogramming?


You don't have to parse Perl 6 's grammar ... the parser is written for you.

How it goes ... you add rules for new syntax, and your macro-handler receives a syntax tree. And you transform that into a regular AST that the interpreter can understand ... more or less as in LISP.

The difference is that in LISP you work directly with ASTs ... while in Perl you can access the compiler's pipeline.

This idea is not new ... for instance it's also implemented in Boo (boo.codehaus.org) ... although I don't know how effective it is.


Well, I'm just learning Perl (work related) and it certainly feels like they tried to include a lot of ready-to-use lisp-like macros in an otherwise traditional syntax when inventing Perl.

And Perl 5.12 has this jewel in its changelog:

New experimental APIs allow developers to extend Perl with "pluggable" keywords and syntax.


Which is already doable from CPAN using Devel::Declare, hence Method::Signatures::Simple, Sub::Curried, MooseX::Method::Signatures etc.

The stuff in core is, however, way, way more elegant than the way Devel::Declare currently does it - which was rather the point, Devel::Declare was "retrofit to existing perl5 VMs what will hopefully be core later".


With Perl 5.12 comes a working test example already using the pluggable feature: http://search.cpan.org/dist/perl/ext/XS-APItest-KeywordRPN/K...

Here is the code example from the docs:

    use XS::APItest::KeywordRPN qw(rpn calcrpn);

    $triangle = rpn($n $n 1 + * 2 /);

    calcrpn $triangle { $n $n 1 + * 2 / }


I have one simple test to determine if something is a Lisp. Does it fit the statement "The program is the data"? It is a simple statement that implies a whole lot of power in a language, like the ability for a program to create a new program.


Neither Io nor Ioke is a Lisp, but both fit the statement "The program is the data".

Also, I think the ability for a program to create a new program (I assume you mean on the fly) is probably inherent in all dynamic languages.


"Ability" doesn't imply "first-class support" or "good idea". In languages that aren't Lisp this usually ends up being done by passing a raw string to eval() which is extremely error prone. It's not something you do lightly.

Constructing macros in Lisp however /is/ something you do lightly. A typical Lisp program might have a 20/80 division of macros/functions. You aren't going to see a sane program in other languages that does the same thing.


Io & Ioke are like Lisp here. You have full access to the parse tree and don't need to do anything as crude as passing raw strings to eval().


> Constructing macros in Lisp however /is/ something you do lightly

For all the ease with which one can create macros in LISP, I haven't seen anything comparable to LINQ yet.

And while I've seen useful macros here and there, I haven't seen anything groundbreaking ... macros are mostly used when you need laziness.

Powerful? Yes. Easy to use? No.


If you haven't seen anything, did you actually look around?

Common SQL : http://www.lispworks.com/documentation/sql-tutorial/index.ht...

    (loop for columns being the records of
          [select [*] :from [SpeciesList]
                      :where aardvark]
          do (print columns))
Common SQL provides some kind of embedded SQL that gets expanded with read macros and macros to more low-level Lisp code.


I don't know anything about Perl 6 (or much about earlier versions for that matter). Can someone explain how close it has gotten to Lisp? What macro facilities does it have? Can you munge the syntax tree after code is parsed?


Very close from what I gather from the web. It has macros similar to Lisp. They take an AST and return an AST. http://perlcabal.org/syn/S06.html#Macros

Do you think that Lisp style languages with sexps are the end in language design, or do you think that something better will be invented? (or has been invented)


I would never say never, but I haven't been able to think of a better way to represent programs than s-expressions despite many years of thinking about the problem.

I wouldn't be surprised if sexprs are the end in the sense that the integers are. I.e. you might go on to discover new things but they'd ultimately reduce to sexprs.

Is there a textual representation for ASTs? Could you write programs directly in ASTs?


To your question: yes, the Perl syntax (the macro system supports quasiquotation). This is exactly the same as in Lisp. The difference is that Lisp syntax looks more like how the data structures are represented in memory. I think you can also build up your programs with data constructors (i.e. like doing (list 'lambda (list 'x) (list '+ 'x 1)) in Lisp). But note that I haven't used Perl6, this is information from the web.
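
To make the quasiquotation point concrete, here is a hedged sketch in the style of the S06 macro spec (macros were experimental at the time, so treat this as illustrative rather than guaranteed to run on a given Rakudo):

    # Illustrative only: an S06-style macro using quasi-quoting
    macro LOG($message) {
        quasi {
            note DateTime.now ~ ': ' ~ {{{$message}}};   # {{{ }}} splices the argument AST
        }
    }
    LOG("starting up");   # the quasi block is spliced in at compile time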


"AST" is actually just compiler lingo for "tree representation of s-expr representation of parsed code".

In other words: sure, you can write code directly in {ASTs, s-exprs}; it's called "Lisp". ;-)


You can add syntax rules to Perl's parser. And you can add handlers for when a new syntax rule is triggered. The handler then returns either a string or a syntax tree.

Don't know how Perl is doing it and to what extent (only saw some samples), but I've seen it implemented in Boo (http://boo.codehaus.org/) ... where you have access to the entire compiler's pipeline.


I love Perl. I am most fluent in Perl. I can hack things together in Perl using CPAN modules without pausing, without thinking, and I am most efficient using it. I am happiest hacking Perl at 3 AM. I hate what's happened to Perl because Perl 6 has taken so long to ship. Perl 5 is outdated. Moose helps, but it's gotten so bad for so long that I can't code in Perl and collaborate with anyone. I write clean Perl, but it doesn't matter - everyone else moved on years ago and disdains it. So I hack Ruby, Python, etc. because I have to.

I hope Perl 6 changes that. Is it ready to use?


No, but this is a good time to play with it ... Rakudo Star should be released in the next 2-3 weeks or so.

The problem with Perl 6 is that it's too ground-breaking, without having the resources of a huge company (like Sun or MS) behind it.

But it doesn't matter, because you can't find the combination of features it provides in other languages (a couple are sketched after the list) ...

  * full OOP
  * optional typing
  * multi-dispatch
  * macros
  * cool pattern matching
  * functional features ... map, fold are at home ;)
  * continuations / coroutines
  * transactional memory
  * eventually it will be compatible with every package on CPAN
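
To give a feel for the first few items, a minimal sketch; the exact syntax accepted by the Rakudo of the day may differ slightly:

  # Sketch: optional typing, multi-dispatch and signature-level pattern matching
  multi sub fac(0)      { 1 }                  # dispatch on a literal value
  multi sub fac(Int $n) { $n * fac($n - 1) }   # dispatch on a typed parameter
  say fac(5);                                  # 120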
Personally I don't like Perl (either 5 or 6); I'm more interested in Parrot, because what are widely viewed today as general-purpose VMs (the JVM / .NET) are a poor choice for dynamic languages.

Consider that the JVM and .NET are stack-based. Because of that, efficient continuation-passing style is impossible on top of them, and that's a PITA because you can build any control structure on top of CPS ... tail calls, closures, coroutines, exceptions, green threads ... can all be implemented efficiently in terms of it.
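
As a toy illustration of what "implemented in terms of CPS" means (this shows the style in Perl 6-flavoured code, not how Parrot actually does it):

  # Continuation-passing style: the continuation &k says what to do with the result
  sub fac-cps($n, &k) {
      $n == 0 ?? k(1) !! fac-cps($n - 1, -> $r { k($n * $r) });
  }
  fac-cps(5, -> $r { say $r });   # prints 120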

Another thing against the JVM / .NET is that they'll never gain optimizations for dynamic languages that could impact the performance of their first-class citizens. You can forget about an efficient tracing compiler on top of those, simply because such optimizations are not cheap and can impact the performance of Java / C#, and those VMs are not low-level enough to implement it at a higher level.

And, for instance, I'm personally skeptical that InvokeDynamic on the JVM will deliver that much performance. Yeah, it provides a mechanism for a monomorphic inline cache, and from there the JVM can do stuff like inlining call sites.

But it doesn't do tracing of types, which means that code like the following ...

  sum = 0
  for i in range(1000):
      sum += i
will always pay the price of boxing/unboxing those primitives; avoiding that is an optimization that current JavaScript VMs and LuaJIT 2 already do.

So if Parrot delivers, it will be one heck of an alternative. Either way, its interface is nice and you can use it right now for prototyping new languages.


Actually, Rakudo Star is probably about two months off at the moment. Q2 was the originally announced timeframe, then April was batted about, but the lead developer has had major family health issues taking up his time, so we elected to push it back towards the tail-end of Q2.

On the other hand, it's already a pretty good time to play with Rakudo Perl 6. It's still got some rough edges, and it's slow, but it's also expressive and powerful.


A tip: It seems the Perl job market varies a lot depending on where you are. If you're willing to move, you can probably get a Perl job. (See history at jobs.perl.org for ideas.)


Language is much less important to me than the actual work, and I can use whatever language I want in my job. The problem is there are so few people who code Perl anymore it makes it hard to collaborate if I use Perl.


My point was that if you check job boards etc, there seems to be lots of Perl jobs/people around. The closest hub to me is probably London.

But if popularity is a factor, then go PHP...


Right, and my point was I don't want a Perl job. I've had Perl jobs, from jobs.perl.org. I don't want a PHP job, nobody would work with me in that language either. Most people use Ruby/Python in my circles these days.

I find language-specific jobs incredibly lame. Language is a small factor in what I do. I just want to be able to say, "I'm gonna do it in Perl," and not have everyone groan and not want to work with me on it.


You could have written that "in my circles" part from the beginning. :-)


If you don't define your circle as 'workplaces from jobs.perl.org' then you have trouble using Perl ;)


Well, I do -- at least for a subset of those geographic areas.

Something similar goes for most any language, except (probably) Java and PHP. Unless you already live in e.g. London or some other hub for a given environment.


  It has been said that Ruby is Smalltalk with a perly syntax. 
  Perl6 extends on that: Perl 6 is Lisp with a perly syntax.
Interesting idea, but I would have loved to see some examples or comparisons. For all of the mindshare that Perl has, I haven't seen much content (especially code) about Perl 6 over the last couple of years.


Here are some links showing recent Perl6 content/code:

* Perl6 advent calendar: http://perl6advent.wordpress.com/

* Moritz Lenz perl6 blog: http://perlgeek.de/blog-en/perl-6/

* Carl Masak perl* blog & github projects: http://use.perl.org/~masak/journal/ http://github.com/masak

* Perl6 planet: http://planetsix.perl.org/

* Perl6 book: http://github.com/perl6/book

* Perl6 examples: http://github.com/perl6/perl6-examples


Official Perl 6 homepage with links to (nearly) all of the above: http://perl6.org/


Thanks—I'll check those out!


You're very welcome.

This link may also be of interest: http://transfixedbutnotdead.com/2010/01/14/anyone-for-perl-6...

From the links in the post you can find Perl5, Ruby, Javascript and Python comparative examples.

Been playing a bit with Io (http://www.iolanguage.com/) recently so was planning to add a comparative post for that as well.

Update: I just copied over my Io example I had brewing into a gist (http://gist.github.com/372559)


I may have missed the point, but it seems to me that the gist of the post is, "Perl 6 is different because it's the same as Lisp".


I think you missed the point. The point is that Perl 6's "rules and grammars" are a profound innovation that will affect the future of language design.


As far as I can tell grammars are a way of organizing a bunch of regexes together so you can run them against a document to match and extract all the data. It might make it easier to write parsers but that is a small, though admittedly annoying, subset of programming problems. It doesn't seem like a general enough feature to qualify as "profound".


Parsers are already fairly easy to write in lispy languages. They just have a different name in that context -- "recursive functions".

Pick a famous computer scientist, and the odds are better than even they've written a paper or two on this topic.


I have no idea what you've just said, but it doesn't make any sense.

Of course parsers are described by recursive functions; that's because recursive functions usually describe a push-down automaton or a DFA, which, you know, is needed for a parser.

But describing a parser is not easy, even in lispy languages ... you still need a declarative syntax (like BNF or PEGs) for those rules, you still need a strategy (like LL(*) or LALR(1)), you still need to deal with context-dependent constructs, and you still need an AST that must be optimized.

Text parsing is also not one of Lisp's strengths. Many implementations don't even have proper regexp engines ... but indeed, it's easy to translate any language to LISP as long as your goal is not to generate something human readable.


Lisp has portable libraries for regular expressions.

Like this one: http://weitz.de/cl-ppcre/


Thanks for the downvote ... I was more interested in what you meant in your comment, rather than proving me wrong about regexp engines.


I recognize that's the point the author made, I'm just not convinced of it. Show me the power through code. Right now it's just words.


Larry Wall himself said that:

Perl 6 is Lisp with a decent syntax, and minus a lot of assumptions about how lists should be represented :)

ref: http://irclog.perlgeek.de/perl6/2010-01-15#i_1905210

He then goes on to mention that despite all the trappings of OO, Perl 6 is fundamentally more in the FP camp.


"Perl 6 is Lisp with a decent syntax,"

Calling Perl syntax "decent", and by implication better than Lisp syntax (such as there is), is mind-bending.


Perl's syntax is designed to be closer to natural language.

Both Lisp and Perl are in the same camp ... I could hardly read an algorithm written in them before learning and actively working with them.

Either way, I'm tired of languages that can only be extended by adding functions. Declarative APIs require much more than that.


> Perl's syntax is designed to be more close to natural language.

s/natural/English/. Some languages can be parsed by a computer without problems. For example, a simple 15 KB C program can parse 100% of technical text in Esperanto, 98% of technical text in Ukrainian (my native language), 84% of technical text in Russian, and so on. A not-so-simple parser can parse more than 100% of the text, i.e. it can fix errors in the text.

For me, shell is much closer to my native language than perl or SQL.

PS. AFAIK, there is no natural language in the wild. :-/ All human languages are artificial.


I think bad_user might be referring to Larry Wall's post Natural Language Principles in Perl

ref: http://www.wall.org/~larry/natural.html


Perl has a lot of syntax, and Lisp has almost none. Which one is better just depends on your opinion about syntax itself.


Lisp also has a lot of syntax, but it is ON TOP of s-expressions.


Hold Common Lisp next to Perl 5, and you tell me which one has "a lot" of syntax.

I realize that Lisp has some syntax, and of course the fact that everything fits into S-exps is important, but I would not say that it has "a lot" of syntax compared to the vast majority of active languages.


Let's just look at the syntax of Common Lisp's LOOP construct:

    loop [name-clause] {variable-clause}* {main-clause}* => result*

    name-clause::= named name 
    variable-clause::= with-clause | initial-final | for-as-clause 
    with-clause::= with var1 [type-spec] [= form1] {and var2 [type-spec] [= form2]}* 
    main-clause::= unconditional | accumulation | conditional | termination-test | initial-final 
    initial-final::= initially compound-form+ | finally compound-form+ 
    unconditional::= {do | doing} compound-form+ | return {form | it} 
    accumulation::= list-accumulation | numeric-accumulation 
    list-accumulation::= {collect | collecting | append | appending | nconc | nconcing} {form | it}  
                         [into simple-var] 
    numeric-accumulation::= {count | counting | sum | summing | 
                             maximize | maximizing | minimize | minimizing} {form | it} 
                            [into simple-var] [type-spec] 
    conditional::= {if | when | unless} form selectable-clause {and selectable-clause}*  
                   [else selectable-clause {and selectable-clause}*]  
                   [end] 
    selectable-clause::= unconditional | accumulation | conditional 
    termination-test::= while form | until form | repeat form | always form | never form | thereis form 
    for-as-clause::= {for | as} for-as-subclause {and for-as-subclause}* 
    for-as-subclause::= for-as-arithmetic | for-as-in-list | for-as-on-list | for-as-equals-then | 
                        for-as-across | for-as-hash | for-as-package 
    for-as-arithmetic::= var [type-spec] for-as-arithmetic-subclause 
    for-as-arithmetic-subclause::= arithmetic-up | arithmetic-downto | arithmetic-downfrom 
    arithmetic-up::= [[{from | upfrom} form1 |   {to | upto | below} form2 |   by form3]]+ 
    arithmetic-downto::= [[{{from form1}}1  |   {{{downto | above} form2}}1  |   by form3]] 
    arithmetic-downfrom::= [[{{downfrom form1}}1  |   {to | downto | above} form2 |   by form3]] 
    for-as-in-list::= var [type-spec] in form1 [by step-fun] 
    for-as-on-list::= var [type-spec] on form1 [by step-fun] 
    for-as-equals-then::= var [type-spec] = form1 [then form2] 
    for-as-across::= var [type-spec] across vector 
    for-as-hash::= var [type-spec] being {each | the}  
                   {{hash-key | hash-keys} {in | of} hash-table  
                    [using (hash-value other-var)] |  
                    {hash-value | hash-values} {in | of} hash-table  
                    [using (hash-key other-var)]} 
    for-as-package::= var [type-spec] being {each | the}  
                      {symbol | symbols | 
                       present-symbol | present-symbols | 
                       external-symbol | external-symbols} 
                      [{in | of} package] 
    type-spec::= simple-type-spec | destructured-type-spec 
    simple-type-spec::= fixnum | float | t | nil 
    destructured-type-spec::= of-type d-type-spec 
    d-type-spec::= type-specifier | (d-type-spec . d-type-spec) 
    var::= d-var-spec 
    var1::= d-var-spec 
    var2::= d-var-spec 
    other-var::= d-var-spec 
    d-var-spec::= simple-var | nil | (d-var-spec . d-var-spec)
I don't think there are many active languages with a more complex LOOP construct. Add to that that some implementations have an extensible LOOP that allows adding even more syntax to it.

The syntax for DEFCLASS:

    defclass class-name ({superclass-name}*) ({slot-specifier}*) [[class-option]]

    => new-class

    slot-specifier::= slot-name | (slot-name [[slot-option]])
    slot-name::= symbol
    slot-option::= {:reader reader-function-name}* | 
                   {:writer writer-function-name}* | 
                   {:accessor reader-function-name}* | 
                   {:allocation allocation-type} | 
                   {:initarg initarg-name}* | 
                   {:initform form} | 
                   {:type type-specifier} | 
                   {:documentation string} 
    function-name::= {symbol | (setf symbol)}
    class-option::= (:default-initargs . initarg-list) | 
                    (:documentation string) | 
                    (:metaclass class-name) 
there is much more of this.


LOOP is a good example of syntax creeping its way into Lisp (defclass, not all that much IMO).

As a counterexample that almost makes LOOP syntax look quaint, consider Perl regexes. That is some syntax right there.


Common Lisp has more than 30 special syntactic constructs: block, catch, eval-when, LET, ..., unwind-protect.

It has probably more than a hundred macros that implement syntax: DEFUN, LOOP, DEFMACRO, WITH-OPEN-FILE, DEFPACKAGE, PPRINT-LOGICAL-BLOCK, HANDLER-CASE, ...

It has various basic syntactic elements like function lambda lists, macro lambda lists, etc.

It has FORMAT string syntax.

I'm not trying to win a contest with Perl and its syntax, but thinking that Common Lisp has almost no syntax is misguided. As I mentioned, in Common Lisp much of the syntax is implemented on top of s-expressions.

Stuff like regexp syntax is implemented in Common Lisp libraries. Like this one: http://weitz.de/cl-ppcre/ .


LWall > Lisp with a decent syntax

I'm no Lisp guru, not even proficient, but isn't saying that kind of missing a large point of Lisp, in that code is data and data is a list/s-expression or whatever it's called?


I think the point is "with rules you don't need everything to be an s-expression". I don't know how much to agree or not with that point. Currently I'm of the "we'll see where this goes" point of view.


> "with rules you don't need everything to be and s-expression"

I am unsure if I would call it a feature.


From a Perl point of view, I think it's the right way to go. There are a few "standard" tricks to simulate extending Perl5, but they often feel hackish. From my understanding this will be more flexible and explicit. I don't have the Lisp experience to compare the two. My guess is Lisp macros will still be more flexible but the gap will be significantly narrowed.


Whatever Larry is smoking, I don't want it.


I'm also not sure if that will help Perl 6.


It thankfully steals nothing from PHP

I love this quote.



