Hacker News
Raku: A language for gremlins (buttondown.email/hillelwayne)
393 points by azhenley on Aug 7, 2023 | 249 comments



One way you might arrange programming languages in a 2D space is with two axes:

1. How much should the language surprise you?

2. When the language does surprise you, should it delight you or horrify you?

              surprising
                   ^
                   |
                   |
                   |
    delight <------+------> horrify
                   |
                   |
                   |
                   v
              unsurprising
Only a sadist would deliberately design a language for the top right quadrant, but there are many esoteric languages in there. I think most people tacitly assume that all well-behaved languages belong on the bottom left: the language should rarely surprise you, but in the rare case it does, hopefully it's a delight.

Raku seems to overtly aim for the mostly-unoccupied top left. It's like the designers and users (two sets with apparently very large overlap) all think, "Yes, it's weird. Isn't that grand!"


The problem is those are subjective axes. And things can both delight and horrify; I wrote some javascript once where I did document.write = function ... which is kind of delightful in that I was able to do what I needed as a result, but also pretty horrifying :)

Other people I showed it to felt it was more on the horrifying side, but like I said, it's subjective.


This is true. Oftentimes it's simple: stuff I wrote, delightful; stuff other people wrote, horrifying.


This was early Perl code for me (early 2000s). Write once; read never. Once you and your teammates developed some discipline, it got much better.


I think that this is the crux of the story for Raku ... if you want your language to apply a standard discipline, then use Python; if you are happy with your team developing a dialect that balances usability with discipline, then try Raku


    if you want your language to apply a standard discipline, then use Python
Wow, I hear this stuff all the time about Python. It is a myth. I have worked on multiple 500K+ line Python projects. All of them decay into a total mess due to weak typing. You can type hint until you are blue in the face, but it is never enough if it isn't guaranteed by a compiler or at runtime. So many times, I am 29 levels deep in a call stack, looking at a local variable that is incorrectly typed, thinking to myself: "Oh fuck, this again." Yes, I will get 8,000 downvotes for that comment, but it does not detract from my personal experience. It takes superhuman talent to fight that trend. Have you seen what Dropbox did with that Finnish dude who got a PhD literally studying how to make Python more type safe with static analysis? Jesus fuckin' Christ: Amazing work, but it would be much easier to pick a different language for something that will grow to millions of lines of code! And, sadly, I write that sentence as an unashamed Python fan-boi. I truly wish there were some kind of "strict-er typed" Python mode -- or something.
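The complaint is easy to demonstrate in a few lines (my sketch, with an illustrative function name): Python's type hints are not checked at runtime, so an annotated function happily accepts the wrong type.

```python
# Type hints are not enforced at runtime: this runs without complaint.
def double(x: int) -> int:
    return x * 2

result = double("42")  # a static checker like mypy would flag this call...
print(result)          # ...but at runtime str * int is repetition: "4242"
```

A static analyzer catches the bad call, but nothing stops the program from running with the wrong type; that gap is exactly what the wish for a "strict-er typed" mode is about.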


> I have worked on multiple 500K+ line Python projects.

There were already several 1M+ Perl LoC code bases by the end of the 1990s, as Perl use exploded at places like Amazon.

One key driver of Raku's design was making it easy to write large, clean, readable code bases, and easy to enforce coding policies if and when a dev team leader chose to.

(Of course, as with any PL, you can write messy unreadable code -- and you can even write articles and record videos that make it seem like a given PL is only for Gremlins. But it's so easy to write clean readable code, why wouldn't you do that instead?)

> type hint ... is never enough if it isn't guaranteed by a compiler or at runtime

Indeed. Type hints are a band-aid.

Raku doesn't do hints. Raku does static first gradual typing.[1]

The "static first" means static typing is the foundation.

The "gradual typing" means you don't have to explicitly specify types, but if you do specify them, they're always enforced.

Enforcement is at compile time if the compiler's static analysis is good enough, and otherwise at run time at the latest.

For example, in the Gremlins article only one example uses a static type (`Int`). But user code can and routinely does include types, and they are enforced. For example:

    sub double(Int $x) { $x * 2 }
    double '42'
results in a compile time error:

    SORRY! Error while compiling ...

    Calling double(Str) will never work with declared signature (Int $x)
> total mess due to weak typing

I know what you mean, but to be usefully pedantic, the CS meaning of "weak typing" can be fully consistent with excellent typing discipline. Let me demonstrate with Raku:

    sub date-range(Date $from, Date $to) { $from..$to }
The subroutine (function) declaration is strongly typed. So this code...

    say date-range '2000-01-01', '2023-08-21';
...fails at compile time.

But it passes and works at run time if the function declaration is changed to:

    sub date-range(Date() $from, Date() $to) { $from..$to }
(I've changed the type constraints from `Date` to `Date()`.)

The changed declaration works because I've changed the function to be weakly typed, and the `Date` type happens to declare an ISO 8601 date string format coercion.

But what if you wanted to insist that the argument is of a particular type, not one that just so happens to have a `Date` coercion? You can tighten things up...

    sub date-range(Date(Str) $from, Date(Str) $to) { $from..$to }
...and now the argument has to already be a `Date` (or sub-class thereof), or, if not, a `Str` (Raku's standard boxed string type), or sub-class thereof, that successfully coerces according to `Date`'s ISO 8601 coercion for `Str`s.


And to finish that last bit, if an argument of a call is not a `Date` or `Str`:

* The compiler may reject the code at compile time for failing to type check; and, if it doesn't:

* Otherwise, the runtime will reject the call for failing to type check, unless the argument is either a `Date`, or a `Str` that successfully coerces to a `Date`.
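As a rough Python analog (my sketch, not from the thread): the "accept the right type, coerce from a string, reject everything else" behavior that Raku's `Date(Str)` constraint gives declaratively has to be hand-rolled.

```python
from datetime import date

def date_range(frm, to):
    # Hand-rolled analog of Raku's Date(Str) coercion type:
    # pass a date through, coerce an ISO 8601 string, reject the rest.
    def coerce(v):
        if isinstance(v, date):
            return v
        if isinstance(v, str):
            return date.fromisoformat(v)
        raise TypeError(f"expected date or ISO string, got {type(v).__name__}")
    return (coerce(frm), coerce(to))

print(date_range("2000-01-01", "2023-08-21"))
```

The point of the Raku version is that all of this boilerplate disappears into the signature.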


Then make it a 3D chart, with "likelihood" of a language being at a point or region in the space. The axes would then be "Surprise", "Delight", and "Horrify".


Context is critical. Unless you're a beginner with the language, surprise in a production codebase is horrifying. The Perl philosophy has never been compatible with lack of surprise. Perl wants to be like natural language, and natural language has limitless surprises.

The problem with Perl (or, I assume, Raku) in production is that the responsible way to read it is like reading every single footnote in an annotated edition of Shakespeare. It sucks the joy out of it, and joy is the point. Production Perl is joyless and therefore pointless, unless you're some kind of prodigy who understands every obscure political reference and every 16th century pun without any help.


> Context is critical. Unless you're a beginner with the language, surprise in a production codebase is horrifying.

Why? Are you assuming a very particular kind of surprise here?

"Oh, I can replace these five lines with a single builtin." is a surprise, and so is seeing that someone else already did so.


I was thinking axes of 'readability' and 'expressiveness' would make a nice chart for programming languages.


I don't think those are orthogonal though. For example, if a language lacks expressiveness, that can hinder its readability. But on the flip side, if it is highly expressive, that can also potentially hurt readability, especially if the reader isn't very familiar with the language. Both are somewhat subjective (readability more so than expressiveness), and context specific.


Exactly, they don't need to be orthogonal in my opinion. Every language should strive to be in the top right quadrant.

Keep in mind, this is just what I value in a language. The perfect programming language to me is one that is expressive while also being easy to read. I think Python is nice in that regard, and probably why it's so popular to this day. But there's always room for improvement!


As long as the negative sides of both axes are included too.

            e
            |
            |
            |
   -r ------------- r
            |
            |
            |
           -e


Filling in language names is left as a fun exercise for the reader.


Python top right, Go bottom right, Brainfuck bottom left, Rust or Perl top left.

(Don't kill me fanatics)


Do you think many people would put Rust and Perl next to each other?


I was a professional perl5 developer for years and when working through https://doc.rust-lang.org/book/, kept remarking to myself how similar the languages are, in terms of expressiveness, complexity, and implied developer disciplines.

Rust feels like an industrial version of Perl to me.


In this chart, sure. Not like, the same dot, but both in that quadrant. But those are my personal opinions, as I find both difficult to read yet very expressive. YMMV.


Interesting. I haven't heard anyone complain that Rust is harder to read than the average language, just stuff about writing it.


Having never written anything in Rust, but having read and “translated” some code (maybe 3k LOC?) from Rust to Ada, I can say that I found Rust rather hard to read, and harder than e.g. Ruby (which I also do not know how to write but translated to Java once).

Here is what I guess makes it hard to read for me: there are many terse keywords (fn, mut) and symbols (&, [], ->), along with some things that look like Java annotations and are equally hard to understand without knowing the language. Array slices look easy enough to understand, [a..b], but surprise: b is actually exclusive? The array type syntax [u8; n] is the weirdest I have ever encountered. To me Rust looks much like C++ with a twist, and people rightfully complain about C++ often :)

The documentation about the language and libraries was solid for my cases, and most of the time it seemed enough to ignore most of the tiny keywords and symbols without losing much understanding of what the code intends to do. If I were given the choice I'd probably still prefer Rust's weird syntax with added safety over the clarity of C, which offers no safety at all (not talking about obfuscated code contests here).


> To me Rust looks much like C++ with a twist and people rightfully complain about C++ often :)

But would you put C++ next to Perl in terms of difficulty reading? I wouldn't have thought they were particularly close. (In terms of normal code at least, not the implementations of ultra-generic templates.)


The difficulty of reading Perl is probably overclaimed and overrepresented. Let's not forget that we are comparing a pretty darn low-level language to a fairly high-level one.

And regarding design principles, I think the comparison is clearly on point. These are languages where the motto is basically "if you can't do it (if you can do it but it's perceived too long that also counts) then the language needs to be horizontally extended to account for this use case".

My vague impression was that Rust is still a (significant) simplification over C++. (Ada is a funny case because it seems to be deliberately designed to be hard to read and hard to write, overall a pain in the arse, in the spirit of "if you never reach flow, you will always be very focused", I guess.)


> "if you never reach flow, you will always be very focused"

That's how I stay alert in every language. :D


Some of the dewrapping constructs can get a bit gnarly looking when there is something wrapped in a wrapper in a wrapper sort of thing. But it's probably largely author-dependent, and you could write the same code more readably.


I found out about Raku when I had an assignment for college where I needed to create a parser. I found Raku's grammar features really interesting and used them for my project. It felt almost like cheating, since it did pretty much everything for me (actually it was probably just cheating, but it was still fun).


That’s exactly what you want in a language. It should always feel like cheating!

Also debuggers. Give rr a try some time :)


I imagine you are referring to https://rr-project.org/ ?

Had never heard of it, looks pretty amazing, I might actually enjoy debugging now!


Low contrast text site.

I know there are workarounds, but we shouldn't need them.


Every time I try it, it doesn't seem to work right. Because I don't feel like debugging my debugger, I just go back to gdb.

Yet I only hear other people speaking of it positively, so it seems they don't have that experience.


It does have some annoying hardware requirements. It works on both modern Intel and AMD Ryzen CPUs, but if your CPU is too old it just can’t. And at any time the next generation of CPUs could break the whole thing, although that hasn’t been a problem yet.


FWIW I had bad experiences with rr shipped with Ubuntu LTS. I build rr from source, it's relatively easy to do.


> it felt almost like cheating since it did pretty much every thing for me

During my undergrad I implemented a semester-long physics project in about 30 minutes. The supervisor had previously taught the Programming in C module, so I asked if I had to use C; he said I didn't, so I used numpy. The code was literally just repeated matrix multiplication.


I mean, it was "Perl 6" so is it really surprising that Perl devs wanted a language that looks nothing like any other language?

Perl has its own healthy share of "delightful surprises", and Raku was mostly designed to eliminate Perl's horrifying surprises.


Apparently by replacing them with new horrifying surprises, if this article is any indication.

Good grief, I don't miss working in Perl.


One person's unspeakable horror is another's creature comfort.


De gustibus ...


>Good grief, I don't miss working in Perl.

Good and grief in the same sentence about the same language ... Perl.

Delighted and horrified?

;-)


Delighted that the screwing around I did in high school from a copy of the Camel Book my mom got me for my birthday gave me a skill I could parlay into a job that turned into a great career. Horrified once I finally got out into the industry proper and discovered all I'd been missing.

It's not that I regret ever having worked in Perl. It's just that I'm glad to have moved on to bigger and better things.


The Windows build still had plenty of Perl in it when I worked on Windows 8.


Given all I've heard of the Windows build process, I think this is much more an argument in support than in opposition.


Some of the Raku people think that showing off the most bizarre parts of the language makes it look good, for some weird reason.


I guess that all depends on what you consider bizarre parts. You of all people must have your own weird reason to say this out of the blue, though.


Even from the perspective of long familiarity with Perl 5, the parts of Raku on show in this article are pretty damn bizarre. Maybe not by the standards of the Raku community, granted - but if this by them is ordinary, all the more reason for me to want to stay away.


It would be invaluable feedback if you could point out what you find so bizarre. I mean, the "let's define some ad-hoc postcircumfix operator" is a weird flex for sure, but otherwise? I could only see basic non-magic subroutine definitions and appropriately used operators.


Where would you like me to start? The apparent variance between list syntax accepted by the reader and emitted by the printer, the brambly thicket of special-case operators and grammars ('o' for function composition? Square brackets to reduce over an operator? Square brackets with a backslash to map? - I remember the "Periodic Table of the Operators", but even by that standard this is wild), function signatures as first-class values (...what?)

But honestly, I don't think I can provide actually useful feedback here, because I'm pretty sure nothing I'm doing is similar enough to anything Raku is doing for the conversation to really make sense.

Everything I've just described, and much else about the language, strikes me very much as needless complication for the sake of complication, the sort of thing that earned Perl 5 its "write-only language" sobriquet by encouraging, if not necessitating, the development of idiolects among its users - and Raku if anything seems to do much more of this than Perl 5 ever did.

That's not actually a bad thing for a small codebase maintained by one programmer, for whom the high cognitive complexity imposed by the language design can be managed by mostly working within their idiolect. But my experience has been that the maintainability of such a codebase decreases as at least the square of the number of people working on it, and that quickly becomes a catastrophe in the sorts of very large codebases, shared among tens or hundreds of engineers, in which I've spent the bulk of the last decade working.

That's not a world in which I see Raku being able to survive, and my experience there has led me to value languages and practices that are actively as un-clever as possible - because the cleverer something is, the harder to understand, which becomes a real problem when understanding it is necessary to resolve an outage denying service to millions of users.

That's also not a world in which I would expect Raku to try to survive, because it's obvious to me that that's not what Raku is meant for. Nor should it be; that that's not the place for such carefree, freewheeling weirdness for weirdness' sake isn't the same as saying no such place exists. It's just that that's a place I'm glad I don't live any more.


Isn’t it the same as saying "you are given hammer and nails only because anything more complex would make things harder for the rest of your team"? But Raku, by the way, could be used that way: very simple, and you don't have to use all its capabilities.


> The apparent variance between list syntax accepted by the reader and emitted by the printer

Could you provide an actual example for this? I suspect this is a simple misunderstanding.

> 'o' for function composition? Square brackets to reduce over an operator? Square brackets with a backslash to map?

I never used 'o' myself but YES, square brackets to reduce over an operator. Indeed, I see nothing wrong with that (okay, I actually do but not with the principle, rather the parsing). And "square brackets with a backslash" is NOT a map, it's a produce. The blog post got it right.

> function signatures as first-class values (...what?)

Okay, now give me a break... this... is a "what"? Are you real now? A perfectly sensible and useful feature, a "what"? To be able to investigate function signatures and pass them around appropriately without constant hacking, is that bizarre? I guess airplanes are also bizarre to some...

So yes, I think what you present here might be the other extreme: rendering anything that might be slightly clever as "weird" or "complicated", even if it clearly reduces the custom tinkering each individual developer would have to do...

The situation is a bit touchy because I think your characterization of the Perl mindset living on in Raku as well (C++ is a similar language in a way) is legitimate but it seems to be based on "fortunate" prejudices and stereotypes rather than a good understanding of the situation. You haven't seen the bad parts yet and you apparently called some of the good parts bad. Not sure if it can be helped. I don't think the language will be cleared up anyway but I also wouldn't want to go as "unclever" as you seem to find desirable. Languages like Python and Rust can thrive despite not being completely dumb. Opinionated yes, dumb no.

Also, I have been thinking that maybe the way to find industrial use for Raku (or this broader mindset overall) is to outright reject this framing that all projects need to be monolithic. Yes, you may be right that Raku will never be a language where dozens of people work on millions of lines of code together, for decades. What if your services can be broken down into pieces that CAN be managed by a couple of people tightly co-operating? What if these services can be developed and deployed so efficiently that instead of maintaining them for eternity, you can just start anew and replace them cheaply?

There is the common wisdom "what one person can't fully comprehend, definitely contains flaws". There is some truth to it for sure, and it does seem like with the rise of microservices, there would be place for software developed in a much more distributed manner. Now, there are practical reasons why Raku didn't catch up, at least not yet - but they are absolutely not language philosophy reasons.


> Could you provide an actual example for this?

Lists are given to the reader in angle brackets and printed in parentheses, or at least that's how it looks. I suppose the other possibility is that these are actually different types, but I'm not sure that's better.

"Produce" is a term I've never seen used this way before, and I assume it is unique to Raku. It appears to be a special case of reduction, not at all common in my experience, but presumably very common where Raku is used.

> Okay, now give me a break... this... is a "what"? Are you real now? A perfectly sensible and useful feature, a "what"? To be able to investigate function signatures and pass them around appropriately without constant hacking, is that bizarre? I guess airplanes are also bizarre to some...

That I half expected something in this nature is what had me in two minds over whether to reply to your prior comment at all. I suppose I'm glad there's not more of it, but really, this kind of attitude adds nothing to the discussion, and it certainly doesn't help make anyone's case that Raku has a community worth trying to be part of.

Beyond all that, asking people for feedback you claim to value and then behaving so badly when they take the time to honor your request is a great way not to get any more feedback, not only from that person, but also from everyone else who sees you repay generosity with insult.

(Before you complain that I gave the first insult - yes, I found this very surprising and frankly weird and I was not shy about saying so, nor need I have been. If an honestly bewildered "what?" is something you can't help taking personally, then the task of soliciting feedback may best be delegated to someone who has thicker skin - I have been in this line of work for a long time now, from which experience I can with certainty say that as user feedback goes, "what?" is nothing.)

Function signatures, as opposed to functions, being first-class runtime values is objectively weird. Deriving one signature from another at compile time via reflection is unexceptionable and occasionally useful, sure. But the idea that a function's signature in isolation has meaning enough to treat as a runtime value in its own right is something I don't recall ever having seen in close to forty years of programming across a widely varied range of languages.

I did some looking around the docs; this seems to be referring to values that might be returned from

    $?ROUTINE.signature
and if so, then the use cases for it appear to be in line with what I'd expect to see done via reflection, although I admit I have struggled in the time available to find an example of anyone actually using it. So in the context of a language without a clearly defined concept of "compile time" this does make sense.

It might have done so more quickly, had you been more inclined to consider perspectives other than your own - the lack being again not something that does much to commend the community you've chosen to represent as one worth joining. Your problem to solve or not just as you choose, of course, but do perhaps consider that behaving as if an adolescent in a tantrum fails to leave a good impression.

As for the rest of it, I think that can be boiled down to two points:

> You haven't seen the bad parts yet and you apparently called some of the good parts bad. Not sure if it can be helped.

I'm not sure it can either, but you would do well to consider that the confusion you describe may have occurred because Raku is legitimately confusing.

Perl 5 has the same problem, and so did I in growing past it: there are not actually all that many novel concepts here or anywhere in programming, but this universal fanaticism for idiosyncrasy necessarily incurs as a side effect that if you want any of your stuff to make sense to someone who isn't already familiar with it, you have to spend a hell of a lot of time explaining the correspondences first. The incoherent farrago of syntax is just the most obvious example of this, and Raku does it a lot more than Perl 5 did! Not for nothing was the older language so often derided as line noise, or the first two letters of its name glossed as "Pathologically Eclectic" - jokingly perhaps, but many a true word is said in jest, and at some point you really have to admit that you are trying people's patience. If whatever you're getting in exchange for that is worth it to you, well and good, but you do yourself no favors by refusing to acknowledge the tradeoff that's been made.

- Microservices aren't an answer here. I have worked in such architectures, at scale sufficient to support billions in annual revenue with tens of engineers, and I would've been laughed out of the room for suggesting any of them should be implemented in Raku. Decomposing an architecture into services doesn't mean those services do not themselves need to be maintainable under the constraints I earlier described, including if both of your "couple of people tightly co-operating" die in the same plane crash. Modern engineering organizations work hard to increase the bus factor, and this approach not only militates directly against that but makes it practically impossible, not least because where else are you going to find anyone who can understand the code? (I am reminded strongly at this point of "The Bipolar Lisp Programmer" [1].)

If nothing else, I see in Raku a language of which specifying and enforcing a remotely maintainable subset, if possible at all, would require a whole lot of effort that could be better placed elsewhere - such as serving the needs of the business that as an engineer in industry is inevitably your client, even if that client happens also to be you.

I don't know why you would want Raku to "catch up", since a Raku capable of doing this would long since have ceased to be a Raku that an obvious partisan such as yourself would recognize or appreciate. But if that is what you want, you need above all else to show - quickly, clearly, and in a way that's compelling - how Raku is uniquely capable of making engineering organizations able to ship fast and ship well, in ways that can't be matched by anything among the very large set of more approachable, more supportable alternatives.

From what I've seen thus far, there's a lot of work ahead to get there. If it really means that much to you, I confide you'll find a way to spend your time on that, rather than fruitlessly argue any further at me.

[1] https://www.marktarver.com/bipolar.html


> "Produce" is a term I've never seen used this way before, and I assume it is unique to Raku.
More commonly referred to in the functional languages as a `scan`, but yes, it seems that `produce` might be unique to Raku. I've not seen `accumulate` outside of Python, but there's a lot of langs so who knows.

code_report (Conor Hoekstra) has discussed the different names of `scan` on a couple of occasions, eg: https://twitter.com/code_report/status/1246494250537291776

Conor has declared a fondness for APL's notation, where fold and scan show a clear visual relation: `/` & `\`. I suspect Raku was shooting for something similar with its choices:

    [f] list     # or list.reduce(f)
    [\f] list    # or list.produce(f)
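For comparison (my sketch, not from the thread): Python spells the fold/scan pair as `functools.reduce` and `itertools.accumulate`.

```python
from functools import reduce
from itertools import accumulate
import operator

nums = [2, 3, 4]
total = reduce(operator.mul, nums)               # fold:  24
prefixes = list(accumulate(nums, operator.mul))  # scan: [2, 6, 24]
print(total, prefixes)
```

Same relationship as Raku's `[*]` and `[\*]`: the scan keeps every intermediate value, and its last element equals the fold.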


I've just found the key exchanges that arrived at "produce" by using the IRC log search[1] and then, er, scanning backwards to where the exchange began[2]:

> TimToady: is it okay to rename the 'reduce' builtin to 'fold' and add one for 'scan'? My understanding is that 'reduce' is the general term for them both.

[1] https://irclogs.raku.org/search.html?query=produce+reduce&ty...

[2] https://irclogs.raku.org/perl6/2006-05-15.html#17:10-0001


I recall Larry saying the key visual aspect is the `\` (I don't recall him mentioning the `/`) because `[\` visually echoes the typical shape of results:

    .say for [\**] 2,3,4

    4                          #           4
    81                         #      3 ** 4
    2417851639229258349412352  # 2 ** 3 ** 4


> Lists are given to the reader in angle brackets and printed in parentheses, or at least that's how it looks. I suppose the other possibility is that these are actually different types

They are different types:

[1,2,3] is an array, with mutable elements, that can be expanded or shortened.

(1,2,3) is a list, with immutable elements, of fixed length.

> find an example of anyone actually using it

A common example of signature introspection is when a routine accepts a lambda as a parameter, and adapts its function depending on the number of arguments the lambda expects (the arity: &foo.signature.arity, see https://docs.raku.org/syntax/Arity).

Two examples:

The "sort" function optionally accepts a lambda: if that takes one argument, then a Schwartzian transform will be done under the hood. If it takes two, then the lambda will be used as the comparator.

The "map" function expects a lambda. Depending on the number of arguments it takes, it will take that many arguments each time from the iterator it runs on.

    say (1..12).map( -> $a, $b, $c { $a - $b * $c } )  # (-5 -26 -65 -122)
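A rough Python sketch of that arity-driven behavior, using `inspect.signature` (the helper name `map_by_arity` is mine, and trailing partial groups are simply dropped here, which may not match Raku's actual semantics):

```python
import inspect

def map_by_arity(fn, items):
    # Consume as many items per call as fn declares parameters,
    # mimicking Raku's map adapting to the lambda's arity.
    arity = len(inspect.signature(fn).parameters)
    it = iter(items)
    out = []
    while True:
        chunk = []
        try:
            for _ in range(arity):
                chunk.append(next(it))
        except StopIteration:
            break
        out.append(fn(*chunk))
    return out

print(map_by_arity(lambda a, b, c: a - b * c, range(1, 13)))
# [-5, -26, -65, -122]
```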


> They are different types...

Okay, we have arrays and tuples in square brackets and parens respectively. Where do the angle brackets come in?

> when a routine accepts a lambda as a parameter, and [via runtime introspection] adapts its function depending on the number of arguments the lambda expects

I note from the article that Raku has explicit multiple dispatch. This appears to be the same thing by other means, but probably much harder to optimize.

If so, this would be an application of the same "TIMTOWTDI" principle that governed much of Perl 5's design. But that takes us back to the encouragement of idiolect and the fanaticism for idiosyncrasy, on which I have said a fair bit already. Beyond impugning "ease of writing poetry in it" as an absurd, counterproductive, and frankly twee desideratum in programming language design, I can find nothing more to add.


> Where do the angle brackets come in?

The angle brackets are syntactic sugar for word quoting.

    <a b c>
is syntactic sugar for:

    ("a", "b", "c")
They can be used standalone. Or as postcircumfix on hashes:

    %foo<a>      # the value of key "a" in hash %foo
    %foo<a b c>  # the values of keys "a", "b", "c" in hash %foo
There's a lot more to it than that, but that's the gist of it.

https://docs.raku.org/language/quoting#Word_quoting:_%3C_%3E


Oh, okay, so it's just more sugar over Perl 5-style qw(), which is itself sugar over split().

I heard something once about "cancer of the semicolon"...


> Lists are given to the reader in angle brackets and printed in parentheses, or at least that's how it looks. I suppose the other possibility is that these are actually different types, but I'm not sure that's better.

About that, huh? It was the author's preference to write <foo bar baz> (syntax sugar) instead of ('foo', 'bar', 'baz'). The REPL uses a simple stringification method for the feedback it gives by default. I wouldn't think too hard about it.

> "Produce" is a term I've never seen used this way before, and I assume it is unique to Raku. It appears to be a special case of reduction, not at all common in my experience, but presumably very common where Raku is used.

The point stands: the author of the post did know what it is for (and called it "accumulation"). You didn't know it and mistook it for a map, even with the given output. I don't know if this is all Raku's fault, but in your place I would be more hesitant to jump from one faulty assumption to the next...
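
For readers unfamiliar with the term: "produce" is a scan, i.e. a reduction that keeps every intermediate result instead of collapsing to one value. Python's itertools.accumulate is the closest mainstream analogue; a quick illustrative sketch:

```python
from functools import reduce
from itertools import accumulate

nums = [1, 2, 3, 4, 5]

# reduce collapses the list to a single value
total = reduce(lambda a, b: a + b, nums)   # 15

# accumulate keeps every intermediate result --
# this is what Raku calls "produce" (and what the
# article called "accumulation")
partials = list(accumulate(nums))          # [1, 3, 6, 10, 15]

print(total, partials)
```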

> That I half expected something in this nature is what had me in two minds over whether to reply to your prior comment at all. I suppose I'm glad there's not more of it, but really, this kind of attitude adds nothing to the discussion, and it certainly doesn't help make anyone's case that Raku has a community worth trying to be part of.

Likewise: I half expected that somebody, instead of appreciating good features, would outright label them as a reason why the language is bizarre. Indeed, it is a desperate situation and I don't think there could be any way to win over somebody with prejudices this strong. Nor am I certain if "worse is better" is a good paradigm to act upon.

> you repay generosity with insult

> If an honestly bewildered "what?" is something you can't help taking personally

I must be clueless about people really. I gave you `an honestly bewildered "what"`, according to your own words, and that's now labelled as an insult. Frankly, it's not the greatest pleasure to try to reason with somebody who throws around judgements like that.

> Function signatures, as opposed to functions, being first-class runtime values is objectively weird

I mean, how is one supposed to argue declarations like this?

> Deriving one signature from another at compile time via reflection

Reflection is rarely ever a compile-time feature. It obviously isn't one in dynamic languages, but neither is it compile-time in VM-native languages like Java or C#. It's the metamodel the VMs provide, and the means to access it.

> But the idea that a function's signature in isolation has meaning enough to treat as a runtime value in its own right is something I don't recall ever having seen in close to forty years of programming across a widely varied range of languages.

Arguably, having arguments collected into a hash/array inside a function, and the other way around, spreading a hash/array out into individual arguments, should qualify: something that has been a part of Python forever, and was added in ES2015.

This is mainstream enough but if you are not satisfied, let's roll back to reflection: https://learn.microsoft.com/en-us/dotnet/api/system.reflecti... https://docs.python.org/3/library/inspect.html#introspecting...
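
The Python inspect module linked above supports exactly the kind of arity-driven dispatch the quoted comment describes. A small hedged sketch (the `adapt` helper is made up for illustration, not from any library):

```python
import inspect

def adapt(fn):
    # Count the parameters the callable declares, then call it
    # with a matching number of arguments: runtime introspection
    # of a lambda's signature. `adapt` is a hypothetical helper.
    n = len(inspect.signature(fn).parameters)
    if n == 1:
        return fn(10)
    if n == 2:
        return fn(10, 20)
    raise TypeError(f"can't adapt a {n}-argument callable")

print(adapt(lambda x: x + 1))     # 11
print(adapt(lambda x, y: x * y))  # 200
```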

Objectively weird?

> I'm not sure it can either, but you would do well to consider that the confusion you describe may have occurred because Raku is legitimately confusing.

My point is that Raku may be legitimately confusing and you may be illegitimately confused. The two things don't contradict. I don't need to be convinced about the former. I was curious about the impression and conclusions this particular article has led to. I'm disappointed to see that the reactions apparently aren't even based on the article. I'm not sure that the Raku community could even change the language enough to break down the prejudices.

> I would've been laughed out of the room for suggesting any of them should be implemented in Raku

Again, that has the least to do with being a "write-only language". Of course there are technical merits.

> Decomposing an architecture into services doesn't mean those services do not themselves need to be maintainable under the constraints I earlier described, including if both of your "couple of people tightly co-operating" die in the same plane crash

If the granularity of your system is small enough for any of this to make sense, then YES, it does mean that. The purpose of microservices is also testability, stability and replaceability. The point is that you do not need to keep tinkering with the same piece of code for years, because it already does the one thing it was created for; when a change in requirements makes it unsuited, by all means throw it away and make a new, fitting component. The whole point is that not all software development has to be about reading code more than writing it, and maintaining the same heap of code for eternity. The bus factor also cannot be constantly increased by merely making more people do everything. Once again, I'm not saying that I hold the absolute truth, I am saying, however, that you are neglecting a lot of possibilities.

> I don't know why you would want Raku to "catch up", since a Raku capable of doing this would long since have ceased to be a Raku that an obvious partisan such as yourself would recognize or appreciate

Okay, so we got there just a couple of paragraphs after you were talking about insults. If I had known you would be firing hot takes like a Gatling gun, I wouldn't have asked you in the first place. At this point it's really just an attempt to counter survivorship bias on my side, so that your hot takes don't go unreflected. This particular one will, though.


> I don't think there could be any way to win over somebody with prejudices this strong.

Yes, I have long prior experience of Perl 5 - it was the first non-BASIC language I learned, and from 1995 through about 2010 the language I used for almost everything both at work and at home.

That I have learned from both that experience and what followed, and that not all the conclusions I've drawn from it are favorable to either that language or its successor, you are of course welcome to describe as "prejudice" if you like.

> Nor am I certain if "worse is better" is a good paradigm to act upon.

The phrase was originated by Richard Gabriel during his time as the head of Lucid Inc. Are you at all familiar with the history behind it? Theirs is an honorable enough example, but also one I'd be very concerned to find myself following.

> The whole point is that not all software development has to be about reading code more than writing it, and maintaining the same heap of code for eternity.

You do realize, I hope, that saying stuff like this doesn't really make the "write-only language" cavil that's dogged Perl throughout its history seem any less fair...

> Once again, I'm not saying that I hold the absolute truth, I am saying, however, that you are neglecting a lot of possibilities.

And once again I am saying you have failed as yet to convince me those possibilities might offer productive benefit, especially when I have yet to see the theoretical advantage of language flexibility in microservices materialize in practice.

I'm not saying that means it's impossible it ever will. I am saying that means you have a high bar to clear in making the argument for Raku's particular fitness - but that doesn't seem to be the argument you are trying to make, anyway.

Instead you seem to plead language flexibility in microservices more or less as an excuse, on the idea that niche languages for which knowledgeable engineers are rare as hen's teeth face no barrier here because, after all, you can always throw the code away and write more. And that is not convincing, except inasmuch as I'm finding it hard to imagine by this point you have experience with the costs involved in building production software.

(Also, what's so insulting about being called a partisan? Your partiality is by this point very obvious.)


The problem isn't whether the conclusions you draw are favorable or not, it's that they are not legitimate. It's getting tiresome that I have to repeat it: I'm not at all arguing either in favor of Perl, or against the "write-only" narrative that imo applies to other languages as well, including C++, Scala, Common Lisp, Ruby and whatnot, and I quite clearly know what it means when somebody says Raku is a "write-only" language. Newsflash: it's not about visual appeal, it's about mental portability across people.

> You do realize, I hope, that saying stuff like this doesn't really make the "write-only language" cavil that's dogged Perl throughout its history seem any less fair...

Surprising, right? I frankly cannot parse that sentence but yes, by now I really hope it's clear that I'm not arguing with the "write-only" narrative. I'm arguing that there is a world of things where software doesn't have to be about sticking to the same monolithic code base for decades and pass it around across dozens of people. That there is a world where it is indeed more important to write code than to read it.

> And once again I am saying you have failed as yet to convince me those possibilities might offer productive benefit, especially when I have yet to see the theoretical advantage of language flexibility in microservices materialize in practice.

I hope I succeeded in convincing you about factual things regarding signatures, reflection and the existence of accumulation. I don't have research studies about the productivity of various programming languages in a microservice-based architecture; if you have something like that, I will be happy to read it.

> Instead you seem to plead language flexibility in microservices more or less as an excuse, on the idea that niche languages for which knowledgeable engineers are rare as hen's teeth face no barrier here because, after all, you can always throw the code away and write more. And that is not convincing

This is not convincing because it's a strawman. Let's throw the "niche language" part away and let's replace "after all, you can always throw the code away and write more" (which is true but indeed no argument) with "a small, single-purpose, well-tested tool of any sort is best replaced when that single purpose wears out". Which is what I stated. And it makes sense, given that it was designed and tested around one single purpose. Maintenance is not a virtue but a common technical necessity. If you don't know when to let go of software, then ironically enough, you might as well have been exposed to too much Perl.

Anyway, considering the topics you did not respond to, one can deduce which points I did nail. Also, it seems we are moving away from the insult theme; that way this may be more useful for somebody who is just reading along.


> You check set membership with ∈

Yeah, ok, I see what you mean.

>> 0,2,4...10

> (0 2 4 6 8 10)

>> 1,2,4...10

> (1 2 4 8)

Wait... what? Is it looking the next number up in OEIS or something?


    Unable to deduce arithmetic or geometric sequence from: 1,2,5
    Did you really mean '..'?
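
The deduction the error message describes (constant difference vs. constant ratio, inferred from the seed terms) can be sketched in Python. This is purely an illustration of the idea, not Raku's actual algorithm:

```python
def deduce(a, b, c, n):
    """Guess arithmetic or geometric from three seed terms and
    yield terms up to limit n, loosely mimicking Raku's `...`
    sequence operator. Illustrative sketch only."""
    if b - a == c - b:                          # arithmetic: constant difference
        step, term = b - a, a
        while term <= n:
            yield term
            term += step
    elif a != 0 and b != 0 and b / a == c / b:  # geometric: constant ratio
        ratio, term = b // a, a
        while term <= n:
            yield term
            term *= ratio
    else:
        raise ValueError("Unable to deduce arithmetic or geometric sequence")

print(list(deduce(0, 2, 4, 10)))   # [0, 2, 4, 6, 8, 10]
print(list(deduce(1, 2, 4, 10)))   # [1, 2, 4, 8]
```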


> arithmetic or geometric

See, that doesn't delight me. OEIS is what's needed here.


Well… that was supposedly considered at some point:

https://youtube.com/watch?v=BJIfPFpaMRI&t=1891 (~1m)


Heh, even Larry Wall says "no, that way lies madness"

(now I want to see it in an esolang precisely because he said that)


This is much like my wanting to eventually resurrect my MicroVAX 2100 ... so I can install Eunice on it ... so I can get perl's Configure script to take the other branch of the test that reports "Congratulations, you aren't running Eunice" for real and take a photograph of it.


Welp now I know what "using it in anger" is gonna look like


Do you want it to take the first match? Inputting a unique prefix doesn't sound practical.


That would lead to weird behavior, since the OEIS is sorted by number of references, which I think prefers "interesting" solutions over straightforward ones. For (1, 2, 3), the positive integers are only ranked 5th; the top hit is the Fibonacci numbers. It's funny to think about.


I thought about that. I thought the compiler could error with "Insufficient data for meaningful answer" until you'd added enough digits that the sequence was unambiguous.

Then I tried it in OEIS and as you say there are too many sequences that start [1,2,3,4,5,...] - I think we may be stuck with generators and yield. But they're not exactly delightful.


Convert https://trout.me.uk/data.jpg to sixels and have the compiler emit that?


> sixels

Thanks! My build scripts just got that little bit more entertaining (for me. not for the other devs).


> "Insufficient data for meaningful answer"

Nice reference, by the way.


oh fuck, i missed that


you could special-case arithmetic sequences


Maybe there's some way to extend the compiler to use OEIS. Is there a "liboeis" that lets you download and interact with the OEIS database offline?


There must be one or two for either Python or Perl. Well: https://github.com/sidneycadot/oeis


Damn, that made me laugh.


In my view, Raku was venturing further and further into the top-right corner with every subsequent paragraph.


Pedantry incoming-

one of my greatest pet peeves is when some words in a list, or diagram, are in their adjectival form and some are in their verbal form. "Surprising" + "horrify"

It really unnerves me. The worst is that nearly every company has this in some form in their promotional or educational materials.

E.g. here at Genericorp, we prioritize the following valurs : integrity, kindness, honest,

Etc

You get my point. Drives me nuts this kind of thing constantly slips through the cracks.


Well, the axes are orthogonal. One of them is for the thing's quality, the other is for the thing's action, so you get adjectives along one axis and verbs along the other.


Yeah, that's fair. Not a great example. Still irked me tho.


The grammatical term for this is - perhaps surprisingly, for this crowd - "parallelism".


Its impotent to halve clear valurs.


I like to think Ruby is dead center, by virtue of the fact that sometimes I'm not sure if I'm delighted or horrified.

https://codegolf.stackexchange.com/a/33217


I'm very interested in bottom right langs.


Absolutely JavaScript, in that it is easy to accidentally do horrifying things, but absolutely not surprising if you’ve spent any time working with it, as you quickly get used to how awful it is.


I’ve been in some sort of web development since 2005 and am to this day surprised by just how much awful is packed in there. Most is deprecated - thank God and any other deities - but it’s still there lurking for newbies and unaware developers coming from sane languages.


Have you seen how people can write JavaScript within CSS yet? That's another fun one. :)


Languages where there has been significant modernization and progress, but where codebases and online resources lag behind: CMake, PHP, CSS


Nothing can redeem CMake and CSS in my eyes. To me they fit perfectly in the bottom right, unsurprising horror.

PHP is often just bland drudge, true horror in PHP land is unexpected when I encounter it.


I think my vote goes to ColdFusion.


Assembly on load-store machines? (with perhaps the omission of MIPS where the lack of instruction interlocking can cause surprising things).


Maybe C?


You can definitely get there with C but the language isn’t actively pushing you into horror.


It doesn't actively push you there, which is why it's on the bottom half. There is a fairly coherent mental model for C and once you have it, the language rarely surprises you.

But when it does surprise you, it's almost never anything good. Stuff like Duff's device, `3[someArray] = value`, etc. The surprises are always the language's raw machinery showing through in unpleasant ways, and never a delightful bonus feature the designers added for you.


> The surprises are always the language's raw machinery showing through in unpleasant ways, and never a delightful bonus feature the designers added for you.

I was reading "Writing Solid Code" by Steve Maguire (though it should really be called "how to write code in C without shooting yourself in the foot"). One thing that surprised me, but which made sense, was that pointers can overflow. It's unlikely, and ANSI non-compliant, but something possible nonetheless. Hence, Maguire said that this code:

  void *memchr(void *pv, unsigned char ch, size_t size)
  {
      unsigned char *pch = (unsigned char *)pv;
      unsigned char *pchEnd = pch + size;

      while (pch < pchEnd)
      {
          if (*pch == ch)
              return (pch);
          pch++;
      }

      return (NULL);
  }
has a bug. It surprised me when reading it, because it's such a common language idiom.

"What range of memory would memchr search when pv points to the last 72 bytes of memory and size is also 72? If you said 'all of memory, over and over,' you're right. Those versions of memchr go into an infinite loop because they use a risky language idiom—and Risk wins."

So, he said that that code should be replaced with this code:

  void *memchr(void *pv, unsigned char ch, size_t size)
  {
      unsigned char *pch = (unsigned char *)pv;
      
      while (size-- > 0)
      {
          if (*pch == ch)
              return (pch);
          pch++;
      }

      return (NULL);
  }
Just the little things C lets you think about. This seems like a bug that would occur every once in a blue moon.


Gone through a whole range of emotions looking at this. Tried to put together an argument about the high bits of (common) address spaces being zero and therefore it's safe but I don't think that works. It's the while (i++ < UINT32_MAX) bug in different clothing. Would make for a cruel take on the interview question of "tell me what bugs you see in this function".


>Tried to put together an argument about the high bits of (common) address spaces being zero and therefore it's safe but I don't think that works.

Yep; on AMD64, bits 48 through 63 must be identical to bit 47, which can be 1 or 0, akin to sign extension.

In practice, I don't think any sane OS would let you reserve the very last n bytes of memory, especially not with an address space as large as that of AMD64, but you can't assume the architecture, and you don't always have an operating system.

And yeah, you could see the same bug with integer array indices, if signed integers wrap.

  /* Endless loop if end == INT_MAX */
  for (int i = 0; i < end; ++i)
      /* code */;


Linux kernel space on x64 uses "negative" pointer values so the high bits are set there. Which is probably the more interesting place to find this bug.

Needs to be unsigned to get this failure mode, any signed loop gets compiled assuming no overflow for a different (though similar!) failure mode.

I'm leaning towards "C is surprising". Didn't have to be but as presently implemented is very full of hazards.


> Needs to be unsigned to get this failure mode, any signed loop gets compiled assuming no overflow for a different (though similar!) failure mode.

In C and C++, compilers will optimize assuming that signed integer overflow doesn't happen, but that doesn't stop it from actually happening. Unless you set it to trap on overflow, signed integers still wrap; it's just that compilers make (incorrect) optimizations assuming that it doesn't happen.

You know, this makes me wonder whether it's better for pointers to be compared with signed or unsigned comparisons. Currently, my compiler emits unsigned comparison instructions for them.


Do you mean unsigned branches? JB/JBE/JA/JAE instead of JL/JLE/JG/JGE? Are there actually code patterns where it's preferable to just JE/JNE? AFAIK, computing a pointer outside of the underlying array's boundaries (except for computing the one-past-the-last-element pointer) is UB so e.g.

    for (SomeStruct * curr = p, * end = &p[N]; curr < end; curr += 2) {
        // processing the pair of curr[0] and curr[1], with care
        // in case when curr[1] doesn't exist
    }
is an invalid optimization of

    for (SomeStruct * curr = p, * end = &p[N]; curr != end; ) {
        // processing the pair of curr[0] and curr[1], with care
        // in case when curr[1] doesn't exist

        if (++curr != end) { ++curr; }
    }


> Do you mean unsigned branches? JB/JBE/JA/JAE instead of JL/JLE/JG/JGE?

Yeah, that's what I meant. Thanks for speaking more clearly.

> Are there actually code patterns where it's preferable to just JE/JNE?

That's a good point, and sidesteps the issue of pointer signedness.

I think sometimes JE / JNE isn't enough. For example, if you want to process a buffer in reverse order using pointer arithmetic:

  /* p starts off one past the end of the buffer */
  char *p = buffer + bufsize;
  while (--p >= buffer) {
      /* ... */
  }
I'm not sure if this would technically be undefined behavior, though, as the C standard only explicitly permits computing a pointer one past the end of the array, and other out-of-bounds computations are undefined, IIRC. In practice, I don't think any compiler would miscompile this.


Actually, this isn't true. The loop ends as soon as i == INT_MAX. It would only be endless if the loop condition were "i <= end" and end were equal to INT_MAX.


I'm not a C language lawyer, but I'd expect C to have a rule that calculating the one-past pointer will not overflow within an array object. So malloc would not be allowed to return such an allocation and this would be a bug in the caller, not in this function.


Yes, indeed it does. It's mostly ignored by most implementations but technically e.g. on architectures with 16-bit address space 0xFFFF isn't allowed to be part of an object (which makes 0x0000 an obvious choice for NULL).


am i reading this wrong? the code has a bug*, but the bug is

      unsigned char *pchEnd = pch + size;
      while (pch < pchEnd)
that if pch+size overflows (unsigned, from a large address to a small address), then the while loop will be skipped entirely.

*depending if you think the compiler would ever allocate so as to put you into this position, for example usually the stack is at the top with the heap underneath it so stack overflow would be your risk, not address overflow.


You're right. I didn't think it through enough. If pchEnd doesn't overflow, then pch always exits the loop equal to pchEnd (assuming no breaks or returns). If it does overflow, the loop never starts. There is no case in which it goes into an infinite loop.

I (and Steve Maguire) had assumed that if pch were 0xFFFFFFFF - size and pchEnd were 0xFFFFFFFF, then it would run into an infinite loop, but it won't; neither pointer will overflow in that case.

It would only run into an infinite loop if you wrote something like this:

  pchEnd = pch + size - 1;
  while (pch <= pchEnd)
where pch + size is 0 (due to overflow), thus pch + size - 1 is the maximum possible pointer.


> The surprises are always the language's raw machinery showing through in unpleasant ways, and never a delightful bonus feature the designers added for you.

Not always. It's rare, but eg `o[objects].up[objects].t[textures]` was definitely a delightful bonus feature (where `objects` and `textures` are global arrays).


It's the "unsurprising" part in "unsurprising & horrifying" - if you manipulate raw pointers incorrectly, of course it will crash or be vulnerable, to the surprise of no one by design.


This rings true the more I think about it. Any large code base gets there with size, but with C one gets there pretty reliably after a certain heft of code.


That being said, while C's surprises are certainly fewer than in other languages, it still has a few surprising corner cases contrary to its usual "portable assembly" character (without compiler optimization): implicit type promotion in expressions with mixed data types, potential malloc() inside printf(), the possibility of Duff's device, and the like. Sun engineer Peter van der Linden wrote a book, Expert C Programming: Deep C Secrets, that explores these topics.


The Undefined Behaviour In The Walls, slowly driving you mad even if your program runs perfectly.


C++!


Only old C++, new C++ creeps up the y axis.


C++ occupies a region of the space rather than a singular point, since old C++ is part of new C++.


I’d say C++ fills the space. It can be as dumb or as clever as you want it. It’s horrifying in a third dimension.


Interesting perspective. It surprises you in exactly how, but not so much in whether it will. (Horrify you, that is. But it also delights sometimes. We need more dimensions?)


To put it differently, the new C++ is delightfully horrific at times.


Java 4, Java 6, maybe even Java 8.


COBOL?


It's just the languages everyone knows are bad.


The issue I have with the language is that I went from the top left of that graph to fairly far on the right, vacillating between the top or bottom. That happened for me long before it became Raku.

I clearly remember reading through the synopsis and exegesis documents around fifteen years ago and being excited for what was to come, but I also remember the first bits that gave me pause. Like unspace[1].

I also clearly remember when it started dawning on me that Perl 6 (as it was still called at the time) would never see any real adoption by people who were interested in other people working with them on things, whether that be some open source project or at a company with coworkers. It was when I was reading through the advent calendar exercises for some year and realized that while I couldn't recall half the features used to solve the problems, I had been exposed to them all multiple times before. The reason I couldn't remember them was that there were so many ways to solve the problems that just keeping up with the features and syntax felt overwhelming, and that unless you used a very strict subset of the language, it would be near impossible to limit the effort required to understand what you read from others, or even from yourself some time previously.

To many detractors of Perl, this won't be an unfamiliar complaint, but I've always felt that while Perl (5) projects could easily devolve into a mess if some care wasn't taken, it wasn't that hard to exercise some restraint and still take advantage of the extra capabilities when more power was needed. Put another way, while there was definitely more than one way to do it, it didn't feel like there were tens of vastly different ways of doing it that might have been sourced from random inclusions anywhere in the codebase because of numerous ways to insert entry points into every block, leaving you scratching your head when you encountered odd behavior.

Now, it's not a problem for a language to want to reside squarely in the "this is for screwing around and trying any feature you can imagine and how they interact together", but that's not necessarily great when you are trying to accomplish something with someone else and you need to have a shared base of understanding to work from. The real problem is that many, many people in the Perl6/Raku community obviously really hoped for it to take over for Perl 5 or at least be useful for projects which might have multiple people working on them, whether they be through github or for some commercial work entity with coworkers, so they could spread its use and have a reason to use it on a regular basis. I think once it became clear that it wouldn't serve that purpose as well, many people lost interest. I know that was the case for me. I was an avid lurker on the project for well over a decade, but eventually I just couldn't convince myself it was a good idea to use the language for anything other than small test programs to satisfy my own curiosity, and eventually not even there, since if I didn't already have it installed the overhead for a small test was quite high.

In the end I guess Raku found its niche, and it's doing okay where it is (I assume, I've fallen so out of date I'm not sure), but it's sad that it seems they had to shed a lot of their original community (as someone who identifies as part of their original community) to do so, whether that's through any concerted effort on their own part or not. That is, I don't think Raku changed to be what it is, I think most people eventually realized what the bundle of features that was originally proposed really means in reality, and realized it didn't fit what they actually wanted.

1: https://design.raku.org/S02.html#Unspaces


You just need to start using it. Raku is here and ready for many practical tasks. And you don't have to keep up with all of Raku's features; you can just start using it for something ... )


Perl used to bandy about "TMTOWTDI" (Tim Towdy) incessantly in those days of Perl4/5.

There's More Than One Way To Do It


Yes, and it even had a way to pronounce it, I'd read:

  timtowdy

which I somehow did not like, but shrug.

And then Python got on the bandywagon :) with:

There should only be one, preferably obvious, way to do it

or such, which, in later years, seems to be not bandied about as much.


I had the wording of the Python slogan slightly wrong, above, and also remembered where I had read it - in the Zen of Python. It is principle 13 there.

https://en.m.wikipedia.org/wiki/Zen_of_Python

There should be one-- and preferably only one --obvious way to do it.
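
For what it's worth, that slogan ships with CPython itself: `import this` prints the whole list, and the module stores the text rot13-encoded in its `s` attribute, so it can be inspected programmatically:

```python
import codecs
import this  # importing the module prints the Zen of Python

# The aphorisms live rot13-encoded in this.s.
zen = codecs.decode(this.s, "rot13")
print([line for line in zen.splitlines() if "obvious" in line][0])
# There should be one-- and preferably only one --obvious way to do it.
```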


Is this a measurement of the entire language or of specific features? Because if it's of the entire thing, I can see the results not being good (scattered across the quadrants).

I do agree, though, that if a language surprises me, it should land in the left quadrant. And if a language doesn't surprise me, it should delight as well.

I don't know if I'd call the people who made / use Brainfuck sadists so much as very geeky / nerdy.


Hoon occupies that top-left quadrant as well, as long as you don't mistake your surprise for horror - https://developers.urbit.org/guides/core/hoon-school . Surprisingly powerful.


Hoon is in the top right quadrant when you begin to understand the performance characteristics and what it would take to improve them. And the type system is pitiful.


It's what you get when a programming language is designed to support a political philosophy, and not a technical one ;)


Raku looks very horrifying if you want to actually do anything practical.

But that's some incredibly clever stuff! I wouldn't want to use it, for like, anything, but if there was a programming based game or code golf contest or something it would probably be a great choice.

And the original article talks about math, and I have no real experience actually doing nontrivial math IRL, so perhaps it has some useful properties for people who do.


> Raku looks very horrifying if you want to actually do anything practical.

Why so? Raku is practical; I have solid experience of doing a lot of practical things with it. It has all the batteries-included things - type checking, overloading, named/typed function parameters, OOP, functional programming - what else do you need? Python seems awkward to me in comparison with Raku, given all those features ...


Thank you for your effort with the ASCII art. Must have taken some time.


Why would you choose anything on the right, surprising or otherwise?


When the consequences of trying to be accommodating in as many ways as possible force you into a horrifying conclusion that rarely appears in practice. Turing complete type systems are an example that could be seen this way.


Raku is interesting as a language indeed.

I can't get over some of the idioms; they just don't map nicely in my head. For the same reason, AppleScript struck me as weird: it tries to use natural language on one hand (e.g. `my`, `say`, `sub`, `gather`) alongside oddities like `@` being used for (assuming?) scoped iterations or declaring modules[0], and some other byzantine-from-an-outsider syntax decisions. For instance, I can logic this out, mostly, but it would not feel intuitive to me to "discover":

```

    my @bottles = (flat ((99...2) X~ ' bottles'),
                   '1 bottle',
                   'no more bottles',
                   '99 bottles');

    my @actions = (flat 'Take one down and pass it around' xx 99,
                   'Go to the store and buy some more');

    for flat @bottles Z @actions Z @bottles[1..*] {
        say "$^a of beer on the wall, $^a of beer.
    $^b, $^c of beer on the wall.\n".tc;
    }
```

These idioms don't click for me intuitively. It's a very symbol-heavy language, and the symbols often appear to be overloaded by context.

All to say, it's okay! Different languages aren't a bad thing. However, I can't say that I'd use Raku even if it was the best fit for something, like, say, a natural language parser (and based on all evidence, Perl/Raku really is good at building parsers)

[0]: https://examples.raku.org/categories/module-management/Fletc...


Let me guess, you were never a Perl programmer? Trust me, if you have a Perl background, a lot of that syntax looks pretty familiar (the @ sigil in particular denotes an array).


Yes. Though that's my point, to me it doesn't feel the most welcoming to others who know another language (or languages).

By all means though, if folks like it, carry on. I am sure Raku as a runtime is well engineered, I respect that.

I don't know what lineage Raku shares other than Perl, but it seems to be on its own plane if you will, and that's cool.


I think all programming languages look alien if you’re unfamiliar with their syntax.

Having come from a Pascal background, C felt weird with all of its curly braces. Then after getting familiarity with C and its ilk, Python felt weird. In fact one of the reasons I was reluctant to learn Python was because I was good enough at Perl that I didn’t see the need to learn Python. I had a similar repulsion to LISP too, until I learned s-expressions and now I can see beauty in LISPs too.

Bar esoteric languages, whose goals are typically different from a general purpose language's, most languages are designed to be useful. So once you spend a little time in one, you do start to appreciate some of the syntactical quirks you once perceived as ugliness

…or at least that’s been my experience with most languages (and I’ve learned a lot of programming languages over the years)


Calculus also doesn't feel welcoming for accountants.


I actually feel the same about bash


Which isn't surprising, because Perl was originally designed to be a replacement for shell as a scripting language. So, it borrowed lots of stuff from shell scripting, and if you've spent lots of time in bash, awk, sed, etc, then elements of Perl (and Raku) will look very familiar.


My favorite feature of Raku: Integer division and decimal literals both return `Rat`, a rational fraction type. Even though everyone knows floats suck, nobody is actually moving away from them except Raku. If you want a float literal you specifically have to use scientific notation.
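A quick illustration of that behaviour (a small sketch; the output comments reflect my understanding of Raku's Numeric types):

```raku
say 0.1 + 0.2 == 0.3;   # True - decimal literals are Rats, no float error
say (0.1).WHAT;         # (Rat)
say (1/3).nude;         # (1 3) - numerator and denominator
say (1e-1).WHAT;        # (Num) - scientific notation gives you a float
```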


There are older languages just waiting for the people who dislike IEEE 754 floats to notice them too, like Common Lisp and Scheme. They have a numeric tower which includes both rational and complex rational numbers:

    > (+ 5/3 4/5)
    37/15
    > (expt #C(0 1) 2)
    -1
    > (expt #C(3/2 1/2) 2)
    #C(2 3/2)
And of course they all support arbitrary precision numbers as well:

    > (complex (fact 10) (fact 20))
    #C(3628800 2432902008176640000)
    > (/ (complex (fact 10) (fact 20)) 29)
    #C(3628800/29 2432902008176640000/29)
Composition is a wonderful thing.


It's actually an awful misfeature, because those Rats will automatically turn into floats when the rational representation gets too big:

  > WHAT 1/10
  (Rat)
  > WHAT 1/100000000000000000000
  (Num)
(There is FatRat, which does not 'promote' like this, but it is not the default.)


Newer versions of Rakudo allow you to tune this behaviour dynamically through the $*RAT-OVERFLOW dynamic variable (which defaults to Num for the described behaviour).

See: https://docs.raku.org/language/variables#$*RAT-OVERFLOW*


I disagree that graceful degradation from Rat to Num (ie double) when Real numbers over- or under-flow is a misfeature.

We could (and have) debate whether FatRat should be the default, but imo the right expectation for an untyped language should be that very large or small numbers are represented with a floating point representation that (i) uses the FPU that's right there and (ii) sacrifices precision in the mantissa for accuracy.

Since Raku has (gradual) types, you can easily specify what you want and throw an error.


AFAIU, it's not only very large or small numbers that are affected, but any rational number with a denominator larger than 2^64. For most applications it's completely unpredictable when the switch to Num happens.


Nearly, the docs say "To prevent the numerator and denominator from becoming pathologically large, the denominator is limited to 64 bit storage." (https://docs.raku.org/type/Rat)

Please bear in mind that in raku (like perl and other untyped scripting languages) it is normal to say:

    my $x = 1;
    say 1 + $x;    # 2
    say 'a' ~ $x;  # a1

My point is that when the type is automatically inferred/coerced like this, it is very natural that small/whole Numerics are Int, that medium/fractional/decimal Numerics are Rat and that large/exponential Numerics are Num (ie double). And that you can freely perform maths operations mixing the various Numeric types at will.

    my $y = 1;
    say 1 + $y;     # 2     (Int)
    say 1.1 + $y;   # 2.1   (Rat)
    say 1e27 + $y;  # 1e27  (Num)

And, in raku, if you want to control the type, then just use the type system like this.

    my FatRat $r;

I also think from a 2nd order point of view that a denominator of 2^64 means you are dealing with quite a small number (5e-20 ish), although admittedly that is a matter of taste and machine performance. It makes most sense in my view to go with Rat (which is stored as a uint64 for the denominator and an Int for the numerator).

That way (i) you get to use all those transistors that you bought for your FPU and (ii) you do not get a surprise as Rat operations perform slowly without warning.

--- And yes - @lizmat has pointed out the various pragmas to let you control the behaviour you want if you disagree.


`INIT $*RAT-OVERFLOW = CX::Warn` will produce a warning whenever a switch to Num happens.

`INIT $*RAT-OVERFLOW = Exception` will throw an exception whenever a switch to Num would happen.

If you want to define your own behaviour, you can. For instance:

    class ZeroOrInf {
        method UPGRADE-RAT(Int $nu, Int $de) { $nu > $de ?? Inf !! 0 }
    }
    INIT $*RAT-OVERFLOW = ZeroOrInf;

would either convert the value to `Inf` if too large, or to `0` if too small.


And for 1.5 years now, you can have the FatRat behaviour by simply adding `INIT $*RAT-OVERFLOW = FatRat` to your program.


Rational numbers do not overflow or underflow. The only reason for having a type with this degenerate behaviour is to screw over the unsuspecting user who expects the language to protect them because they saw it using rational number representations in some contexts.


Ha, it's like best-effort typing. The interpreter gives you a precise rational if it can, and it throws you a float if it can't. I suppose that, in a language as dynamic as Raku, the idea is that you should never need to keep track of types anyhow, but this might be nicer if this value ends up being your user-facing output, depending on the application.


There is no reason why it could not continue to use the exact representation and operations if it wished (and that's what FatRat does). It's just an ill-conceived concession to performance.


and yet, raku is the first serious attempt to unify Int-Rat-Num-Complex types into a cooperative Numeric space --- which I think is a good design given its untyped default approach


Common lisp and scheme have had proper numeric towers without this problem for decades.


ok - but raku has a reinvention of Numeric that is less esoteric and more practical ... what you mention is that an Int isa Rat isa Real isa Complex isa Numeric ... so in Scheme or Lisp the numeric tower class model is one of subsetting features as you go down to an Int, which is counter to the reality in your computer that an Int is just a 64-bit register

in raku, for example, an Int is not an isa (grand)child of Complex (and so doesn't carry that overload) ... but yes it is a Real

so raku separates Real from Complex and Int from Rat, and you can go e.g. `my $x = 1 ~~ Rat; # False`, and this is aligned with literals - 1 vs 1.1

and then you play nice if things overflow

see https://docs.raku.org/assets/typegraphs/Numeric.svg


> My favorite feature of Raku: Integer division and decimal literals both return `Rat`, a rational fraction type. Even though everyone knows floats suck, nobody is actually moving away from them except Raku.

Scheme’s numeric tower has handled exact expressions with exact representations correctly for decades, so I guess it's true that it is not “moving away from” unnecessary and error-prone use of inexact floats where not explicitly requested, but only because it was never there to move away from.


Racket begs to differ. Also does things right.

  > (/ 1.0 3.0)
  0.3333333333333333
  > (/ 1 3)
  1/3
  > (- (+ 0.1 0.2) 0.3)
  5.551115123125783e-17


I'm not sure what I think about that. I know when to use decimal/rational types vs floats, but my own Python code has a whole lot more float() calls than Decimal()s. Floats are almost always what I want unless I'm directly working with money.


Are they almost always what you want? Like, what kind of code do you work on? I personally find that I need floats quite rarely and rationals are usually just fine when I write Scheme. And when I write Python, floats also don't come up that often, except in places where I would rather have exact numbers than inexact ones, always making me doubt whether what I calculate there might have too big an error in it.


It's mostly in calculating percentages, measuring timing, computing the interval to wait to satisfy a rate limit, that sort of thing. None of those require the extra overhead of an exact datatype in the contexts I'm using them.


It's funny that you're talking about this in a Python context, because pretty much every line of Python already carries 50-100x overhead. I take your point that Python would probably slow this type of thing down even more than it should, but still... Python is the exact type of language where this type of generalization to something slower but "more correct" makes sense in my opinion.

For what it's worth I agree with you when it comes to languages where 50+x overhead isn't an ever-present fact; you should generally have to opt in to using decimal types and the like.


Raku contains so many cool things.



Love it:

> The following built-in types inherit from Cool: Array Bool Complex Cool Duration Map FatRat...


Examples?

Also, related and recent:

Ask HN: Are you using Raku? Pros / cons?

https://news.ycombinator.com/item?id=36922692

Question inspired by starting to watch this: Raku for Beginners, at TPRC 2023:

https://youtu.be/eb-j1rxs7sc


Floats are fine, people just don't want to understand computers it seems


When I was looking at the language, I didn't find the documentation "really poor". In fact I was impressed at how much of a one-stop-shop the official docs site was for both conceptual docs and API docs.

https://docs.raku.org/

Look at the following page as a jumping off point for conceptual stuff. Absolutely top class:

https://docs.raku.org/language


As someone who has been doing Raku for several years now, I will say the documentation is both great and also lacking. Most of what's there is well written and includes useful example code, but I occasionally run across things that either haven't been documented at all or have only been documented for simple cases.

The worst offender in my experience is the module system. There are docs around it, but they would benefit from a rewrite with a less "technical" audience in mind. The module/package distinction is difficult to wrap your head around just from reading the Modules page. The fact that the declared namespace of what you import can be different from the namespace you used to import it, but the directory structure has to match the namespace for the compiler to find it... it's useful and makes sense in a strange way, but I had to learn that by fiddling around.

  # in Name/Space/Thing.rakumod
  unit class Thing;
  …

  # in UseCode.rakumod
  use Name::Space::Thing;
  my Thing $thing .= new: …;


That's due to the Perl culture. Their FAQs and man pages were top class due to the kind of programmers who write Perl; funny, pithy and quirky.


The Perl man pages were always excellent.


Yes.

I remember downloading them, as multiple PDF files, one for each broad subsection (like perlsub, perlvar, etc.) under the main TOC or so, over a somewhat slow Internet cafe connection and then reading many of them.

Not only excellent content, but excellent typography and design too, even though they were mainly text. I remember, even now, thinking that the PDFs looked really good.


>Look at the following page as a jumping off point for conceptual stuff. Absolutely top class:

>https://docs.raku.org/language

Thanks for that.


> Raku has no qualms about using Unicode operators. You check set membership with ∈. There's also ∉, ∋, and ∌.

Something to note is that there are ASCII equivalents[0] for every cool Unicode operator found in Raku. For example, the equivalents for ∈, ∉, ∋, ∌ are (elem), !(elem), (cont), !(cont).

[0] https://docs.raku.org/language/unicode_ascii#Other_acceptabl...
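For instance, these two spellings are interchangeable (a small sketch, per the linked docs):

```raku
my @xs = 1, 2, 3;
say 2 ∈ @xs;        # True
say 2 (elem) @xs;   # True - ASCII spelling of the same operator
say 5 ∉ @xs;        # True
say @xs ∋ 3;        # True
```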


I would definitely use the ASCII versions.


Why? It's less readable. I like how JetBrains in recent releases renders ">=" as a single glyph. I mean maybe for typing, I see that as a benefit, but I'd hope my IDE would replace them with the proper Unicode character.


Isn't this still the usual critique of a language before knowing much about the language? We were used to this with Perl's "line noise". Now we get

> Raku has no qualms about using Unicode operators.

You have the option. I have used this option because I want my code to be compact and expressive on screen. A good fit for careful, considered, creative unicode usage.

Or the ever popular (Jeez that's new):

> I hate the sigil thing

For me I have been using Raku/ Perl 6 because I loved the expressiveness of Perl's swiss army chainsaw. Raku is Perl to the power of Perl. It's cleaned up. It's expressive. Some massive piles of features have been built up from the good old Perl. It's great. The documentation is good - and does need some continuing work. (The comparison with Perl is tough. Perl documentation was/is just amazing.)


How do you type the non-ASCII operators? Special keyboard layout? Your editor automatically converting some sequences? Raw Unicode escapes? This seems complicated and pointless to me, just to save a couple of characters.


Input method: I see this as fundamentally a keyboard input method that needs to be resolved for your setup overall. Not a Raku issue. For now I'm settled on three alternate methods (under linux). The Compose key method, mostly for accented characters. The editor's code point method, in vim Ctrl-v u or U (but you need to know the unicode code point and I don't exactly have them memorized). Or copy and paste from some web character chart. Since this is programming and not texting emoji, I'd rather have a specific, chosen code point so I tend to use the editor's code point input. (But wait, that's not all, you need enough unicode support in your programming font - not very easy on linux.)

What's the point: Absolutely not saving typing. The point is to have a more visually distinctive notation. A math blackboard would be full of that. For example, in the code right in front of me I have a U+25C0 e2 97 80 BLACK LEFT-POINTING TRIANGLE as a custom infix operator for two specific data types. I could have overloaded an existing one... but no, the existing one still exists and it should be extra clear which one is used. When that code is in front of me, it's perfectly clear which is which. And now that the symbol is there in the code, it can just be copy-pasted. (For good measure, the unicode code point is specified in the operator's definition - but that's not necessary in day to day editing.)


When I last looked at this -- admittedly years ago at this point -- all Unicode operators had an ASCII alternative. So maybe you could enter the code as Ascii and prettify it into Unicode?


I sometimes idly wondered what would a programming language look like if it were filled to the brims with syntactic sugar. Now I know. It has this "it's horrifying but in a weirdly enticing kind of way, I can't make myself look away, show me more, please" kind of feeling.


You might like noulith, a hobby language written by the person who won several of the recent Advent of Code events.

The about on GitHub reads "*slaps roof of [programming language]* this bad boy can fit so much [syntax sugar] into it."

They even used the language in the most recent Advent of Code event and won. https://github.com/betaveros/noulith


Wait until you learn about Raku's Grammars.
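For anyone who hasn't seen them, a minimal grammar looks something like this (grammar and token names made up):

```raku
grammar KeyValue {
    token TOP   { <key> '=' <value> }
    token key   { \w+ }
    token value { \S+ }
}

my $m = KeyValue.parse('answer=42');
say ~$m<key>;    # answer
say ~$m<value>;  # 42
```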


I have the same reaction every time I look at modern C++.


It's worth remembering that Raku originated as "Perl 6" and a lot of design philosophies are derived from Perlisms in the first place.

The author seems to be unfamiliar with both Perl and Raku's history, given that he immediately comments on the x operator for repeating strings. Which has been the case for many decades of Perl. :)


FTA: "The regex syntax isn't backwards compatible with Perl 5. For thirty years languages followed the PCRE "standard" and Perl 6 just… threw it all away."

so it looks like the author has at least passing familiarity.


Which also gives away some gaps in the author's knowledge.

Perl wasn't following the PCRE standard. It was the PCRE standard. Everything else was following it. Usually with some piece missing--far more implementations should have the /x modifier.

The Perl 6 design process didn't throw it away after 30 years. It threw it away after less than 10 years. There was already widespread knowledge that Perl had gone beyond "regular expressions", and IIRC, that was even before it had implemented recursive expressions. Perl 6 would be "patterns", and they would be good enough to be used as a full grammar. The language itself could be parsed in it, and Raku sometimes is, depending on the implementation.


> For thirty years languages followed the PCRE "standard"

It says that (most) other languages followed the standard set by Perl 5, not that Perl followed the PCRE standard


Technically Perl 5 is parsable using Perl 5 regexes: https://www.youtube.com/watch?v=e1T7WbKox6s

But it is also true that Raku’s grammars were designed from the start to write real parsers in, whereas Perl 5’s regexes sort of gradually got that way.


> It was the PCRE standard

haha true

> The language itself could be parsed in it, and Raku sometimes is, depending on the implementation.

That's fantastic, didn't realize.


Except that Raku does support PCRE-style regexes, you just have to add the correct adverb.


The first footnote says that Raku was formerly known as Perl 6.


The "Perl6/Raku is nothing like Perl 5" hype has been hugely overstated. It wasn't called Perl 6 for nothing and it was developed by the same Perl team more or less. For anyone who worked with Perl 5 significantly the legacy in Perl6/Raku is obvious. Raku's whole object model is a slightly more capable version of Moose.pm, a Perl 5 module on CPAN.


From what I heard, it half inspired Moose actually.


Are you sure it wasn't the Meta Object Protocol (MOP) rather than Moose? "A MOP for Perl" inspired endless debate but never amounted to much. In the end I think the Perl 5 community felt it was best left to Perl 6 but I could be wrong as this was all about 20 years ago.


I don't know, this is "second-hand intel" for me, pretty sure I read it in a very old article from someone of the likes of Carl Mäsak, Moritz Lenz or another early-day Rakudo contributor...


Let's face it, the Perl 5/PCRE regex syntax is atrocious. The only reason it exists is that (? was a syntax error in earlier regex syntaxes, so it could be redefined to mean anything.

Raku is an attempt to design a sane regular expression language from first principles, now that we know what we want them to be able to express. The alternative is being stuck with (?:this|(?>or that)) for the next 30 years.


Awful? It is inscrutable black magic and it is wonderful once you get it.

I haven't touched perl in years but I still find myself writing regex often!


Yes, awful. It takes a crack in the wall and drives a bus through it, at a significant penalty to readability. The compatibility advantage doesn't matter when you're evaluating syntax in a vacuum.


Not everything has to be super easy-to-use.


Nobody is suggesting super easy to use.

But sometimes interfaces could be easier with no loss of ability, because they grew over time and nobody ever fixed them.


It is terse for a reason. You're ordering a parser around in 10 characters. Which makes them fit everywhere! A longer more readable version starts taking up multiple lines and looking like shitty react code. Or you're just using your language's string primitives. But there is a reason that nobody does that any more -- who wants to read 50 lines of string parsing code when a couple dozen characters of regex will do the same thing?

Let us wizards have our magic!


The particular syntax being used an example is more verbose than it needs to be, so I don't know why you're making this argument.


Also various langages allow formatting & commenting regexes and that’s quite useful. Named groups as well.


>Let's face it, the Perl 5/PCRE regex syntax is atrocious.

Agreed, but god-damn is it useful.


Raku’s new grammar syntax is much less awful and just as amazingly useful.


That's cool that's cool, but the problem is getting people to use Raku.


Or even just Raku’s grammars. Perhaps we need an RCRE library :)


Perhaps. But that will become very difficult indeed.

Because in Raku, grammars are just a different way to write code. Grammars are really just specialized classes. And tokens / rules / regexes are just specialized methods. It all compiles down to bytecode, rather than something you can feed a statemachine.

This has several advantages: if a grammar doesn't provide functionality you need, you can write it in Raku code as part of the grammar.

It also means that when you improve execution of the bytecode, you will also improve the performance of grammars and regexes.

Finally: Raku grammars are very powerful. They are used to parse the Raku language itself. Which is a testament to its power. But also brings a whole set of challenges for the core developers :-)


I know all of that :)

I admit that I was partly being facetious, but I also don’t think it is entirely impossible. For example, a hypothetical RCRE could provide hash tables for storing named regexes, or could take function pointers for looking up names so that the library user could implement their own storage for them. And so on, and so forth.

I think that a hypothetical RCRE could be as influential as PCRE was, if someone could find the time to do it.

On the other hand, I am very weary of C these days. A Rust crate with procedural macros to provide compile–time grammar compilation would be a lot more fun. If I had some funding I could easily see spending a year or three on that.


I am definitely a gremlin, in a specific sense: I love tools that are quirky and complicated in a way that makes me more productive.

But this idea of programming in The Large vs The Small: I disagree with that comparison. A less wise person might assume it means Raku is a bad language for work in The Large. On the contrary, it's probably just as good as any other language at it, maybe better. The problem, as it was with other gremliny languages, is you need wisdom to use it.

Simplest example: using $x instead of @x. Any person who has used a similar language "enough" would never confuse these. In fact, the sigils actually make your life easier when reading the code, because you now know the type (well, the simple type, or with Raku, the interface) just by looking at that variable anywhere it's used. Now you don't have to go look it up, and you can do useful things with the same variable namespace using different sigils. It looks weird, and it's extra characters you shouldn't need, and you need to know what it means to use it, but it can make life easier. (https://www.perl.com/article/on-sigils/)
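A tiny sketch of the idea (example values made up):

```raku
my $name  = 'Ada';            # $ - a single value
my @langs = <Raku Perl>;      # @ - something positional
my %born  = Ada => 1815;      # % - something associative
say @langs[0];                # Raku
say %born<Ada>;               # 1815
```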

The problem comes for people without wisdom. The people who don't really know what they're doing. They will probably find this gremlin language a living nightmare. These people need a lot of bowling-bumpers and floaties and kevlar gloves and hard hats and GPS. Somebody let them into a store full of sharp objects and they need a lot of padding to navigate it without killing themselves.

That doesn't mean you can't build a skyscraper with the gremlin language. It just means the accident-prone unwise non-gremlins can't build a skyscraper with it. But wise gremlins can.


The notion/mindset that "This tool can only be used if you are good enough" is an enormous design smell in my opinion.

Tooling exist to help you and your team do the required work as good or fast as possible.

If you are using tools to gatekeep juniors in your field, or tools that are unnecessarily complex in order to stroke your own ego, then the tool is more of a weapon against the rest of your organisation than something helpful.


About that, I loved that perl could be learned in layers. You learned the basics while using the language, and could discover one layer after another and Perl, mostly, kept you going and behaved as you expected. (By contrast, as I see it, to C++).

Except that no: Many people were not even paying attention to that. And the perl sigil system - which was taught pretty much from the start - was still one of the top objections! It wasn't gatekeeping. It was people refusing to learn the tools they were meant to use.


Have to agree with this one, and also, there is something that I think is easy to forget until you actually face code that has been worked on by dozens of people of completely random backgrounds, over decades: your wisdom doesn't necessarily coincide with someone else's wisdom. Context switching across people - and especially people who barely have anything in common - can be really troublesome. You have to know everything that somebody else you work with has ever used.


Tools aren't all made the same. Some are made equally well for novices and experts, and some require expertise. It's not a "design smell". Haven't you ever used a tool that wasn't made for toddlers?


"If you define a MAIN function, any parameters you give it will be automatically turned into CLI flags."

This is actually cool
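For reference, it looks something like this (script and parameter names made up):

```raku
# save as greet.raku, then run:  raku greet.raku --times=2 World
sub MAIN(Str $name, Int :$times = 1) {
    say "Hello, $name!" for ^$times;
}
```

Calling it with the wrong arguments (or with --help) prints an auto-generated usage message.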


What other languages do this? I love it



Practically any language that supports reflection has a library for that.



nushell[0] has this built-in. It's one of my favourite things about it.

Python has typer[1] to do this with type hints. It's limited and gets ugly quickly, but I love it for simple scripts.

[0] nushell.sh

[1] typer.tiangolo.com


I kept reading the post and had a growing feeling of “oh my god, it's like Perl but MORE.” Only when I got to the end did I see the footnote that it's indeed Perl 6


pyret has kebab-case and infix subtraction too, but it has an easy time doing so because it made the decision that infix operators have to be surrounded by whitespace. i guess that wouldn't really be in keeping with raku's anything-goes philosophy though.


I for one welcome our Raku wielding overlords.


There's literally dozens of them!


Raku? Nice, but way too much magic, trying to be too smart, and ending up being too late and too slow. And I'm happily doing Perl 5 for $work, among other things. Going to write my new/other stuff in Ruby. Then I could use Crystal if I ever want static typing, a compiler and binaries. Sane tradeoff between magic and a programming language out of Discworld.


Who is the target user base for Raku? I remember writing some fun scripts in Perl more than a decade ago when I was still in uni. I really liked the language. Is there any benefit for me as a data analyst to learn Raku over another language like Julia?


We see so many language critiques or preferences where the visuals seem to have been a main problem. "line noise" "sigils!!!" "too many parentheses" "unicode!!!". Which to me denote languages judged long before being looked at with any care.

Should this be a fundamental lesson for language designers? Should languages come with a converter to switch back and forth between the "calm, proper english version" and the "compact, expert mode" visual?

(Without mentioning the translated programming languages like BASIC in french and such things which were discussed on HN not long ago.)


This is a fascinating language! I am so curious: how does the “anything goes” matcher work? I struggle to imagine how one would implement that ~~ operator.


    $lhs ~~ $rhs
is mostly sugar for

    $rhs.ACCEPTS($lhs)
and then method resolution (including multimethod handling, of course) figures out which ACCEPTS method to call and that does ... whatever the author of the type implemented.

('mostly' because ~~ actually aliases $lhs as $_ and passes $_ as the argument to ACCEPTS)


You probably know it well but let me elaborate a bit more "for the sake of the audience":

- the left hand side is evaluated into a value (let's symbolize this value with $lhs)
- this value is passed into the expression on the right hand side as the $_ (topic) variable
- the expression on the right hand side gets evaluated, producing an $rhs value
- now, $rhs.ACCEPTS($lhs) is called

The funky thing is that the right hand side of this operator is an expression, not an evaluated value... it's like an invisible code block. The implementation calls this property of an operator "thunkiness", the expression on one side acting like a thunk rather than something that can be evaluated right away. This is akin to the short-circuiting behavior of && and ||.
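Which also means any class can opt into smartmatching by defining its own ACCEPTS (made-up example):

```raku
class Even {
    method ACCEPTS($n) { $n %% 2 }   # %% is the divisibility operator
}

say 4 ~~ Even.new;   # True
say 7 ~~ Even.new;   # False
```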


There were so many claims in the first couple of years after Perl6/Raku's initial release in December 2015 about how it had the potential to be optimised due to its superior design, but here we are nearly 8 years later and it still takes 10 times as long as Perl 5 to parse a log file with a regex.


Yes, and those claims might have even been real (also, Perl still lags behind on Unicode stuff, or so I heard?)... the thing we like much less to talk about is that there just needs to be somebody to make those optimizations happen. Work on the actual bytecode VMs (or rather just MoarVM at this point) has been desperately lacking.


Is Jonathan Worthington still active within the project? He was the original force behing optimisation if I remember.


Jonathan Worthington has been going on and off a lot in recent years. He still had one big optimization project that got completed in 2021 - the new dispatch mechanism - but I don't know about the benchmarks, and I doubt it would help with the sheer throughput of basic data processing.


I don't know if @hillelwayne is watching ... but the raku docs do explain how to use a Grammar to write a calculator https://docs.raku.org/syntax/Creating%20grammars


Cool article.

> This isn't weird, lots of languages have multiple dispatch. What is weird is that you can also dispatch based on a runtime predicate of the value.

Is it wrong to think of the predicate itself as a type here?
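For context, the feature in question is a `where` clause on a parameter, something like (sketch):

```raku
multi sub fact(0)                  { 1 }
multi sub fact(Int $n where * > 0) { $n * fact($n - 1) }

say fact(5);   # 120
```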


Wow I never see Perl on HN; are we finally gonna see a Perl revival?


Raku is what you get when you mix APL and LSD.


Speaking in general terms here to avoid NAXALT responses, but it’s been my experience that the Raku/Perl crowd enjoy being clever for clever’s sake and are more concerned about the language itself than the problems being solved. I can see advantages of this cleverness for small scripts, but extending a language whose roots are in a Bash replacement to large-scale project development seems DOA to me.


Well done. After spending some time with the Raku examples it made me go back and want to learn TLA+.


The language formerly known as Perl 6. I thought I'd heard of it somewhere.


Raku also has exact decimal math, eg. 0.1 + 0.2 + 0.3 == 0.6


TIL circumfix operators...


> Formerly known as Perl 6. ↩

I’m in desperate need of a message from Captain Obvious right now. I know nothing about perl


Perl was published by Larry Wall in 1987; Perl 5 appeared in 1994 and has been remarkably stable since then -- I have code from 1998 that still works great. It feels like a more powerful sed or awk, and has an enormous available online library at cpan.org. There are many books describing it; start with "Programming Perl" by Larry Wall; get advice on elegance from "Modern Perl" by Chromatic; if you want to build an Internet component smaller than an available module (or get guidance on the implementation thereof), "Network Programming with Perl" by Lincoln Stein is superb.


It's just Perl 6 renamed.



