Passerine: A small extensible language designed for concise expression (passerine.io)
227 points by isaacimagine on Feb 21, 2021 | 111 comments



Despite new language fatigue, this does look interesting.

A few things that stand out:

1. use of Exceptions. These get so much hate these days, but the articulation here of the reasons for allowing them is better than most of the Hacker News comment threads about them. I might just link to this section of Passerine's docs the next time somebody tries to tell me how Result<> is the only thing any programming language needs.

2. The use of fibers/coroutines as built-in error boundaries. This is such a pleasant natural extension to the Erlang proposal I find myself wondering why so few of the languages I'm familiar with have taken the same approach. Presumably I need to get out more.

3. Not sure what to think about the left-associativity of function calls, nor the lack of explicit call syntax. One of the things that bothers me about Ruby is the extra work you have to go through to pass functions as arguments to other functions. Naively, lacking explicit call syntax seems to me like a premature optimization. I'd appreciate someone's perspective on these two (separate?) points.


I'm glad you found the language interesting! I actually started Passerine after reading Bob Nystrom's blog posts on concurrency and iteration (Iteration, Inside and Out). At the time, I was also reading through the thesis on Erlang, and I was thinking how neat it would be to combine Nystrom's thoughts on concurrency with Erlang's processes.

I think that I could answer 3, given I know why I did what I did ;P. So, passing a function as an argument to another function is pretty simple. If it's in a variable, say `bar`, `foo bar` will do. If it's anonymous, say an identity function like `i -> i`, you just need to group it and schloop it in: `foo (i -> i)` or `foo { i -> i }` will work.
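
For readers more at home in Rust, roughly the same idea looks like this (`apply` and `double` are just illustrative names, not part of Passerine):

    fn apply<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
        // call whatever function we were handed
        f(x)
    }

    fn double(x: i32) -> i32 { x * 2 }

    fn main() {
        assert_eq!(apply(double, 21), 42); // named function, like `foo bar`
        assert_eq!(apply(|i| i, 42), 42);  // anonymous, like `foo (i -> i)`
    }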

Thanks!


The author summarizes it nicely: "Passerine has roots in Scheme and ML-flavored languages — it's the culmination of everything I expect from a programming language, including the desire to keep everything as minimalistic yet concise as possible. At its core, Passerine is lambda-calculus with pattern-matching, structural types, fiber-based concurrency, and syntactic extension."


People say they hate exceptions, but what they actually hate is one of two subsets of exceptions: Java-esque checked exceptions and C++-style untyped (and heavyweight) exceptions. The Result type is an implementation of the much-wanted middle ground: explicitly typed but ergonomic exceptions. Its non-abrupt nature is, in my opinion, just a side effect.

By comparison, exceptions as a means to abruptly terminate the thread of computation are not something you can simply replace (unless you do total programming). Most languages with result types do have exceptions in this sense: ML `raise`, Haskell `error`, Rust `panic!`, etc. Passerine seems not much different. (Note: the whole error system is, as far as I can see, completely unimplemented.)
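
For concreteness, here's a minimal Rust sketch of the two mechanisms being contrasted (`parse_port` is just an illustrative example of mine):

    use std::num::ParseIntError;

    // Expected, recoverable failure: encoded in the return type.
    fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        s.trim().parse::<u16>()
    }

    fn main() {
        // The caller decides what an Err means here.
        match parse_port("8O80") {
            Ok(p) => println!("listening on {p}"),
            Err(e) => eprintln!("bad port: {e}"),
        }

        // A panic, like ML's `raise` or Haskell's `error`, abruptly
        // terminates the thread instead of returning a value at all.
        let empty: Vec<u8> = Vec::new();
        let _ = empty[0]; // out-of-bounds index: panics
    }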


Hey, author of this language here. I love Rust, but what I like the least about `panic!` is the lack of context it provides. I mean, sure, there's `RUST_BACKTRACE=1`, but that's not nearly as neat as a real traceback, with the appropriate context and data to help determine the root of the problem.

I'm still thinking through my design options, because I've been recently reading a lot about Algebraic Effects. I'd hate to have two non-orthogonal features in a minimal language, which is why I'm waiting until more of the language has solidified and I've reached a conclusion about Algeffs before implementing the whole error system.

Thanks!


Yeah, exceptions are not enough for fully reconstructing the problem! I'm not sure this can or should be fixed by changing exceptions, though. It is often the case that you need a deep trace to understand the problem, and mainstream tooling is not prepared for that (I wish rr were more widespread).

As for algebraic effects: I too have been keeping a keen eye on Koka and think that it might be the future. What I don't know is how to make effect types mostly invisible to users without sacrificing their power. I had been experimenting with dynamic scoping in my mind, but that feels unsatisfactory.


Resumable exceptions - ie, exceptions that don't unwind the stack when they are thrown - do let you examine the problem in situ.


I left this comment just a couple of days ago - I don't see any reasonable solution for this except (ha) exceptions:

https://news.ycombinator.com/item?id=26198607


Technically speaking, a cancellable sort requires non-local jumps, which includes exceptions but also many others (call/cc, delimited continuations, algebraic effects, ...). It can also be argued that the general idea of cancellation is an exception (as in futures). In any case, we are not dealing with something that strictly requires exceptions.


> which includes exceptions but also many others (call/cc, delimited continuations, algebraic effects, ...)

I think you might be missing my point? How would you use call/cc or the rest with arbitrary callers? Do you write everything in that manner all the time? Do you realistically expect others to?


> How would you use call/cc or the rest with arbitrary callers?

For example, with shift/reset operators (a form of delimited continuations) you can do something like this pseudo-JS code:

    const cancelled = reset {
        arr.sort((a, b) => {
            if (aborted()) {
                shift (k) {
                    // k is a continuation from shift to reset,
                    // but we want early return so we discard k
                    // and replace the reset expr with false.
                    return false;
                }
            }
            return compareTo(a, b);
        });
        // this will replace the reset expr if shift is not triggered.
        return true;
    };
Every example I've mentioned can be used to implement exceptions after all, so if you know something can be done with exceptions, you can do the same with call/cc and so on as well.
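
For comparison, here's a rough Rust analogue of the same early exit, done with unwinding (panic/catch_unwind) rather than shift/reset. It's only a sketch, and the `ABORTED` flag is a stand-in for whatever cancellation signal you actually have:

    use std::panic::{self, AssertUnwindSafe};
    use std::sync::atomic::{AtomicBool, Ordering};

    static ABORTED: AtomicBool = AtomicBool::new(false);

    // Returns false if the sort was cancelled part-way through.
    fn cancellable_sort(arr: &mut [i32]) -> bool {
        panic::catch_unwind(AssertUnwindSafe(|| {
            arr.sort_by(|a, b| {
                if ABORTED.load(Ordering::Relaxed) {
                    panic!("sort cancelled"); // the non-local jump out of sort_by
                }
                a.cmp(b)
            });
        }))
        .is_ok()
    }

    fn main() {
        panic::set_hook(Box::new(|_| {})); // silence the default panic message
        let mut data = vec![3, 1, 2];
        ABORTED.store(true, Ordering::Relaxed); // pretend someone hit "cancel"
        println!("sort completed: {}", cancellable_sort(&mut data));
    }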


This is basically a functional version of setjmp and longjmp right? Meaning it has the same problems? How do you ensure the caller's cleanups etc. still occur so you don't mess up the program state?


I'm not sure what you mean by "messing up the program state". Do you mean a memory error like a segfault, or `finally` blocks? My answer depends on which one:

Memory error: Instead of the register states of setjmp, call/cc and the others give you a well-behaved function representing the continuation. I'll spare you the details of what a continuation is, but basically it's possible to convert any code into code with the explicit continuation available as a function at any time (Continuation-Passing Style). So unlike setjmp/longjmp, you can be sure that everything will be a normal function call in the end. (The actual implementation can of course vary.)

`finally` block: You are kinda right. These constructs are used to implement more readable control structures like exceptions, and are cumbersome to use directly. There are things like `dynamic-wind` in the Scheme world, but it's true that if you aren't aware of them you can easily screw up. My point stands, though: what you need is a non-local jump, not exceptions proper. It just happens that the most widespread non-local jump construct is the exception.


I meant finally blocks... freeing memory, reverting changes, etc... which are things you'd use destructors for (and hence why you can't use setjmp/longjmp with them). If you don't do them then you mess up your program's state and hence have no way to ensure correct behavior afterward. This is why I mentioned arbitrary callers: you have no idea what callers have done or what they still need to do before returning. Heck, even sort() could have allocated memory that you'd be leaking, and not all destructors are constrained to resource leaks either.

I don't know what you consider "exceptions proper" (do you have to inherit from Exception? do you even have to pass along extra information?) but I don't think I agree. My point is actually precisely that what you want is not a non-local jump, but actual unwinding. Semantically, this is what exceptions are: synchronous dynamic unwinding. If you have a way to stop control flow and unwind arbitrary callers in a synchronous manner, what you have is exceptions, regardless of the particular syntax or other details. (e.g., it's beside my point you can "throw" extra information alongside this mechanism, or whether it's part of the language or a library, or whether you do the unwinding on the heap vs. the stack, or whether it's functional or procedural, or what have you.)

And my overall point is that this kind of dynamic unwinding is necessary for you to be able to do certain things like canceling routines in the middle in a safe and well-defined manner. (Especially since Result<>/Optional<>/whatever explicit localized mechanism is in fashion these days is clearly an insufficient replacement for exceptions.)


I'm in principle in agreement. You do need unwinding. Specifically, I meant an explicitly triggered unwinding by "exceptions proper". I just wanted to point out that unwinding is much broader than exceptions. (For example, generators will implicitly unwind upon their cleanup. Of course it requires the callee's cooperation, so I couldn't use it as an example.) I also don't buy the claim that explicit Result types are incompatible with unwinding---e.g. RAII works fine with result types.


i remember seeing and appreciating this one a few days ago :)


Oh cool! Happy to hear :)


Languages that do Result + panic, or languages that do checked + unchecked exceptions. Which is best?

IMO, I think "Result + panic" works better because it's a cleaner separation in the sense that handling panics is strongly discouraged. Whereas handling unchecked exceptions happens all the time, and leads to confusion.

Edit: To clarify, I would consider Go to be "Result + panic" even though it doesn't have a Result interface. It has a convention of returning a tuple of "value, err", which is handled by the caller and is essentially an informal Result.


"Panic" is a fancy name for unchecked exception. It can arise from any code, for example, by out-of-index accesses or arithmetic overflows. In the other words, if you want to allow, say, `arr[0]` without any checking, you should accept that it can abruptly terminate the current thread. (Pony is a rare example of the contrapositive.) I too agree that "Result + panic" is better for the very same reason, but it doesn't mean exceptions are gone.


Handling unchecked exceptions "happens all the time" because in certain languages and libraries, like Java, it looks like less work than tediously checking error conditions or letting the program die properly. Different kinds of software in a different language can fare very differently.


The function call syntax is old and well-established. It is used in Ocaml and Haskell, for example, but comes from older languages that influenced them.

Note that the problem in Ruby does not come from the syntax "f x" for function application, but rather from "f" being syntax to call f rather than just evaluating to f. This Ruby mistake is typically not present in functional languages that use "f x" syntax for calls. Typically in those languages a function cannot have zero arguments, and for functions with no useful arguments you pass a dummy argument called "()". So "f ()", commonly abbreviated to "f()", is the syntax for calling a function of "zero" arguments in those languages.


thanks, this is very helpful.

I think I can agree that special-casing functions that take no arguments is a reasonable approach (such things should be pretty rare), but I'm less sure I see the utility in not special-casing function call syntax in the first place.

Generally, I think of function calls as "special" because they redirect active execution of the program to a different context.

e.g., in:

  f a b
the space between the letter "f" and the letter "a" means "transfer control flow to f, passing a", whereas the space between the letter "a" and the letter "b" means, essentially, "also". In a butchered English sentence, "transfer control flow to f, passing a also b". This seems far more explicit when looking at ALGOL (and following)-style

  f(a b)
I'm curious what the argument for not special-casing general call syntax is, because I guess I've never run across a place where it was articulated!


Well, the "f x" notation for function calls comes from the tradition of functional programming languages, where functions calls form the bulk of the code. It makes sense to make the most common operation have the lightest syntax.

Also, I suspect there is a misunderstanding in the "f a b" case. In the languages that use this syntax, functions only have one argument, so that parses as "(f a) b". Both spaces mean the same thing: evaluate a function; first evaluate f with argument a, which returns a function f a, then evaluate this function f a with the argument b.

In these languages, what would be a two-argument function elsewhere is instead defined in either of these two fashions:

- As a function of a single argument which is a pair. So "f (a,b)" (which can be written as "f(a,b)" for short) calls f with a single argument, namely the pair (a,b).

- As a function returning a function that uses the second argument, the "f a b" style, which means "(f a) b".

These two forms are equivalent, and different functional languages tend to prefer one or the other (SML, Ocaml and F# tend to use "f(a,b)", while Haskell tends to use "f a b"). Passing from one form to the other is called currying in one direction (I always forget which one!) and uncurrying in the opposite direction.
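
If it helps, here are the two styles spelled out in Rust (which isn't curried by default, so the curried form is written as a function returning a closure):

    // Tupled form: one argument that happens to be a pair, "f (a, b)".
    fn add_tupled((a, b): (i32, i32)) -> i32 {
        a + b
    }

    // Curried form: a function returning a function, so "f a b" is "(f a) b".
    fn add_curried(a: i32) -> impl Fn(i32) -> i32 {
        move |b| a + b
    }

    fn main() {
        assert_eq!(add_tupled((2, 3)), 5);
        assert_eq!(add_curried(2)(3), 5);
        let add5 = add_curried(5); // partial application falls out for free
        assert_eq!(add5(7), 12);
    }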


yep, this is what i was missing. I'd forgotten that it was common to be strict about functions always doing partial application, such that `f a` calls a function creating a partially-applied function.

thank you for taking the time to clear this up for me!


Something interesting is partial application. Say you have a function that takes two arguments:

    add = a b -> a + b
This is really equivalent to:

    add = a -> b -> a + b
Which in turn is equivalent to:

    add = a -> {
        b -> { a + b }
    }
(for good measure). This means that the first call to add, when you pass a value `x`, returns a function (the `b -> { a + b }`) that closes over x:

    add_5 = add 5
    add_5 7  -- is 12
    add_5 -1 -- is 4
This also has a unique property: most function calls are just chains of identifiers and values:

    foo bar "Hello" baz
Which means, if we consider them as chains, we can match against those chains and apply transformations to them:

   syntax 'foo 'bar name 'baz { print name }
Which is the basis of Passerine's macro system! (And don't worry, if the form matches multiple macros or if you use an identifier defined in a macro, it'll let you know).

Hope this helps!


this is great, thank you.

In the languages I use more frequently, one reason partial application needs to be done more explicitly is because not all arguments are necessarily required. Frequently these languages support either maps or "keyword arguments" to solve the problem of allowing some arguments to be optionally specified with defaults.

Is it common in this alternative style of language to have named arguments, such that you could still apply "optional" arguments in a non-positionally defined order by name? I'd love to be pointed to some information on how this "school" of languages handles problems where there is a less well-defined order of partial application.


So Passerine has a record type, and is row-polymorphic. Named keyword arguments in Passerine would look like:

    draw_arc {
        x:      3,
        y:      5,
        radius: 10,
        start:  0,
        end:    2 * 3.142,
    }
As for defaults, this is something a bit more difficult. In a dynamic language, you can just try getting keys and replacing missing ones with the default ones.

However, in a static language, this isn't quite possible. A missing field on a record is a compile-time error, not a runtime one. Even row-polymorphic type systems, which are generally more lenient with respect to records, aren't a solution.

Rust has `Default::default()` which allows one to splice in default values when a struct is constructed. This relies on the `..` splicing syntax; I've been considering something similar for Passerine, but I haven't come to any conclusions yet. I like this because it makes the fact that you're using defaults explicit, but at the same time I'm against it because it feels like unneeded boilerplate:

    draw_arc {
        x: 3,
        y: 5,
        ...Arc::default(),
    }
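
For reference, the Rust pattern I'm describing looks like this (`Arc2D` is a made-up struct just for illustration):

    #[derive(Debug)]
    struct Arc2D {
        x: f64,
        y: f64,
        radius: f64,
        start: f64,
        end: f64,
    }

    impl Default for Arc2D {
        fn default() -> Self {
            Arc2D { x: 0.0, y: 0.0, radius: 1.0, start: 0.0, end: 2.0 * std::f64::consts::PI }
        }
    }

    fn main() {
        // Override a couple of fields; splice in the rest from the default.
        let arc = Arc2D { x: 3.0, y: 5.0, ..Default::default() };
        println!("{:?}", arc);
    }
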
Another option is to just defer field checking to runtime, and allow these errors to be handled. Given that I want to be able to compile Passerine to machine code at some point, I'm not sure whether this is a good idea:

    -- if the expression errs, replace it with default
    syntax expr 'default def {
        match (try expr) {
            Ok v -> v,
            Err _ -> def,
        }
    }

    draw_arc {
        x: 3,
        y: 5,
    }

    draw_arc = arc -> {
        full_arc = { 
            x: (arc.x) default 0,
            y: ... 
        }
    }
But I'm not sure which avenue I should take at this point in time.

> I'd love to be pointed to some information on how this "school" of languages handles problems where there is a less well-defined order of partial application.

I know Ocaml has labels, which are basically named arguments. So, like:

    let f ~banana = banana + 1
means that the type signature is something like:

    val f : banana:int -> int = <fun>
Notice how `banana` is a part of the type signature. There are also optional named parameters:

    let f ?(banana = "ripe") () = ...
Which when used have to be explicitly listed:

    f ~banana:"rotten" ();
That's about it. What are your thoughts?


>2. The use of fibers/coroutines as built-in error boundaries. This is such a pleasant natural extension to the Erlang proposal I find myself wondering why so few of the languages I'm familiar with have taken the same approach. Presumably I need to get out more.

This is done in Lua, although it has a divergence between `pcall()` and `coroutine.wrap()`.


> The use of fibers/coroutines as built-in error boundaries

Isn't this roughly what JavaScript does with Promises?


my issue with Promise-style exception boundaries is the lack of type checking or explicitness, especially since `catch (Exception)` is discouraged in C# specifically (e.g. you probably want to panic on null refs but handle a socket exception).

in my ideal, exceptions could only be thrown/caught at the async/await boundary because only there should you have real side effects, and there should be structure, but at that point I'm reinventing Haskell's monads.


'roughly' ;)


Isaac Clayton @slightknack

High-school student and aspiring Software Architect.

Enjoys Language Design, Graphics Programming, and Machine Learning. Working at @vrtbl on Passerine.


Jesus. What have I been doing with my life?

Edit: Here's another super impressive project by this person https://github.com/slightknack/keikan


People of high-school age today have had access to computers at least 10 times more powerful than what I had at the same age, with a lot more material on the internet and a more mature field as a whole. It would be silly to feel bad comparing yourself to that. He mentions that he's been programming for "a decade" by high school. There were no computer science classes in high school in America when I went 10 years ago. So he seems to have had good exposure from his parents as well.

That said, he does seem to be taking full advantage of his advantages. He's the kind of CS major I wish I had encountered when I was in school. Too many of them just seemed to slip in for the 6 figure salaries on the rise.

Also, it's pretty nostalgic how obviously the wording on his site is positioned as a resume. It reminds me of my career start. Though with just this PL project, his resume is already stronger than most his age.


Yes, they have access to a more powerful computer, but they’re also surrounded by more powerful distractions. This is incredibly impressive no matter how you look at it.


Not to be too cynical. The real challenge is to see if he's able to avoid using all those skills to make people click on ads.


"The best minds of my generation are thinking about how to make people click ads. That sucks."

~ Jeff Hammerbacher, fmr. Manager of Facebook Data Team, founder of Cloudera

(one can argue that indirectly that is what every employee of Google does)


> There were no computer science classes in high school in America when I went 10 years ago.

To be fair, this is a function of the high school you went to. AP Computer Science has existed in some format since the 80s.

"The College Board added computer science to the Advanced Placement Program in the 1983-84 academic year; as the discipline has evolved, the AP Computer Science course and exams have also changed."

https://secure-media.collegeboard.org/apc/ap07_compsci_teach...


I took AP computer science in 1987. It taught me almost everything I needed to know as a foundation to keep coding up to today.


I'm 36 and wrote my first line of code when I was 28. I'm so thoroughly impressed. I too, thought, what am I doing with my life?


I feel proud of what this young person accomplished, but I also really hope they didn’t skip some important life experiences in the late teens and twenties that can’t really be replicated later in life. That’s a special time.


I'm jealous, honestly. When I was in my early teens I wasn't doing anything impressive, but looking back I feel like I had momentum. I had always idolized college/academia, though, so I started at 16.

The next 5 years of my life were the most empty I've had so far, and it still pains me to think about how pointless it all was. My social skills only deteriorated over time, till the last semester when I ended up just ghosting all my classes and let my minor burn. Dropped my GPA from a 3.7 to something like a 2.8, but it felt so good to get a job and get back to doing the stuff I loved. I've gotten better at socialization through work, and now have a life partner and a daughter.

Maybe I'm only proving your point since I'm only in my mid twenties, but sometimes I wish I'd have just kept hacking away and seeing where that road took me. I feel like I might have had potential like this fellow, but left it behind a decade ago.


Oh stop speculating about their personal life and stay on topic.


posting on this web site too much? :)


probably :P


Looks nice. I've been recently looking for an expressive language to make an experiment combining two approaches to "cloud native" programming: something like Ballerina[0] combined with something like Dapr[1].

For example: when I instantiate a queue, I get one from my cloud provider, or my local Redis, or my custom C++ implementation. That way, my code would represent my intent, and the actual building pieces could be switched from environment to environment.

Dapr is already pretty close to this, but it's still a little verbose and low level compared to "normal" local programming. Maybe I'll experiment with Passerine for this, since it looks easy enough to tweak with Rust.

[0] https://ballerina.io/

[1] https://dapr.io/


You might find this “New Directions in Cloud Programming” paper interesting http://cidrdb.org/cidr2021/papers/cidr2021_paper16.pdf


Not to discourage, but I think it's worth writing:

I too was very much in love with conciseness, and spent a lot of my time with CoffeeScript, and had a blast.

That said, if you either

- Work with one or more other people

- Want to come back to code you wrote some time ago

conciseness is not your friend.

Second, a new language is really, really, really a lot of work (I know, I spent a year building one). It'd be awesome if more folks invested their energy into improved tooling for existing languages. I desperately want to be able to select a block of Rust, run a VS Code command, and have it extracted into a new function. I want type-based completion, so that types are filled in for me on conflicts. Better errors and hints for resolutions. And a better playground for Rust would be nice too. Basically most of the things I had in Golem[0].

That said, building a new language is much more fun than trying to wrestle with VS Code dev process. So enjoy the ride! :)

[0] https://www.youtube.com/watch?v=HnZipJOan54


I understand the issue with conciseness and readability. I think comments are underappreciated when writing code: not only can you explain what some code does, but how it does it, and why it does it that way. This doesn't have to be documentation, per-se, but it's quite nice when it is.

> Second, new language is really, really, really a lot of work (I know, I spent a year building one)

I know what you mean ;P. I'd say writing a programming language compiler/interpreter is probably the easiest thing about making a new programming language. Tooling, adoption, libraries, etc. are the other 99%.

> It'd be awesome if more folks invested their energy into improved tooling for existing languages.

I agree, but I have two rebuttals for your following point. I'll start with the more logical one:

1. As a hobby project, learning the internals of how a compilation pipeline works is much more fulfilling and generally applicable than wrestling with whatever API a VSCode developer decides to choose.

2. And here's the more irrational one: If Guido just decided to work on tooling for C, we wouldn't have Python. I'm not saying I'm the next Guido (not even close, haha), but there are some language features that I wish were more mainstream, and what's a better way to show that certain features are a good fit for a language than to make a language with those features? To make an omelette, you have to break a few eggs, I guess.

(P.S. Golem looks pretty neat, congrats!)


Comments should explain the why, not the what or the how. Obviously there are exceptions, almost all of them around the really algorithmic parts of the code, something I have to write less than once per year (backend software is not that clever).

A good choice of function and variable names is usually enough to explain what code does. If it's not enough probably those functions are doing the wrong thing, maybe more than one at once, and must be redesigned.


> Comments should explain the why, not the what or the how. Obviously there are exceptions, almost all of them around the really algorithmic parts of the code, something I have to write less than once per year (backend software is not that clever).

Although I agree that choosing clear variable and function names is usually enough to explain what code does, I'm just not entirely convinced that concise code becomes harder to read with multiple people or after coming back after some time. I feel that communication and documentation is generally the best way to get someone up to speed.

By concision, I mean something closer to terseness or brevity: exactly enough to understand what's going on, without anything superfluous. Here's a classic concise Python one-liner:

    def flatten_list(t):
        return [item for sublist in t for item in sublist]
This flattens a list of lists into a list. But if you're actually coming back to the code either having never seen this comprehension or forgetting how it works, a little comment never hurts:

    # iterates through each sublist to flatten a list of lists into a single list
Some may say this is redundant, but it's useful in more complex circumstances too. If you're working on a machine learning pipeline, say, it can be a quick way to block out some code.

    # normalize all the input data
    # generate embeddings
    # train until error doesn't improve for 100 epochs
    # run the net on our test set
(This is my third most favorite programming trick: pretending that something exists, then building it out*)

These examples are besides the point though. Terse code doesn't have to be illegible or hard to comprehend; it's all about finding that balance: no less, no more.

* My second most favorite programming trick is building a small set of tools and using those to redefine the problem.

> If it's not enough probably those functions are doing the wrong thing, maybe more than one at once, and must be redesigned.

Often I'll come across old code, think, "this can be written much more efficiently if I do it like this" and then half way through refactoring remember that I did it the first way because I already thought through the problem and realized that the first way was better and but wait maybe if I do this then that and so on...

It's easier to leave a quick comment just explaining why code is the way it is, which is all I was suggesting.


Okay cool. Let’s just make better tooling for existing languages.

...

Turns out that languages that are widely-used are difficult to make tools for because of all the pragmatic (real or imagined) tradeoffs that were made.

How can I prototype my tooling when it takes so, so, so much work to make the tooling work for a practical language?

I’ll just make a mini-language to prototype that tooling and then get back to the widely-used languages once my prototype has taught me enough.

...


I guess this won’t get much use in Italy, where the name is a classic slang word for vaginas...


Well, the D and Coq communities might well be willing to give it a close look.


Like Gavin Belson's signature Box 3?


Some questions about the FAQ's "What is vaporization memory management?" section:

> All functions params are passed by value via a copy-on-write reference.

What's a "copy-on-write reference"? Does this mean that if write a function that takes a million-element list, and inside the function I mutate the last element, the entire list will be copied (inside the function, but invisibly to the outside)? If yes, can I write a function that mutates a list in place so that the caller can see the effect? Can I even mutate lists or anything else? I don't see any clear information about whether Passerine has mutable data at all.

> A form of SSA is performed, where the last usage of any value is not a copy of that value.

Does this mean that any non-last usage is a copy? Or is it "only" a copy-on-write? Can you always tell statically where a usage is a last usage?

> All closure references are immutable copies of a value. These copies may be reference-counted in an acyclical manner.

Again, are these eager copies or lazy copy-on-writes? How are non-closure references garbage collected?

I would be interested in a much more detailed write-up of this memory management technique.


I'm working on a blog post on the subject, but this is something that needs more justification than an HN comment. To answer your questions:

> What's a "copy-on-write reference"?

A copy-on-write (CoW) reference is a pointer to some immutable data that, when written to, is copied to a new memory location first.
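
This isn't how Passerine implements it, but the same clone-on-write idea exists in Rust's standard library as `Rc::make_mut`, if you want something concrete to poke at:

    use std::rc::Rc;

    fn main() {
        let big = Rc::new(vec![1, 2, 3]);
        let mut alias = Rc::clone(&big); // a second reference to the same data

        // The data is shared, so make_mut clones it first and then hands out
        // &mut to the (now unique) copy; `big` still sees the original.
        Rc::make_mut(&mut alias).push(4);

        assert_eq!(*big, vec![1, 2, 3]);
        assert_eq!(*alias, vec![1, 2, 3, 4]);
    }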

> Does this mean that if write a function that takes a million-element list, and inside the function I mutate the last element, the entire list will be copied (inside the function, but invisibly to the outside)?

Only if you use the original list later:

    big_list = [ ... ]
    new_big_list = mutate_last big_list
    print big_list
This would copy `big_list` because we use it in `print` later. If you do not use big_list later:

     big_list = [ ... ]
     new_big_list = mutate_last big_list
Or reassign the variable:

     big_list = [ ... ]
     big_list = mutate_last big_list

the original list is not copied, because this is the last usage of the value, and last usages pass the actual value rather than a CoW reference, allowing the program to mutate it directly. This is what Functional but in Place (FbiP) means: you can write code in a functional style, but when executed it will mutate data in place.

> Can I even mutate lists or anything else? I don't see any clear information about whether Passerine has mutable data at all.

So, this is less about Vaporization and more about Passerine. Passerine does not have mutable data, only mutable references to data*. So the reference can change (or like in a record, the reference in a field can change), but not the data itself. Because of FbiP, however, mutations are optimized to an actual mutation rather than a copy and an overwrite.

* This is needed if one wishes to extend HM type inference to support mutation, IIRC.

> Does this mean that any non-last usage is a copy?

Any non-last usage is a potential copy. If the function you pass it to does not mutate the data, no copy will be made. Last references also include reassignments, so something like:

    big_list = [ ... ]
    for x in 0..100 {
        big_list = big_list.append x
    }
Does not make 100 copies of `big_list`.

> Again, are these eager copies or lazy copy-on-writes?

These are eager copies, and immutability is enforced by the language itself. This tradeoff has to be made, because mutable references allow for the construction of cycles. These references are reference counted if a copy of the closure is made or another closure closes over the same value in the same scope. It's safe to do this because these references are immutable, and when all closures referencing them go out of scope, they will be dropped.

> How are non-closure references garbage collected?

Non-closure references are 'garbage collected' when they go out of scope. Vaporization basically ties everything to the stack, and prevents values stored on the heap from becoming dissociated with it.

> I would be interested in a much more detailed write-up of this memory management technique.

Thanks for the interest! There's still a bit more formal verification to do, which is why I left this broad claim to the FAQ - we're rewriting the website, so the mention of it there is a bit outdated.

Given that it was just asked here, it looks like I need to extend the FAQ some more, given how frequent of a question it is. ;P

Have a nice day!


Thank you for the detailed answer. I'm looking forward to that blog post. One more question for now (a variant of something I asked before): How do you know a use is a last use? For example:

    foo = (list1 list2) -> {
        mutate_last list1
    }
The use of list1 in the mutate_last call is the only, and hence the last, use of list1. So mutate_last can mutate it in place. Right?

Except if I call this function like:

    big_list = [ ... ]
    foo big_list big_list
    print big_list
Then mutating in place will be incorrect. So how do you tell inside foo whether the call is the last use of the object referenced by list1? Heroic interprocedural pointer analysis? A uniqueness type system? Something else?


Thanks for discussing this with me and helping me work through it. To answer your question:

Let's annotate the variable lifetimes. `'` indicates last use, `_n` indicates that a variable holds the same value. I'll only be annotating the values in question:

    foo = (list1_0 list2_0') -> {
        mutate_last list1_0'
    }

    big_list_0 = [ ... ]
    foo big_list_0 big_list_0
    print big_list_0'
Let's focus on this line. Because neither of these is the last reference to `big_list`, they're passed as CoW references. I'll use `&` to denote this:

    foo &big_list_0 &big_list_0
In `foo`, `list1_0 = &big_list_0` and `list2_0' = &big_list_0`. `list2_0` is never used, so this reference is immediately dropped (not the value itself, rather the reference to that value). Let's now look at the body of foo:

    mutate_last list1_0'
Because list1_0 is a final reference, it's passed the value of the CoW reference, which is, well, the CoW reference. mutate_last mutates this, and because the reference is copy-on-write, a copy of `&big_list_0` is returned. We'll call this `big_list_1`.

So a CoW reference to a value is a CoW<Value>. A CoW reference to a CoW<Value> stays the same; it's still a CoW<Value>.

> So how do you tell inside foo whether the call is the last use of the object referenced by list1? Heroic interprocedural pointer analysis? A uniqueness type system? Something else?

Something else. Vaporization is not all compile-time magic, it depends on the runtime as well. You can basically sum it up with two operations:

    reference: Value or CoW<Value>            -> CoW<Value>
    mutate:    Value or CoW<Value> + mutation -> Value
So when `mutate_last` mutates the list, depending on whether `list1` is a value or a CoW reference, either a copy will be made or it'll be mutated in place. I'm not claiming that Vaporization is the best approach to compile-time memory management; rather, it's more of a set of compile-time + runtime strategies that allow the interpretation of programs without garbage collection (in the traditional sense).

So, the question becomes: how do we indicate whether something is a value or a CoW reference? Something along the lines of a tagged pointer will do. Passerine already uses NaN-tagging, so this fits right in.

What about compiling to low-level targets, like LLVM IR, which don't really have a 'runtime' and rather specify the manual layout of structs and references?

I'm not quite sure, but here's one possibility. All values that can escape a local scope exist in some region of memory allocated for local variables. When a function is called, we can pass a bitfield indicating which of the local variables on the stack are CoW references, and which are not. When the runtime goes to mutate a reference, it quickly checks the bitfield (which can be stored in a register or in the call frame) to see whether it needs to make a copy before mutating. In practice, the check is just a quick bit-shift and `&`.

Note that we can go a bit further: the bit-fields of all calls are known at compile time, so instead of copying and passing the bitfield itself, we can pass the index of the bitfield (which is a constant) to the function to use. In fact, I imagine you could somewhat 'monomorphize' this, so to speak, and generate different functions for different bitfields... but I'd have to think some more about this.
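
To make the "bit-shift and `&`" concrete, the check would be something like this (purely hypothetical, since none of this is implemented yet):

    // `cow_bits` marks which locals in the current call frame were passed as
    // CoW references; `slot` is the local's index within that frame.
    fn needs_copy_before_mutation(cow_bits: u64, slot: u32) -> bool {
        (cow_bits >> slot) & 1 == 1
    }

    fn main() {
        let cow_bits = 0b0101; // locals 0 and 2 came in as CoW references
        assert!(needs_copy_before_mutation(cow_bits, 0));
        assert!(!needs_copy_before_mutation(cow_bits, 1));
    }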

Anyway, that's about it. I hope I've helped clear some things up. If you notice anything else slightly off, or if you'd like to discuss this further, please let me know :)


Sorry, one more question about mutation and CoW. It would be good to know how mutation is written, since without that I can't write out the example I have in mind!

Imagine I want to write a function

    append xs y
that appends element y to the linked list xs using "mutation". It recurses down the linked list and at the very end it mutates the last cons cell to have [y] rather than [] as its tail.

Imagine this is called with a CoW reference for xs. I guess as you recurse down the list, each tail pointer must also be treated as CoW, since there will be copying mutation going on. So you get to the end, "mutate" the last element by copying it... but now all the other CoW references you encountered along the way must be copied as well. Do I understand this correctly? How is this handled? Do the CoW references have backlinks, or do you unwind some sort of stack (defeating any possibility for "real" tail recursion)?


Right, so there would have to be built-in functions that mutate and return an object. So, for instance, one might be:

    List::replace list index value -> list
`List::replace` would just write to the `list` it is given. If `list` is a CoW reference, it would make a copy first before mutation (hence copy-on-write); if it was not, it would just mutate it in place.

To mutate `xs` with append as you suggested, you'd have to reassign `xs` with the result of append. Because this would make the `xs` in `append xs y` the last occurrence of the value, `xs` would be mutated in place:

    xs = append xs y
If the function were of the form `append y xs`, you could use dot-notation like so:

    xs = xs.append y
and if you really hate writing out `xs =`, you could always make a macro:

    syntax 'mut value function {
        value = function value
    }

    mut xs (append y)
I've got to go, but I'll get back to you on tail recursion.


Under the hood I'm going to use Rust `Vec<T>`s to represent lists, so it'd be trivial to provide a FFI operation to append something to the end in constant time. However, we can treat the list like a linked list, and, using recursion, here's a tail-call-optimized implementation of append in linear time:

    append = list value -> {
        loop = acc rem -> match rem {
            [] -> acc,
            [head & tail] -> loop [head & acc] tail
        }
        loop value (List::reverse list)
    }
I'm thinking of using something other than the `[a & b]` syntax, but that's besides the point. Note that `loop` does not close over any values other than itself, so it doesn't need to be inside `append`:

    loop = acc rem -> ...
    append = list value -> ...
The function really starts when we reach this line:

    loop value' (List::reverse list')

I added `'` to indicate that this is the last occurrence of both `list` and `value` in this scope, which means that they're passed to `loop` without making a CoW reference. Let's look at `loop`:

    loop = acc rem -> match rem {
        [] -> acc,
        [head & tail] -> loop [head & acc] tail
    }
This is where things get complicated, depending on the definition of match. If it were defined as a macro, it'd be laborious to write out how it all works by hand. But if we treat `loop` as a function with multiple heads, it's a bit easier.

Here, we have two arms. If the remaining list is empty, we return `acc`. note that each arm is independent, so this usage of `acc` is the last in the scope.

If the list is not empty, we match `rem` against `[head & tail]`. This is equivalent to, roughly:

    [head & tail] = rem'
Note how this is the last usage of rem in this scope, meaning it's passed by actual value. This list is then split in-place, with head taking on the value of the first element of the list and tail taking on the value of the rest. Once passed to loop, note that all the values used are the last occurrence of the value in the scope:

    loop [head' & acc'] tail'
This means that the actual value is passed down to the next level. Looking at this as a whole, we can see that nothing is ever passed by CoW reference, which means that this operation is performed in-place. And because it's done in a tail-recursive manner, `append` only uses constant space.

So to answer your question specifically: what if append is called with `xs` being a CoW reference?

    append &xs y
When we reach the line:

    [head & tail] = &rem'
`rem` is mutated, i.e. split in two, so a copy of it is made. This copy is its own value, so head and tail are no longer CoW. We then call:

    loop [head & acc] tail
Note that head and tail are now passed by value and not CoW - same goes for acc. Because nothing is CoW anymore, this tail-recursive implementation operates on the new list, mutating it in place.

In short: the first iteration in `append` makes a copy of the list, but in subsequent iterations the copy is used, and the list is mutated in place.

That was a big one, I'm practically writing the blog post out here, haha! Does this help clear things up?


Thank you again for explaining things.

> When we reach the line: [head & tail] = &rem' `rem` is mutated, i.e. split in two, so a copy of it is made.

Huh. That surprises me, that you consider destructuring a form of mutation. But if you do, then I see how this works out because it forces a copy.

However, this raises more doubts. If I write something like

    first_two = xs -> match xs {
        [] -> [],
        [head & tail] -> [head & first tail]
    }

    first = xs -> match xs {
        [] -> [],
        [head & _] -> [head]
    }
and call first_two with a CoW reference to a very long list, will it copy the entire list on the first destructuring, even though we really only want to copy two elements? For that matter, will a list search function, that is only supposed to read the list, called with a CoW reference actually copy the list?

If the answer is "well, lists are special", I guess you could consider the same questions on trees. In a functional language, insertion into a binary tree only copies nodes along one path in the tree. Would your language copy the entire tree instead, at potentially exponentially higher cost?

(I'm also slightly confused by your definition of append -- the accumulator seems to be a list, but then the initial call of loop would need to use [value] instead of value, no?)

Regarding "tying references to the stack" I had another question here: https://news.ycombinator.com/item?id=26228961


Thanks for your excellent questions. They're really helping me fuzz through my approach - I wish I had people I could do this with more often, haha. :P

Anyway, something I realized after I posted that question is that assignment doesn't have to force a copy; for instance, this could just take a CoW slice reference to the original list:

    [head & tail] = &rem'
The only requirement is that the value returned from a function is an owned value, not CoW. To avoid talking myself into circles, I'll show that `append` still works under this rule. It just moves the copy to the point where we construct the new list, rather than at the point where we destructure it:

    loop [head' & acc'] tail'
So in `[head' & acc']`.

For the code sample you provided, a reference would be passed down, rather than a copy of the entire list, so just the first two elements would be copied. Do you have any further questions regarding this? I'd like to be able to prove the soundness of this system, but I'm not sure going back and forth on example after example is the best strategy in the long run.

> In a functional language, insertion into a binary tree only copies nodes along one path in the tree. Would your language copy the entire tree instead, at potentially exponentially higher cost?

Most functional languages use persistent data types, because all data in those languages is immutable. I haven't thought about how they'd fit into Passerine, but if you mutate data that will be used later, a copy of that data will be made. So yes, mutating a tree makes a copy of it if the original tree is used later.

This is less of a problem than you'd think. It's common to write programs as a series of transformations on data. Note that if you do something like this on a big million-element list:

    children
        .map { child -> (child.name, child.age, child.hobby) }
        .filter { (_, age, _) -> age > 10 }
        .map { (name, _, hobby) -> (first_name name, hobby) }
        .filter { (_, hobby) -> is_fun hobby }
        .fold "" { (string, (name, hobby)) -> string + name + " enjoys " + hobby + "\n" }
        .print

No copies will be made, because each step is the last usage of the data-structure produced in the previous step.
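
The shape is a lot like a Rust iterator chain; Rust gets the no-copies property from lazy iterators and moves rather than from last-use analysis, but the style of code is the same (`Child` here is just an illustrative type):

    struct Child {
        name: String,
        age: u32,
        hobby: String,
    }

    fn main() {
        let children = vec![
            Child { name: "Ada Lovelace".into(), age: 12, hobby: "maths".into() },
            Child { name: "Tim Tiny".into(), age: 8, hobby: "naps".into() },
        ];

        let summary: String = children
            .into_iter()                  // consume the list; nothing is copied
            .filter(|c| c.age > 10)
            .map(|c| {
                let first = c.name.split(' ').next().unwrap_or("").to_string();
                format!("{} enjoys {}\n", first, c.hobby)
            })
            .collect();

        print!("{summary}");
    }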

> (I'm also slightly confused by your definition of append -- the accumulator seems to be a list, but then the initial call of loop would need to use [value] instead of value, no?)

Once again, correct.

> Regarding "tying references to the stack" I had another question here: https://news.ycombinator.com/item?id=26228961

I wrote a long response to this, but it ended up being a couple thousand words, so I'm trying to trim it down. (I went off the deep end, with stack diagrams, explaining how cycles get created, etc, etc.) The TL;DR for now is that you can't have a list of references; each object is only mutably 'owned' by one other. Once I trim down the response, I'll post it - should be sometime later today.


Thank you again for your answers. I must admit that at this point I'm thoroughly confused by references, CoW, Passerine being pass-by-value, ownership (new information today!), and fairly advanced rules for when something needs to be copied. It probably makes sense to leave you to write that blog post where you can give a consistent view of the whole rather than iterating on small examples. I'm intrigued, and I'm looking forward to reading more, though I'm also getting a nagging feeling that there's a lot of ceremony here just to avoid garbage collection...


Thank you for your patience in explaining all this. Runtime tagging seems to make a lot of sense for this approach! Is this based on any previous work? It's been about ten years since I was last up to date with compile-time garbage collection techniques.

> What about compiling to low-level targets, like LLVM IR, which don't really have a 'runtime' and rather specify the manual layout of structs and references?

If you want to support dynamic memory allocation/deallocation, you will have to provide a runtime anyway. Even if it's "just" LLVM IR or C code or whatever for a simple allocator.

I think I'll have at least one more question about how the "tying of references to the stack" works, but I need to think things through a bit more.


It's loosely based on Proust's ASAP, but without the explicit quadratic-time construction of paths (in a sense, this is done 'dynamically' at runtime with the CoW strategy we've discussed). When you think about it, Vaporization is really just an extension of escape analysis. It's not a perfect system, though; namely, it requires immutable closures. But if you're coming from a functional programming language, that's a small sacrifice to make, given that mutability is generally frowned upon.

> I think I'll have at least one more question about how the "tying of references to the stack" works, but I need to think things through a bit more.

I tried to answer your sibling comment on appending a CoW xs. I hope that clears some things up so you can ask any questions you have!


> Vaporization basically ties everything to the stack, and prevents values stored on the heap from becoming dissociated with it.

So if some code somewhere hands me a list that contains references to some heap data, how do you track this? For example, say at some point in the program I have a reference called "refs" that happens to be bound to a list of references to some objects a, b, and c. Then I do:

    refs = drop_first refs
so that refs is now [b, c]. We lost/dropped/gave up a. How do we know how a was tied to the stack? How many other references might point at the object previously referenced by a? Can we collect that object now? Is this the case where reference counting kicks in? In which case, the category of "closure references" is probably more general than I thought.


Because Passerine is pass-by-value, it only allows each reference to be owned by one object.

> So if some code somewhere hands me a list that contains references to some heap data, how do you track this?

Passerine is a high-level language, so it does not give the programmer control over whether something is a list of references to data on the heap or a list of values. Assuming `a`, `b`, and `c` have been defined, if I write this:

    list = [a, b, c]
`a`, `b`, and `c` will be moved into the list upon construction. If I do something like:

    list = [a, b, c]
    print (foo a c)
`b` will be moved into the list, and copies of `a` and `c` will be moved into the list.

So, to now answer your question:

    list = [a, b, c]
    list = drop_first list
> We lost/dropped/gave up a. How do we know how a was tied to the stack?

`a` was tied to the stack because it's in `[a, b, c]`, which is tied to the stack via `list`.

> How many other references might point at the object previously referenced by a? Can we collect that object now?

By definition, none - `a` is a unique value. Now that we dropped `a`, there are guaranteed to be no other references of `a`, so we can collect that object now.

> Is this the case where reference counting kicks in?

No, the reference counting is a small optimization for sharing immutable values in closures. If a closure captures a value, say a large list, we don't want to go around copying this willy-nilly:

    big_list = [...]
    c = () -> {
        -- big list is copied when the closure is made
        -- this happens because the reference in the closure can outlive local stack frame
        a -> first big_list
    }

    -- these calls to c do not copy big_list again
    a_outer = c ()
    a_another = c ()
Closures need to copy values immutably to prevent the construction of cycles. I wrote a long explanation of why this is the case, but I've decided to move it to the blog post when I publish it. I thought of a way to efficiently count references acyclically, but I need to work it out before I make any more claims.
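
In Rust terms, the cycle problem is the classic `Rc<RefCell<...>>` loop, which plain reference counting can never free; forbidding mutation of captured values is what rules that shape out:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });

        // Mutating shared data is what lets us close the loop: a -> b -> a.
        *a.next.borrow_mut() = Some(Rc::clone(&b));

        // Each node now keeps the other alive; the counts never reach zero,
        // so plain reference counting will never free this pair.
        assert_eq!(Rc::strong_count(&a), 2);
        assert_eq!(Rc::strong_count(&b), 2);
    }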

Hope this helps!


Something I've been wondering is whether a Rust-embedded language has a place for making the "glue API server"-writing experience better. Rocket is an amazing library, but Rust's strictness - much as I love it - still slows down the iteration phase quite a bit. Imagine being able to hack out an API, twiddling JSON and reshaping data, in something like Passerine, and then stepping the different pieces down to Rust code as they solidify.


I'm currently thinking of ways to make macros that serialize Rust structs to Passerine values (and vice-versa) a la serde. I think being able to hack out an API in a high-level language and then tossing it down to something like Rust for the low-level perf gains is a great idea.

Furthermore, I'd like for this to be possible:

    - project
    |- Aspen.toml
    |- src
    ||- ... passerine code
    |- ffi
    ||- ... rust code
Where Passerine could just wrap and call the Rust code automagically. This is still a ways off though...


Sounds awesome! You've done great work already, I can't wait to see where it goes from here :)


You can hack out an API in Open API Spec (I recommend Stoplight as a GUI for that) and then stand it up as a mock server for initial UI dev.

If the language has a code generator written for it, you can then generate method stubs and types for your server. If not, you can still use your spec as a validation document that you feed into a library before responding to a request.


That's not really what I mean... I mean that throwing together an API, even one with a bit of business logic, is still nontrivially easier in Flask than it is in Rocket (despite Rocket being the gold-standard for this in the Rust world). If we really want the Rust ecosystem to compete with, say, Go when it comes to regular web APIs, I think we're going to need more than just a framework. An embedded scripting language could be the ticket


Some really interesting design choices here: Haskell/ML style FP base, fibers as a concurrency primitive, both exceptions and ADTs.

More than those, I LOVE the macro system. Such a clean syntax!


Thanks! <3


Sorry for the OT comment, but this language name means “small female genitalia” in some parts of Italy. I guess just FYI in case you need to localize. :)


I really like how this has zero dependencies [0]

[0] https://github.com/vrtbl/passerine/blob/master/Cargo.toml


I've found that most small crates can be replicated in an afternoon. It's much more fun (and performant!) if you do it yourself. :)


From a glance it seems to be mostly Haskell-ish but with Scheme style macros.


Hmm, at a somewhat longer glance, I would say it’s very Scheme-ish in its semantics but only superficially like Haskell (and really more like ML generically). Specifically, despite algebraic type definitions, Passerine is dynamically-typed with an emphasis on structural/“duck” typing. The author plans to add a Hindley-Milner type checker but the language does not seem to be “type-centric” in the way I think of Haskell - types will be added for verification, but not as a near-first-class programming construct. I get the impression it might be more like mypy and that runtime type errors will still be a thing.

From the perspective of a very young functional programming language, it is very tricky to balance a powerful type system with powerful macros and I suspect that macros are the priority. The developer also cites the Erlang VM as a major inspiration in terms of a dynamic “expect things to fail and handle it” approach.

Still looks very interesting! I love Scheme (despite having become a static typing curmudgeon) and will be checking out this project.


Any type system with dependent types (`where` in Passerine) is undecidable, meaning error reporting for these types has to be moved to runtime.

There's a quote from someone somewhere, and it goes something like this:

> Ocaml is still trying to develop a macro system that Racket folks won't laugh at... and Racket is still trying to develop a type system that doesn't trip Ocaml fans into a tizzy.

Given that Passerine takes cues from both ML and Scheme, I've kinda given myself the Sisyphean task of bridging this divide. I'm still trying to find the right balance - I did macros first, so I can say I know more about how they work in Passerine - but I hope with a complementary static type system Passerine will become that much more useful.


> Any type system with dependent types (`where` in Passerine) is undecidable, meaning error reporting for these types has to be moved to runtime.

Maybe I am misunderstanding you, but this is very confusing. Most dependently-typed languages do in fact check types at compile time: the undecidability problem usually implies a compromise that “correct” programs will be incorrectly rejected (the compiler gives up earlier than it needs to) instead of letting the type-checker get stuck in an infinite loop. But it does not mean type-checking is simply moved to runtime! I am sincerely confused where you got this idea, which makes me wonder if I misunderstood you.


Can you elaborate on the undecidability bit? The task that isn't decidable is, what, "Given a program, determine whether the types are respected in all assignments" ?

I'm pretty sure that the calculus of constructions has dependent types, but wikipedia says that it is strongly normalizing, and I'm pretty sure that determining whether a program is valid in CoC is decidable? (Right? I could be wrong..)


> I'm pretty sure that the calculus of constructions has dependent types, but wikipedia says that it is strongly normalizing, and I'm pretty sure that determining whether a program is valid in CoC is decidable?

If you write a program in Coq where termination is not trivially true, you need to do the work and write a manual proof of termination. Given a program and something you claim to be a termination proof it's decidable whether that program with that proof is terminating and hence valid in CoC. Given an arbitrary program but no termination proof, I don't see how it can be decidable whether a termination proof of that program exists. You can write a Turing machine simulator in CoC (but won't be able to write a general termination proof).


For some reason I thought CoC just didn’t allow programs that lack a termination proof. Oops.

Ok, so the assumption I left out was, “for a turing complete language” or something like that (uh, in a sense in which “takes in a description of a Turing machine, an input to it, and a maximum number of steps, and runs the tm for up to that many steps” doesn’t count as turing complete)


> For some reason I thought CoC just didn’t allow programs that lack a termination proof. Oops.

I'm not sure about CoC-the-abstract-calculus-on-paper, but I guess it cannot allow programs that lack a termination proof. Coq-the-concrete-implementation-of-CoC does allow some. Or not, depending how you look at it: What it will really do is infer a termination proof behind the scenes for certain simple cases, like iterating over a list and the recursive call always being on the tail of the current list. So everything has a termination proof in Coq, it's just that in simple cases you get it for free.
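
As a rough illustration (in Lean 4 rather than Coq, but the behaviour is analogous), this is the kind of "simple case" where the termination proof comes for free, because the recursive call is on the tail of the list:

    -- The recursion is structural (the argument shrinks on every call),
    -- so no explicit termination proof is required.
    def sumList : List Nat → Nat
      | []      => 0
      | x :: xs => x + sumList xs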


I'm no type theory expert, but if you can describe even natural numbers as:

    Even (n: Natural) | n % 2 == 0
Then what's stopping you from doing:

    Undecidable _ | loop {}
I'm not sure, but I think that's because strict dependently typed languages like Pie (similar to CoC) use induction over the structure of a type for decidability. If I had 'The Little Typer' on hand, I'd elaborate more on this point. I think there's some relation between undecidability and the type Absurd, but I'd have to look into it more.


Just FYI, most dependent type systems don’t typecheck when a parameter to a type constructor is a function, since they can’t “deduce” very much about a function except its signature. They can take proof objects (including objects involving evaluated functions, like ‘n % 2 = 0’) but not functions by themselves (like ‘loop {}’).

This isn’t a technical problem with unification, the issue is a fundamental conflict between “intensional” versus “extensional” equality. Wikipedia has a good example that communicates the underlying confusion: https://en.wikipedia.org/wiki/Extensionality

AFAIK all dependently-typed general-purpose languages use intensional equality since it’s consistent, but there are proof-checkers that use extensional dependent type theories (inconsistency is less of a problem than you might think, and actually makes some higher-order topology and set theory much easier to program).
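
A hedged sketch in Lean 4 of the "proof objects, not bare functions" point: the evenness evidence is an ordinary compile-time term, so nothing about it is deferred to runtime (`Even` and `four` are just illustrative names):

    -- A natural number packaged with a proof object that it is even.
    abbrev Even := { n : Nat // n % 2 = 0 }

    -- `by decide` produces the proof term at compile time.
    def four : Even := ⟨4, by decide⟩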


Oh, I think I just confused refinement types with dependent types. The above still stands though.


Is there an easy way to import Rust data types into Passerine (maybe via some derive macro)? I understand the author is exploring the PL design space, but for an embeddable language a good interop story is what sets them apart.


This is actually something I've been thinking about a lot! I'm not sure about the time frame, but ideally `passerine` (the crate) would provide a derive macro (similar to serde) that could serialize Rust structs to passerine, and vice versa. Given that Passerine has an algebraic data type model similar to Rust, I think they would overlap quite nicely.
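
As a rough sketch of the pattern I have in mind, here's the existing serde/serde_json version of that round trip; serde_json stands in for the not-yet-existing Passerine value type, and `Point` is just an illustrative struct:

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    struct Point {
        x: f64,
        y: f64,
    }

    fn main() -> Result<(), serde_json::Error> {
        let p = Point { x: 1.0, y: 2.0 };
        let encoded = serde_json::to_string(&p)?;              // Rust -> dynamic value
        let decoded: Point = serde_json::from_str(&encoded)?;  // dynamic value -> Rust
        println!("{} -> {:?}", encoded, decoded);
        Ok(())
    }

The Passerine bridge would ideally look much the same, just producing and consuming Passerine values instead of JSON strings.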


Looks really cool! Since I'm just starting with Haskell (as a "hobby"), it looked pretty familiar. (I have no previous experience with ML or Scheme, which are also frequently mentioned as influences.)

At the risk of appearing too lazy to first read everything about it before asking this question (maybe the appearance is true ;):

what are, in a nutshell, the advantages over Haskell and the main differences in terms of "not re-inventing the wheel"?


I think the README addresses this question well, if you're familiar with Haskell and know what to look for. The main differences are the macro system and the fiber system, for metaprogramming and concurrency respectively; Passerine aims to keep functional programming fun, so you don't need to worry about Monads or Lenses or Contraroutines* (jk) before writing a simple program.

* This is an inside joke.


This memory management looks pretty shiny, but is there any way to get down to manual allocation as well?


I ran your script and these are the errors it had:

    Would you like to proceed with the installation? (y/N)
    /dev/fd/63: 48: read: Illegal option -n
    /dev/fd/63: 50: /dev/fd/63: [[: not found

I'm new to programming. In any case it did install completely. Thanks for the new language. Looks real cool.


What operating system do you have? It seems like your `read` command does not support the `-n` argument - but that's not the point, I really need to work on removing installation friction.

If you have git and cargo installed (needed anyway), you can try:

    # project structure
    cd ~
    mkdir .aspen
    cd .aspen
    mkdir bin
    # download
    git clone https://github.com/vrtbl/passerine
    git clone https://github.com/vrtbl/aspen
    # build
    cd aspen
    cargo build --release
    mv target/release/aspen ../bin   # the bin/ directory is one level up, in ~/.aspen
    # source
    echo 'export PATH="$HOME/.aspen/bin:$PATH"' >> ~/.profile
    
Restart your shell, and it should work. Type `aspen` and hit enter and see if you get a help message with version 0.4.0. If you have any more issues, reach out on the Discord server so we can reach a more permanent solution (if you haven't already).


[[ is a Bash feature. The install command uses POSIX sh. Maybe that's the problem. I use Debian and Bash. Thanks for the reply.


Ah, it seems like my #!/bin/sh suspiciously supports bash features... I'll have to update the install script. Thanks for pointing that out!


If curious see also

Show HN: Passerine – an extensible programming language – v0.8.0 released - https://news.ycombinator.com/item?id=25347140 - Dec 2020 (8 comments)


Looks very interesting.

I would love to see an online repl/playground for this, similar to the TS playground (https://www.typescriptlang.org/play/).


The core language has no dependencies, and can compile to wasm, so it's on the docket.
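
Roughly, the plan would be something like this (a sketch using wasm-bindgen; `eval_source` is a placeholder, not a real Passerine API yet):

    use wasm_bindgen::prelude::*;

    // Entry point the playground page would call with the editor's contents.
    #[wasm_bindgen]
    pub fn eval_source(source: &str) -> String {
        // ...run the compiler/VM here and return its output...
        format!("echo: {}", source)
    }

Build that with something like `wasm-pack build --target web` and wire it up to a text area on the site.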


The github readme seems to mention it's "structurally typed", but I'm still not sure whether the types are checked statically or dynamically? Could you please clarify this on the website & readme?


That's because Passerine is in a bit of a transition zone. It's dynamically typed right now, but it will be statically typed in the future (the next release actually, fingers crossed). See the FAQ section of the README for more info. Once the type system's in place, I'll definitely update the website/README to match.


Never understood why languages like this don't just adopt full Lisp syntax. I'd have been much more interested.


My goal was to replicate lisp without the redundant parentheses. There are a few conventions common in lisp: expressions usually occur on a new line, blocks begin with, well, `begin`, there is spacing between definitions, macros operate on forms, etc. I thought, why not put those conventions to work in the notation used to represent the language?

A chain of symbols actually creates a `form` like a lisp list. So `a b c + c d e` is the same as `(a b c) + (c d e)`, which in lisp would be `(+ (a b c) (c d e))`. Additionally, newlines can be used to separate forms. So you can imagine that each expression on a new line is grouped in parentheses - except in the case of operators, which can be split for legibility.

Some more: If you do group an expression with parens explicitly, you can split it across multiple lines like in lisp:

    hello (this is) a form
    (hello (this is) a form)
    (hello
       (this is)
        a form)
I think that with enough work, one could use Passerine's macro system to turn the language into a lisp. Heck, you can already go the other way and turn it into this:

    syntax function name(args) body { name = args -> body }
    
    function thingo(x, y) {
        x + y * 2.7
    }
 
Notation is a powerful tool. Lisp is one too. It's hard to sneak a lisp past the eyes of a general population, but I think that the core of lisp - code as data, 'the Maxwell's Equations of Software' - needn't be forever tied to s-expressions.


You could just read

    linear = m b x -> b + m * x
as

    (linear = m b x -> b + m * x)
and assume this to be an expression given to some kind of macro ;)


Because lisp syntax is not something everyone appreciates. See, for example, Dylan.


Dylan is less appreciated than Lisp.


Well, I think it's really cool. Gives me a Haskell vibe, while being far more approachable.


might try this out just because I like birds



