Hacker News
Ante: A low-level functional language (antelang.org)
446 points by cheesestain on June 17, 2022 | 217 comments



Hello, author here! As the website says, the compiler itself is still in a very early state where basic things like functions, types, traits, inference, monomorphisation, and codegen are implemented. The fun stuff of algebraic effects, lifetime inference, and refinement types is not, however. Though I can elaborate on implementation strategies for these for anyone curious. For example, algebraic effects in existing languages can be quite slow at runtime due to the use of continuations, the translation to a monad stack, and/or dynamically finding handlers at runtime. It is my plan to monomorphise them away and inline all these cases as in the following paper (1). With this, handlers are inlined into functions, continuations are normal closures, and continuations that aren't called multiple times are inlined.

(1) Zero-cost Effect Handlers by Staging: http://ps.informatik.uni-tuebingen.de/publications/schuster1...
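
A rough before/after of that idea, as a hedged Haskell sketch (a hypothetical illustration only, not the paper's or Ante's actual lowering):

    -- Handler-passing style: `logOp` is the effect operation, and each
    -- `\() -> ...` closure is the continuation of the rest of the program.
    logTwice :: (String -> (() -> r) -> r) -> (Int -> r) -> r
    logTwice logOp k =
      logOp "first" (\() ->
        logOp "second" (\() ->
          k 42))

    -- Specialised against a handler that discards the message and resumes
    -- immediately, the handler calls and continuations inline away entirely:
    logTwiceIgnored :: Int
    logTwiceIgnored = 42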


I'd really like to find "the low-level functional language" without "the fun stuff". Or at least it should have a simple and boring subset that is usable without "the fun stuff".

Something like Caml Light - a precursor to Ocaml - but with translation to C instead of bytecode, so it will be fast and will have comfortable integration with C libraries.


This is definitely an interesting thought. My thinking is that "the fun stuff" tends to help make the language more functional, so removing it leaves you with a more imperative language resembling a C clone with traits and type inference. Then if you want easier C interop you must remove traits and either remove modules as well or provide a standard method of mangling module names into function names. At that point I think you may as well use an existing "better C" language like Odin, Zig, Jai, C3, etc.


what makes Ante low level? Just from a cursory look over the website it seems pretty high level to me.


My definition of low level is no tracing GC, values are unboxed by default, and users still have control to do low level things (raw pointers, other unsafe operations) when needed, even if it is not the default.


> My definition of low level is ...

This is a very deep and mature assessment. I have high expectations for the future of Ante. Higher than Rust, in particular, provided it supports some analog of destructors.


The generally accepted definition of a low level language is a language that provides little or no abstraction from a computer's instruction set architecture. In actuality, C is a lower level functional programming language than Ante, because C functions are first-class citizens.

I like the lack of GC! What is the method for always incremental compilation? The benefits are obvious, but isn't it problematic to have two authoritative representations of the same source? It would be fantastic if you get that to work well!


Fwiw C is actually a huge abstraction over modern hardware.

It's a good representation of the PDP-11 and similar-era computers. It's also a good abstraction for modern microcontrollers.

C's view of the world is also a really poor fit for today's larger CPUs. Your computer has to jump through a ton of hoops to make itself seem C-like. C has no concept of vectorization, speculative execution, branch prediction, multiple cores, caches, MMUs, etc.

For basically any desktop/laptop/smartphone CPU, the C runtime is more like a little VM than an accurate model of the hardware underneath.


> C has no concept of vectorization

OK, but vectorisation is exposed by the instruction set, as instructions[1]

> speculative execution, branch prediction, ... caches, MMUs

How would you expose speculative execution & branch prediction? Cache behaviour is well understood so what's wrong with it? What would you do to expose MMUs to the programmer, and what would they do with it/how would it help?

[1] but aren't there libraries that expose it just fine in a portable way?


Speculative execution and branch prediction are adaptations to compensate for C's failings as a description of desired semantics.

But they are thoroughly ingrained into the technology, and will not easily be rooted out.


Your first sentence doesn't answer any question. The second is irrelevant. Please make some concrete suggestions.


There seems little value in "exposing" crap we would be better off not building in the first place.

The more expressive your language is, the less the runtime and hardware needs to guess about.


So what are the $%^&* semantics we're supposed to build into the language to avoid the need for speculation via OOO exec and branch prediction. Because as a guy interested in languages you might have something to teach me, assuming you have any idea what you're talking about.


Language design is hard.


Eh, isn't it a bit of a stretch to claim you can program functionally in C? You can mimic it, but without easy closures you can't follow _any_ functional programming patterns that are built on composing functions into new ones in expressions (e.g. binding the operation argument to a fold)

C is great, and it would be cool to see something that is similarly close to the instruction set but has functions as values


There is a book about it [0], you can download it here [1]. It uses SML to teach FP. I did the exercises on the train in Termux and Vim.

[0] https://www.semanticscholar.org/paper/Functional-C-Hartel-Mu...

[1] https://ris.utwente.nl/ws/portalfiles/portal/5128727/book.pd...

(edit) add download link and background.


A functional programming language is a programming language in which functions are first class citizens. C is such a language.

Of course there are many programming patterns that are available in more widely accepted functional programming languages but not in C. Whether a programming language is considered functional is not the same question as which patterns are supported in the language.


Your idea of what FP means is completely nonstandard.

For the record, there is not one accepted definition, but we can get close by saying that FP languages are those based on lambda calculus as their semantics core. And the primary mechanism in lambda calculus is variable capture (as done in closures).

C is based on the Von-Neumann model and has absolutely nothing to do with the lambda calculus. No reasonable PL expert considers it functional.


There's a lovely Perlisism for this: "A programming language is low level when its programs require attention to the irrelevant." Which is interesting in this context because it's clearly not a good fit for Ante, where it looks like many traditional features of "low-level" languages no longer require such attention! So maybe it's not Perlis-low-level; maybe it's "machine-oriented"?


> fun stuff of algebraic effects, lifetime inference, and refinement types

> I'd really like to find "the low-level functional language" without "the fun stuff"

But current stable Ocaml has neither of the "fun stuff" mentioned and compiles to native code. So isn't that exactly what you want?

It doesn't even need lifetime analysis because automatic garbage collection be praised.

And algebraic effects are awesome. Sure, they are not mainstream yet, but conceptually there is a strong parallel to the Common Lisp condition system, which is quite established. Not sure why you wouldn't want to have them. Also, it is still a long way until we see them used in user-facing stuff in Ocaml.


Ocaml doesn't compile to C. Sometimes having translated C source is a major gain.

I'm not an Ocaml implementation expert, but I suppose Ocaml's exception handling and garbage collector could be tricky to be aware of when one extends or embeds Ocaml.

To be honest, a colleague of mine did Ocaml embedding once. He made it possible to load natively-compiled plugins written in Ocaml into C-based software. And it worked. I didn't dig into the details, though. There was at least one bug regarding garbage collection (and it happens that I fixed it).


`ocaml-ctypes` currently supports "reverse-bindings" (making OCaml functions available from C) out-of-the-box and mostly takes care of the intersection you are talking about, so this already works quite well.

The only gain from emitting C code is portability to weird architectures that would be covered by C compilers but not the OCaml one, which is arguably a pretty niche use-case.


I thought OCaml was dropping support for 32-bit ARM. I wouldn't call that a weird architecture to want to run on.


For functional lang transpiling to C, you can look at Fennel - a Clojure dialect over C. However, if you want functional and statically typed, then you are probably out of luck.


This was basically my motivation when I designed the language I use for my day-to-day tasks where I program. I just wanted a ‘functional’ sort-of language that compiled to C.

Even though I’ll never release it, it was a rewarding theoretical and practical exercise.

As an aside, I really love how detailed and example-ful Ante’s site is. It looks like a ‘fun’ language and I hope the creator keeps working on it.


Maybe it is time for "coming out"? (Excuse me for the joke.)

Really, perhaps your language is The Next One.


I strongly doubt that; most people would look at it and say it was a concatenative language (it’s not, but that is a semantics issue) and immediately disregard it. The language, while being a modal dependently typed language, has no mandatory ‘safety’ features, and memory is largely manually managed. And after the blowback, complaints, and negativity posts about Hare on HN, I just don’t care to argue the fact that my language exists.


There is a group of programming language enthusiasts that would be interested in all sorts of languages.

Then there are people who will criticize just about anything for the sake of doing it. Just ignore those people and make a day more interesting for the people who share your passions.


That is the argument that has been made by my few friends who are interested in PLT as well. The other option I’ve considered is starting a blog or similar (although my son keeps suggesting a YouTube channel) and doing a longer series of posts concerning the language, type theory, and category theory inspirations present in the semantics of the language.

I would love to imagine I could be the next Andrew Kelley (creator of Zig), but I don’t know that I can actually be a language founder.


I get it that programming languages are quite different from most other projects in that normal projects can have a small user base and still be considered a success in solving a given problem, but with programming languages the mere fear of not being able to reach a sizeable community is enough to not even try out a new approach.

Nevertheless I think there is space for experimental/research languages. Just don't burden yourself thinking in terms of "this is the next great thing" and it will be fun, and perhaps your work and the ideas underlying it will even leave a mark, even if that doesn't necessarily mean that the particular incarnation of that particular language will ever reach adoption.

We not only stand on the shoulders of giants. We all stand on the shoulders of billions of dwarfs.


> I don’t know that I can actually be a language founder.

You will only know if you try. Things that are worth it take courage, methinks.


Why compile to C when you can use LLVM?


C can sometimes be an easier target since you don't have to learn LLVM's API. C is also more portable than LLVM since there is a greater variety of C compilers for a greater variety of architectures than LLVM targets.


C as an API changes far more slowly than LLVM, too. Using LLVM is a real maintenance commitment.


Thanks for sharing!

The dot product example gave me pause because map2 seems to be the same as zipWith. Does that exist in Ante? Without context I might have thought map2 was going to act as bimap. Take that for what you think it's worth :)

Also I might be having a brain fart -- but isn't the dot product in your example equal to 32?


map2 is indeed another name for zipWith. I believe I got that name from Racket, if memory serves. Compared to zipWith I like its symmetry with the 1-argument map. I also wasn't aware of bimap! I can't seem to find a function of that name online, though I did find the BiMap Haskell package; is that what you're referring to?

And yes, the dot product should be 32, thank you :)


Not OP but they meant https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-...

As you have the normal map function for Functors (using Haskell):

  > :t fmap
  fmap :: Functor f => (a -> b) -> f a -> f b
you can have bimap for Bifunctors:

  > :t bimap
  bimap :: Bifunctor p => (a -> b) -> (c -> d) -> p a c -> p b d
which specialised to pairs is:

  > :t bimap @(,)
  bimap @(,) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)


I was wondering why the dot product wasn't 32! Good thing I checked. Also, this seems like a cool new language. I need to learn a functional language, and I wonder how far this one will go in development.


I calculated the dot product in my head because it looked off. Then I calculated it in my computer. Then I came to the comment section before losing all confidence in what I thought was a dot product of two vectors.


As it turns out, you and Haskell are wrong with regard to that naming, while this language is correct.


Looks like a great start! Your attention to programmer ergonomics is admirable. Added to my weekend hacking reading list.


Very neat project! I noticed that Ante doesn’t have explicit region type declarations. As I recall, existing algorithms implemented for ML can sometimes infer very large regions which causes memory usage to balloon. It looks like smart pointers are part of the plan to address that possibility, but I’d love to hear more about your thoughts on memory management.


Correct, a key goal is to have no explicit region/lifetime annotations. There have been several papers on region inference after the originals by Tofte & Talpin, all attempting to refine the original analysis by inferring shorter lifetimes. First by analyzing when a region can be safely emptied and re-used, then by abandoning the stack discipline, etc. Unfortunately, none of these are viable in a real program in my opinion. Although they each infer shorter lifetimes in a few cases, the core problems of "collections will unify the lifetime variables of all elements in the collection" and "branching on a value and conditionally returning it extends its lifetime, even if the branch was not taken" remain unsolved.

An ideal solution to me needs to solve these problems. Since there is already a large body of research trying to address this on the static side and failing, I believe it needs to be solved with runtime checks. The specifics of these I'm still exploring, but it's worth mentioning they would only be necessary to tighten existing lifetimes, so one can envision annotations or compiler options to elide them if desired. Lifetime inference in MLKit (and I believe ante as well) tends to speed things up by turning more dynamic allocations into stack allocations, so there is some room there for runtime checks without making the result more expensive than the version with dynamic allocation, I believe.


Hey, thanks for calling my work "not viable!" (Co-author of "Better Static Memory Management" here)

Seriously, this looks promising and I'm very interested to see where it goes.


Ack, my apologies, I didn't intend to be so inflammatory! If memory serves, AFL was one of the first schemes (or perhaps the first?) to abandon the stack discipline to provide better inference in quite a few cases compared to TT. I remember the graphs in that paper giving me more hope for region inference to be a practical memory management scheme. Later, I believe the imperative regions paper improved on the AFL scheme a bit, at least in their single 'life' example, but without more test cases or graphs it is difficult to assess that paper in general or how much this case matters in practice. I'm curious what you believe the future of research for region inference to be, considering most of the papers written after TT's retrospective paper seem to have moved in the direction of explicit regions rather than inferred ones.


Entirely joking about being offended, that work was a long time ago. I think it was an interesting exploration but we were not able to get compelling results, I think mostly because we did not at that time have affine types (move semantics). Thus if you put something into a container in one place and took it out in another, it would have to infer the region that spans both, which might as well be the whole program in most cases.

These days I program in Rust and find Rust's approach to explicit regions to be a workable compromise, though reference-heavy types can get pretty ugly and hard to work with (and I'm pretty sure that variance of lifetimes is confusing to everybody who isn't a PLT theorist and some who are).

The approach I personally find most interesting is the Lobster language. There (and I'm probably oversimplifying) the semantics are reference counting, but you do analysis to remove a huge fraction of RC operations. I believe the Perceus work is similar.

I'm happy to chat anytime. Recent work has been using somewhat exotic types provided by Rust (associated types, existentials, lots of inference through product types) to represent UI. So far I've basically been using what Rust gives me, but it's interesting to imagine what changes to the language/type system might buy you. For example, there are a few places in the code where there are downcasts, but I suspect that with a sufficiently strong type system you could prove those downcasts are infallible.


I’m curious what the rationale is for not making regions explicit in the source code. It seems like a downside of region inference for a systems language is the unpredictability of the inferred regions.


I maintain a JIT compiler for a very high-performance Haskell-like functional programming language. The compiler automatically generates the minimum malloc/free calls, mirroring the hand-written code of an experienced C programmer. It works really well and the generated machine code was shown to be faster than hand-written optimised C++ code that people had been maintaining for years. The compiler code is unfortunately closed source and I have zero interest in writing academic papers, so I won't elaborate further. Just telling you to hint hint perhaps try something like that :) The execution flow analyser was a b** to get right but worth it. I also recommend checking out some of the recent LEAN 4 papers. Some of those papers are moving in the right direction.


As a person named Job I'm a little confused by the second example on the website, but that's probably my own interpretation bias :p

Joking aside, looks cool, good luck with the project!


I don't see a LICENSE file anywhere in the repo or the website. Nobody is allowed to use Ante?


> Nobody is allowed to use Ante?

:eyeroll:


https://docs.github.com/en/repositories/managing-your-reposi...

>However, without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work.


What do you think is more likely, that the author forgot to add a LICENSE file or that he actually doesn't intend for ANYONE to use the language he created? Give me a break


The original comment was asking if the current legal status is that nobody is allowed to use ante without acquiring a licence separately (which is true) not whether the author's intention is that nobody use ante (which we don't know, but it seems unlikely.)


I thought the original comment was simply a bug report: missing license file, with a snarky comment tacked on. I thought everyone assumed that, it being shown by the author here, it was the latter.


This is a really awesome project and I hope it goes far!


I avidly follow descendants of Scheme and Haskell (I figured out for example how to build Idris native on an M1 Mac), and I plan to follow Ante. I just spent $5k of lunch money on an M1 Ultra Mac Studio, to use as a math compute server. How do I keep it busy? Haskell. My "Hello world" parallel benchmark, using reverse search to enumerate the 66,960,965,307 atomic lattices on six atoms, took 14 hours in 2009. It now takes four minutes.

I'd love to start again with a new, small language. I love Haskell, but it's decades old, and one struggles to work with it without being torn apart by the gravitational tides of the black hole at the center of its galaxy. All jokes aside, Haskell really does reward a PhD in category theory, or the self-taught equivalent later.

Functional languages make parallelism easier, and Haskell has put particular effort into this: Adding a dozen lines to a program can keep the 16 performance cores of my Mac Studio running full tilt. I have yet to read any account of how someone else's favorite language makes parallelism easy, that is aware of the comparison with Haskell. If I thought there was a reasonable alternative, I'd jump at trying it.
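
For readers who haven't tried it, a minimal sketch of that "dozen lines" using Control.Parallel.Strategies (the `expensive` workload here is a made-up placeholder):

    import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

    -- `expensive` stands in for the real per-item work (hypothetical).
    expensive :: Integer -> Integer
    expensive n = sum [1 .. n * 1000]

    -- Evaluate the list in parallel, 100 items per spark.
    results :: [Integer]
    results = map expensive [1 .. 10000] `using` parListChunk 100 rdeepseq

    main :: IO ()
    main = print (sum results)
    -- build: ghc -O2 -threaded -rtsopts; run: ./prog +RTS -N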

Rust appeals because it replaces GC with modern control of allocation lifetimes. I love how Ante will also explore this. I use cyclic data structures; how one handles this functionally, and how one handles this using reference counting, is exactly the same problem.

Parallelism should be the first consideration, designing any new language in 2022. Parallelism is the reason I keep using Haskell, despite newer alternatives. Any language that runs on a single core is a toy. This is a plea!


I also have a computer with a ridiculous number of cores (32) that I want to keep busy.

I've found Clojure to dominate the ease of parallelism. For problems that are actually stupidly parallel the number of changes to your code is often measured in letters rather than lines. For example, you might change map to pmap or you might change reduce to reducers/reduce and change nothing else but now be fully leveraging all your cores. For problems that aren't stupidly parallel I feel like Clojure shines even more. Most languages didn't implement software transactional memory and encourage it as the default way to work with things that vary over time, but Clojure did. On account of that you can have a not massively parallel algorithm that would be full of tricky lock code in another language and still end up leveraging all your cores by doing something as simple as map or pmap, but not run into horrible issues.


try elixir! parallelism is child's play on the platform. plus it's functional without drowning you in type theory.


The syntax looks a little funky to me but it's still quite interesting. Is there a reason why you would make fn(1) and fn 1 equivalent? For me personally it makes readability worse and looks strange when chaining functions like in their last example on their landing page.

On mobile, horizontal scrolling through the code snippets triggers switching to the next snippet on my phone.


> Is there a reason why you would make fn(1) and fn 1 equivalent?

If your standard function call convention is just `f x`, but you also support parentheses for grouping/precedence, then `f (x)` automatically becomes possible.

It reminds me of an old Lisp joke, though:

    f x   -- too mathematical!
    (f x) -- too many parentheses!
    f(x)  -- just right!


> f x -- too mathematical!

:) yet we are very happy to use the same syntax in shell scripts or the shell itself.


I've heard this argument before, but this isn't true.

In shell, with f x y z, x y z are all arguments to f. It doesn't matter if f takes one argument; all are passed to the function.

With many functional languages this gets very confusing. I.e. what does `f f x` do? In shell I know for sure.

In the example f f x, it might be easy to parse. But in f x y z, any of x y z might be functions.


It's pretty easy to reason about that particular example if you have a type system. In a simply typed language, "f f x" is just a type error, so it doesn't do anything. In the polymorphic lambda calculus, the typing will force f to either be the identity, meaning that "f f x = x", or it will be a constant function, so you just replace "f f" with the constant.


Most functional languages parse a b c d e f as a(b, c, d, e, f), it does not matter what b, c, d, e, f are. Do you know any language where this is different?


No functional languages do that.

OCaml parses a b c as ((a b) c). In case the compiler can determine that a is a function taking 2 arguments, it will optimise the code so that it’s effectively a(b, c). But in general, that’s not possible, especially in the case where the compiler determines that a is a function with a single argument (in which case, its return value must be another function, which is in turn called with c) or when a is a first-class function (e.g. passed as an argument).


My toy FP language did. :) It’s perfectly possible to just parse a list of arguments and figure out currying in a later compiler stage. In my experience it even somewhat helps with producing nicer arity-specific error messages, which users might appreciate.


OCaml and Haskell parse `a b c d e f` as `((((a b) c) d) e) f`.


And while different than Algol-descended languages, I don't think that's particularly confusing. (Not that you were saying so, just continuing the conversation.) You can put together a confusing expression with it, but I can put together confusing things with the Algol syntax with not too much effort too. I've got the source code with my name on the blame to prove it.


I thought more languages did this but at least nix and ocaml do not actually behave like I thought.

In Ruby however it is a bit more ugly

  def f x
    x + 1
  end

  puts f f 1
> 3


I don't understand your objection, what output would you like to see instead?


GP's point is that while yes, we know since `f` has arity 1 there's no ambiguity, in general you might not have the arity of any given function fresh in your head, and therefore can't tell (in Ruby) just from looking at `f f 1` whether it means a single invocation of an arity 2 function, or two invocations of an arity 1 function


Ah! Right. It helps to have all functions be of arity 1 to disambiguate, yes.


Many counter examples to this, xargs for one.


xargs is not a counter example. It is not a shell builtin, it is a program that takes a bunch of string arguments like every other program a shell would call.

xargs -0 -n1 bash -c 'mv $1 ${1//.js/.ts}' --

Everything to the right of xargs is a string argument passed to xargs.


So an identifier representing a function? Isn’t that the same for most languages?


I guess the difference is that nesting calls (commands) is much less common in the shell, and (closely related) commands don’t really have a return value.


On the shell, it is all the other way around, so to use command substitution it is like this:

    f $(g x)
To pass the output of g x to f.


is it? In practice I find that my shell one liners are orders of magnitude more complex than what I would dare to write in any other 'proper' language:

   grep "hello ($(cat patterns.txt| tr '\n' '|'|grep ')$^)"  <(ssh other.host "cat ~/file.txt") |tee >(grep 'a' >As.txt) >(grep 'b' > Bs.txt)
[yes, gratuitous use of cat, sue me]


We’re not, but your point is still valid.


As an old lisp fan, I never got this. (f x) and f(x) have the same number of parentheses.

and the nice thing about (f x) is that the parentheses group f with x; so you have the whole call inside the (). Consistent and simple to understand.

vs e.g. print(f"a string"), where it isn't even clear that the "f" is a function call.


That's the joke. The number of terms doesn't change, and the last two have the same number of parens. The statements relate quantities of those things as though they're a problem, when in reality the "just right one" only changes the order slightly.

Hence the joke. It's one of those jokes that earns a loud sigh from me, rather than a chuckle.


> the nice thing about (f x) is that the parenthesis group f with x

The drawback is that they are put on the same level, whereas in most people’s minds the function is a fundamentally different thing from the argument(s). The “f(x)” syntax reflects that asymmetry.


What creates that distinction? In the lisp/FP world you quickly stop considering functions as separate entities.


The function represents the operation or computation you want to perform. The arguments represent inputs or parameters for that operation or computation.

Of course, theoretically you could also view the function as a parameter of the computation and/or the arguments as specifying an operation (in particular if those are also functions), but for most concrete function invocations that's not generally the mental model. E.g. in "sin(x)" one usually has in mind to compute the sine, and x is the input for that computation. One doesn't think "I want to do x, and `sin` is the input of that operation I want to do". One also doesn't think "I want to do computation, and `sin` and `x` are inputs for that computation". It's why you may mentally have a sine graph ranging over different x values, but you don't imagine an x graph ranging over different functions you could apply to x.


But even in high school, topics start to talk about functional equations (calculus, e/ln). I'm not sure the <function> vs <value> distinction doesn't come from the mainstream imperative paradigms and only that.


The distinction isn't between functions and values in general, it's between the function being called and the arguments passed to the function being called. The difference isn't in the things themselves, it's in the role that they play in the specific expression we're reading.


This argument is rather strange. Maybe for people who never interacted with different -fix notations? Human language binds concepts with arguments in all kinds of directions... I'd be surprised this is enough to annoy people.


Natural language isn’t precise and a lot is inferred from context. Exact order does often not really matter. In formal languages, however, you want to be as unambiguous and exact as possible, so it makes sense to use syntax and symbols to emphasize when elements differ in kind.

Incidentally, that’s also why we use syntax highlighting. One could, of course, use syntax highlighting instead of symbols to indicate the difference between function and arguments (between operation and operands), but that would interfere with the use of syntax highlighting for token categories (e.g. for literals of different types).


We also alternate between various positions in programming languages, so it's not a matter of formal precision.


Not sure what you mean by “alternate between various positions”.


Prefix for functions, infix in expressions, and postfix if you use Forth.


> (f x) and f(x) have the same number of parentheses.

That's the joke


I have heard that if you count { and ( as parens, a Java program for example has just as many parens as a lisp one.

A lisp paren can do both jobs: expressions and scopes.


> A lisp paren can do both jobs: expression and scopes.

Using different symbols for different purposes makes sense, it helps humans to parse correctly faster.


That's right, which is why the parenthesis is combined with a leading symbol like (let ...).

Mainly, you don't look at the parentheses when reading; you look at that let, and your eyes rely on indentation for structure.


Okay, but now you have to look at more things.

Also, the ending paren doesn't have anything to help you see what it is on its own.


You're not supposed to look for ending parentheses in Lisp; getting the right number of them is more or less your editor's job, and they are maximally stacked together like this: )))), as much as the given nesting and indentation permit. Given some (let ..., the closing parenthesis could be the third one in some ))))) sequence; you rarely care which one. If it's not matched in that specific ))))) sequence where you expect, then that's a problem.

)))) is like a ground symbol in a schematic:

    (+5V
      (+10V
        (-10V
          (INPUT3 ...))))
                     -----  local "ground" for all the above

    (different circuit)


That doesn’t seem very user-friendly. ;) Or at least that’s always my perception when programming Lisp.

As a side note, I believe that different people fundamentally have different programming languages that objectively suit them best, due to differences in their respective psychology and way of thinking and perceiving. It’s interesting to discuss trade-offs, but in the end there is no single truth about which language is better overall — it depends on the task and on the person. There’s no “one size fits all”. What’s valuable is to understand why a certain syntax or language might work better for certain people.


Just ignore the parentheses. Setting them to the background color may enforce that.


It does, but there are still many different purposes. In JS:

( can mean: function call, part of an expression to be evaluated first, regex capture group

{ can mean: scope [of ... loop, if, lambda, function etc.], object definition, string interpolation code i.e. ${..}, class definition

There are probably other things I am not thinking of.

The one that trips me up in JS is that code like x.map(v => {...v, v2}) breaks because the compiler sees the { as the beginning of a scope, not the object definition I intended.

The working solution is x.map(v => ({...v, v2}))

But x.map(v => v+1) is allowed.

I don't think the compiler could figure out what you meant because an object definition might be a lambda function body. For example { myvar } can be both an object definition, and a function that returns myvar.


Sure, why care about splitting algorithms into functions and naming them appropriately when you can just write it all in a 3k-line function?


That's obviously nonsense.

Java, one pair of parens:

    int x = 1;
    int y = x + 1;
    System.out.println(y);

Clojure, six pairs of brackets:

    (let [x 1 y (+ x 1)] (. (. System out) (println y)))


I wish we had stats about sexps (we shall call it the kinthey scale). As a kid, what you said was the first thing my brain caught on to. It compresses the number of things I had to remember: it's ~always (idea arguments...). We can now discuss interesting problems.


I had the same feeling when I first started using a lisp (which was Scheme, to work through SICP and the Abelson & Sussman MIT course). I was utterly entranced by the simplicity and uniformity. It was absolutely a factor in the way syntax works in my personal language (which I use for almost everything in my day to day). I really do agree that learning a lisp can truly expand the way in which a dev views and thinks about code.


Brown University's PLT lab also vouched for this approach. Their textbook starts with a description of sexps as syntax, saying that's the only parsing theory covered here.


It's a stupid joke.

The negative reactions that a lot of people have toward Lisp's parentheses are not because of the function call syntax but because parentheses are used everywhere else in Lisp's syntax.


Pity it doesn't include a line for:

   x f


That line necessarily comes after all the arguments about it.


x.f and x.f()


Those two function calls are the same thing in OCaml and Standard ML.


Coming from OCaml, F# uses the same syntax. Works very well.


> Is there a reason why you would make fn(1) and fn 1 equivalent?

For the same reason you'd make 1 + 2 the same as 1 + (2). Parentheses can be used to group arbitrary subexpressions.


If fn 1 is an allowed function call syntax then fn (1) should be synonymous, because 1 is expected to be the same as (1) except for a slightly different parse tree; but there might be good reasons to make fn 1 not a function call (for example, the implicit operator in a sequence of two expressions without parentheses could be string concatenation rather than function application).


> the implicit operator in a sequence of two expression without parentheses could be string concatenation rather than function application

You can design a language which uses `e1 e2` to represent any binary operation you like, but I'd argue that function application is more common than string concatenation, so it's more deserving of that syntax. Plus, it plays nicely with currying.
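
A quick Haskell illustration of that last point (nothing Ante-specific): because `f x y` parses as `(f x) y`, partial application needs no extra syntax.

    add :: Int -> Int -> Int
    add x y = x + y

    increment :: Int -> Int
    increment = add 1           -- `add 1` is already a function

    five :: Int
    five = increment (add 2 2)  -- parentheses only group the argument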


I’ve sometimes thought that a reasonable alternate choice would be:

  f g = composition of f then g
  x.f = f applied to x
  x.f g h = in regular notation h(g(f(x)))
Though slightly different precedence rules may be preferable.

I think the k in awk considered juxtaposition-as-string-concatenation to have been a mistake by the way.

Some other reasonable choices may be:

- disallowed syntax

- multiplication (which, for matrices, is a special case of function composition and application)

- inner join which can be seen a bit like function composition but for relations instead of functions

- sequencing (ie instead of ‘;’)


That's an interesting idea, but how does it work for functions of multiple arguments?

If functions are curried, then I suppose the syntax for `f x y` would be `y.(x.f)`, which maybe you could write as `y.x.f` if the associativity worked as such. But that means you have to provide your arguments in reverse order?

If functions are not curried, do you write `(x, y).f`?


Haskell does this. Basically, languages that follow the ML style have this syntax, including Haskell. You must not have experience with this family of languages, as it is very common and a huge part of functional programming.

A good number of "functional programmers" only have experience with JavaScript these days.


Close, I saw this when dabbling with Nim and I remember I found it confusing. Since I mostly write TypeScript and that is not a thing there, my post was mostly my ignorance speaking. I guess when one is used to it, it will not look confusing at all!


It seems the parentheses are only used to indicate precedence, like in Haskell, so you wouldn't really use them for `fn (1)`.


> Is there a reason why you would make fn(1) and fn 1 equivalent?

1 and (1) are isomorphic. A single term tuple can be converted to the single term, and vice versa. Having an implicit conversion doesn't seem too crazy.

The biggest issue I suspect would be confusion about the most idiomatic way, or a mix of styles in real-world code bases, that causes confusion or inconsistencies (increases cognitive load for the reader).


I don't think this applies in this case. The brackets here are used for resolving precedence only. They are a syntactic feature which are not represented in the language semantics.

Where brackets are used in some languages to construct tuples, you generally need special syntax to represent the case for 1-tuples, like Python where "(1)" is equivalent to "1" but "(1,)" is a 1-tuple containing a single 1 value.

Also in most FP semantics, x and the 1-tuple containing x are not equivalent so the mathematical isomorphism doesn't hold. The tuple itself could be undefined (bottom/unterminating or null, if the language has such a concept), or could contain an undefined value or could be completely well-defined. These three cases are not represented in the semantics of the unbundled value.


I'm not sure I like a single item tuple being equivalent to just the item. Can you ask for the length of a tuple? The length of a tuple with two 100 element lists would be 2, and if you looked at the tail the length would be 100.


I agree. It's better to have single-item tuples still require a comma.

(1) - the number 1

1 - the number 1

(1,) - a tuple with one item, which is the number 1

1, - a tuple with one item, which is the number 1


Right. In particular if you identify tuples with lists (which seems reasonable), you run into typing problems, because singleton lists/tuples suddenly have to be unified with the element type.


To be clear, I didn't say they were equivalent. They're equivalent up to isomorphism, not equal.


Looks very cool! Can somebody enlighten me what's happening in the Algebraic Effects example? Specifically this part:

    handle f ()
    | flip () -> (resume true + resume false) / 2.0
Does `handle f ()` call `calculation` and the `| ...` part "injects" the `flip` effect? I am also quite confused by the part following `| flip ()`. It somehow returns true or false with a probability of 50%? And why does this give you the expected value of the whole calculation function in the end?


Author here, a good way to understand algebraic effects is as "resumable exceptions." In this case the expected_value handler says to run `f ()` and, whenever that computation "throws" a `flip ()` effect, to handle it by resuming the computation with the value true returned for flip. The continuation continues as normal, and subsequent uses of flip are also handled until the computation finishes. Then we evaluate the rest of `+ resume false) / 2.0` and resume the computation (a second time!) in the same place, this time with the value false. Finally, we add both values returned and divide by two to get the expected value.

This way of computing expected value works because for each flip we essentially have a case-split: the value can be true or false with a 50-50 chance, so the expected value of the whole thing is 0.5 * the result of the true branch + 0.5 * the result of the false branch!
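
For anyone who wants the arithmetic without the effect machinery, here is a plain Haskell sketch of the same case-splitting idea (the `calculation` below is a made-up stand-in, not the one from the website):

    -- Average the two ways a single coin flip can resume.
    expectedValue :: (Bool -> Double) -> Double
    expectedValue k = (k True + k False) / 2

    -- A stand-in computation that flips two coins.
    calculation :: Bool -> Bool -> Double
    calculation a b = if a then (if b then 3 else 1) else 0

    result :: Double
    result = expectedValue (\a -> expectedValue (\b -> calculation a b))
    -- ((3 + 1) / 2 + (0 + 0) / 2) / 2 = 1.0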

This is more of a fun use of effects than a practical one. If you're still curious about effects, check out the full page on them here: https://antelang.org/docs/language/#algebraic-effects which includes some actually useful effects like State, Generators, looping constructs, parsers, etc.


Thanks for the explanation, I will definitely check out the docs on algebraic effects!


Explanation based on my familiarity with Koka:

This expression matches on the effect flip() and handles it with the code on the right-hand side of ->, which becomes the value of the function invoking the effect. In this case the handler runs the continuation for both possible values (resume true and resume false, i.e. the function was called once, but it will return twice, with true and false substituted in the place of flip() in the place which used this effect) and returns their mean.

I.e. each time calculation calls flip(), the effect handler will continue the function for both true and false and then take the mean of those two results. Comments explain that flip() should simulate a coin flip, which generally gives equal probability (1/2) to true and false. Each flip() in calculation was simulated with an equal number of true and false values, so it fits the expected distribution. Taking the mean of those values each time is just an expansion of the integral calculating the expected value, as per the definition of expected value.

Note that there is some recursion here. While the flip() handler is executing for the first invocation of flip(), another instance of the handler will execute for the second and third invocations (inside resume true). I.e. it calculates the expected value by taking the mean of the expected values of each branch created by a flip. When a branch doesn't have any more flips inside, its expected value is taken to be the constant value that this branch returns.


I don't fully understand it yet, but I think it resumes twice, adds the results, and divides by two.

the `| flip ()` is pattern matching on that effect term, I believe. So essentially: when you see `flip ()` inside anything in the argument of `expected_value`, capture the continuation there, run that argument once with `true` and once with `false`, add the results, divide by two.


If anyone is interested in other programming languages also written in Rust, there's a community-maintained list! https://github.com/alilleybrinker/langs-in-rust


This is cool! I really like the syntax for algebraic effects, which would be great for mocking. I also like the way that "impls" can be passed either explicitly or implicitly; the lack of global coherence seems like a reasonable tradeoff. Have you thought about using existential types to support dynamic dispatch (like Rust's "trait objects")?

That said, I'm a bit skeptical of the utility of refinement types (and dependent types in general). They greatly increase a language's complexity (by obscuring the type/value distinction and making type inference and checking more complicated). I'm not sure the benefits are worth it, particularly because of the so-called "specification problem."


Thanks! IMO it is very important that effects have an easy syntax to read/understand since they will both be used pervasively and be foreign to most new users. I do have an idea for "traits as types" which would cover static and dynamic dispatch: https://antelang.org/docs/ideas/#traits-as-types. So a use like `print (x: Show) : unit = ...` would be usable with static or dynamic dispatch, and more complex cases like `print_all (x: Vec Show) : unit = ...` would always use dynamic dispatch. Perhaps this may be too confusing though, and there are other considerations as well, which is why it's only an idea for now.
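
For comparison, a rough Haskell analogue of the dynamic-dispatch `Vec Show` case is an existential wrapper (a sketch of the general technique, not Ante's design):

    {-# LANGUAGE ExistentialQuantification #-}

    -- Each element carries its own Show dictionary, so dispatch is dynamic.
    data AnyShow = forall a. Show a => AnyShow a

    printAll :: [AnyShow] -> IO ()
    printAll = mapM_ (\(AnyShow x) -> print x)

    main :: IO ()
    main = printAll [AnyShow (1 :: Int), AnyShow "hi", AnyShow True]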

Your concern on refinement types is definitely valid and is something I've been thinking about. They are certainly useful in some cases like array indices or passing around certain predicates but whether these are useful enough to offset the implementation cost and brain tax is an open question.


What is the specification problem? I don't think I've heard the term before.


This looks really lovely, I look forward to following the maturation of Ante in the future. I've often thought that the niche of a general purpose low-level FP language was a promising space. The only other langs that fit in there are ATS[1], which is notoriously complex/difficult-to-learn, and Futhark[2], which is more GPGPU/scientific-computing specific.

We've got Rust, which is essentially a C-style lang that steals all kinds of goodies from the ML family; it's nice to see Ante as a kind of inverse to this: i.e. an ML style lang that borrows some of the nice bits from traditional imperative langs (and hopefully maintains their performance characteristics).

Looking through the language tour there already seem to be a plethora of very sensible/ergonomic design decisions here. The 'loop' and 'recur' keywords are a great feature, making a nice little functional nod towards while loops. As a long-time Haskell user the explicit currying initially turned me off, but after seeing a couple examples I can see how it's actually a very reasonable solution, and moreover the ability to curry out of order is really nice instead of having to use 'flip' or similar combinators (as a side note, the explicit currying reminds me a bit of APL's α and ω arguments in dfns, a feature I'd love to see pop up more). The paired tuples also seem like they'd be a pleasure to use; certainly a bit more flexible than tuples in other ML style langs. Making '.' the pipeline operator is also a smart bit of syntax, and I can see it being very accessible to OO programmers in that it looks (and acts) like the method chaining they're familiar with. Refinement Types seem like a good alternative to full on dependent typing (ATS has an analogous (and more general) proof system for ensuring things like array indices are valid (an essential feature in a low level FP lang), but it involves threading those proofs through your program which seems much more clunky than what's presented here).

Overall I'm really excited to see where Ante goes, it seems like a very pragmatic functional language, something the world definitely needs more of...

[1]: http://www.ats-lang.org/Home.html

[2]: https://futhark-lang.org/

EDIT: In regards to low level FP langs there's also Carp, Erik Svedäng's nifty little statically typed Lisp. It's like Scheme with a Rust-y memory model:

https://github.com/carp-lang/Carp


Author here, thank you for your interest! There are definitely a lot of small design decisions that add up in language design, from using `|` for match cases to avoid double-indentation, to the use of pairs over tuples, to how methods are resolved and chained. It is nice to have others appreciate the small things sometimes :).

I'll avoid posting it here but if you do want to follow ante's development the best place is on its discord which is linked on the github page.


Hello, I just wanted to point out that if I had not read comments here and had not looked for a second example linked to "Job", I would not have realized there were dots under the first example. You may want to make them more visible ^^;


Very interesting stuff! I didn't even think it was possible to implement a low-level functional language; I always thought a functional language requires a ton of abstractions.


Aside from Lisps being yet again an example of "Simpsons already did it.", there are also other examples and projects that go roughly into this direction. Like Roc[0]. Tim Sweeney[1] seems to be working on integrating functional programming into games development, which is highly competitive and very performance sensitive. John Carmack also has been advocating for using functional programming techniques in game development (partially).

Generally I think FP has a bit of a misguided bad rep when it comes to performance. What makes code slow is indirection that a compiler cannot reason about and work that doesn't need to be done. There are FP techniques that can help a compiler or CPU generate/transform efficient code and ones that can confuse them, so to speak. Needless mutation can sometimes lead to bad code as well (from a compiler or CPU perspective), because it solidifies control. Functional programming is fundamentally a constraint, and computers, interpreters and so on like constraints, because they can then go about shuffling stuff around without breaking those constraints, if that makes sense.

[0] https://www.roc-lang.org/

[1] https://en.wikipedia.org/wiki/Tim_Sweeney_(game_developer)


Not at all, there only need to exist enough low-level primitives (aka compiler intrinsics) to build the whole stack.

That is how Lisp Machines and Xerox PARC workstation OSes used to be built, or Mirage OS for a more recent example.

And I do consider Lisp functional, since when I reached university, the options were Lisp dialects; Miranda was fresh, and Standard ML and Caml Light were rather new.

There was yet to appear the mentality that OCaml / Haskell === FP.


It's certainly possible, although certain styles are more appropriate than others.

For example, ATS is ostensibly an ML-like language; but its "low level" examples look more like "C with stricter types" (e.g. its Cairo examples)

https://en.wikipedia.org/wiki/ATS_(programming_language)


At the end of the day what prevents functional languages from being a good fit for low-level programming is that effective low-level programming cannot be referentially transparent.

Might be my 2 cents, but I think Rust can hit a very sweet spot for functionally-leaning low-level effective programming.


I disagree; the reason almost all ‘functional’ programming cannot be said to be referentially transparent is the level of abstraction the designers take as a base. The work Greg Morrisett did (which included Robert Harper for some branches) on Typed and Dependently-Typed Assembly could be used as the basis for a ‘C-level’ programming language that is ‘functional’.

At a slightly higher level, if memory is explicit, and that would include input and output buffers and the like, then it is possible to make functions exist inside a ‘memory monad’, similar to Haskell’s state monad. Then any and all operations involving the memory used can be made referentially transparent.
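
A minimal sketch of what such a ‘memory monad’ could look like, assuming memory is modelled as a pure map from addresses to bytes (the peek/poke names are just illustrative):

    import Control.Monad.State (State, gets, modify, runState)
    import qualified Data.IntMap.Strict as M
    import Data.Word (Word8)

    type Memory = M.IntMap Word8
    type Mem a  = State Memory a

    peek :: Int -> Mem Word8
    peek addr = gets (M.findWithDefault 0 addr)

    poke :: Int -> Word8 -> Mem ()
    poke addr byte = modify (M.insert addr byte)

    -- runState (poke 0 42 >> peek 0) M.empty == (42, M.fromList [(0,42)])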

Now, the real question is would anyone want to program in such a style? I know I wouldn’t mind, it’s while I broke down and designed a personal programming language for my daily use. But it’s a matter of taste and tolerance.


> At the end of the day what prevents functional languages from being a good fit for low-level programming is that effective low-level programming cannot be referentially transparent.

I'm not sure I agree. "Straightforward" FP, e.g. a bunch of functions defining local vars and calling other functions, can be pretty hard to make low-level (it pretty much assumes a GC, and indirected closures; although tech like linear types can help).

However, the more abstract, point-free, pattern-heavy FP seems like a reasonable fit (e.g. pipes/conduit)


I can't see why not - in normal computing, there is I/O. On embedded it is also just "I/O", but a little more brutal than streams. Still pretty much the same though; at some point you must reference the outside world.

What am I missing?


Very interesting to see you essentially replace Tuples with Cons! I'll be following that development for sure. Coming from Scala, Tuples are entirely how you represent Product and then that has tons of downstream implications on case classes and constructors. This leads to situations like Scalaz creating NonEmptyList to get Tuple-like behavior out of List. So your approach has the potential to unify these similar but different types and I find that fascinating!


I'm confused how data structures work in this language, and there's no documentation about it as far as I can tell. Is this going to be a Rust-style vectors+iterators system? Is it going to use purely-functional concatenative structures? The first example you show invokes `map` and `sum` over an Array, but what is this code actually doing? Making a copy of the data in the array? Creating a mapping iterator?


Rust-style with Vecs+Iterators is the current style, yes. So the map+sum example creates an iterator rather than copying the array.

I'd like to remove iterators in favor of Generators (implemented with Algebraic effects) in the future though, since they're much easier to define. Actually switching over is on hold until effects are implemented and speed tests are done to ensure they're comparable in speed to iterators (which they should hopefully be due to monomorphisation of effects + handlers).


How does string interpolation work? In what context, exactly, are the placeholders checked and/or evaluated? How are missing or incompatible placeholder values handled? The semantics aren't obvious (for instance, how do you deal with string inputs, which would contain placeholders that cannot be checked in advance?).


String interpolation works by expanding to the concatenation of several strings. So a string like "the ${foo}." is expanded to "the " ++ foo ++ ".". There are some things I'd like to change about the current design. Namely it should probably defer to a StringBuilder of sorts, and it should possibly do it lazily so that interpolation can be used in log calls without worry of whether logging is enabled.

These are all checked at compile-time though: since interpolation can only be done inside string literals, we can check all uses of ${expr} inside literals and ensure e.g. that expr is convertible to a string. Since these are only in string literals, all placeholders are known in advance, so there are none that cannot be checked at compile-time. (A string like "foo \${" ++ "foo}" must escape the interpolation-begin ${ and is concatenated into the actual string "foo ${foo}" with no interpolation. Otherwise it would be very confusing having interpolation happen in unintended circumstances (and it would make string operations less efficient by having to check for this)).


So it's purely syntactic sugar around concatenating plain string literals and various expressions, it has no relation to print or other I/O, and in case a string comes from input (BTW, I don't see any mention of serious IO besides print for "logging") or from some computation it isn't subject to interpolation.

I don't think limiting string interpolation to enhanced string literals in code (leaving out generic strings) in order to allow static checking is a practically acceptable restriction. For example, logging frameworks meant for long-running applications tend to specify message templates in configuration files or databases, possibly hot-reloading them at runtime.


More or less, yes. I may later change it to some type like `Interpolated env` to represent a lazily interpolated string that interpolates the types in env. So "${1u8} and ${2}" would be typed as Interpolated (u8, i32).

I'd like to keep this orthogonal to IO or other specific use cases so that it is applicable to whatever use case users may have rather than tailored to existing ones. You're also correct that there is no real logging framework or anything of the kind except print. Ante is still in quite an early state and there aren't any substantial libraries to speak of. As-is, interpolation is really just some nice sugar to make common operations more ergonomic, like f-strings in Python.

I think applications with their own dynamic interpolation requirements should define their own interpolation to handle their specific needs. It would be unacceptable from a performance and predictability standpoint, for example, to load files into a string and have them automatically interpolated at runtime for all file loads.


How is this low level exactly?


quoting the github readme:

> In general, ante is low-level (no GC, values aren't boxed by default) while also trying to be as readable as possible by encouraging high-level approaches that can be optimized with low-level details later on.


with no GC, is there compile time segfault checking like rust?


Author here, that is the purpose lifetime inference is meant to serve. It automatically extends the lifetimes of `ref`s so that they live long enough. A key advantage is avoiding lifetime annotations in code, at the cost of some control, since the lifetime is handled for you. You can also opt out by using raw pointer types, though, and easily segfault with those.


Pointers and manual memory management apparently.


Low is relative :-D, my thought exactly.

IMO the only low-level language is assembly. Everything else is some form of abstraction. C/C++ and the like I tend to call lower level, since in 2022 they are closer to the hardware, and then sugar languages like Python, C#, JS, and the like I call high level.


Even assembly languages are abstractions.


Strangely, the language "below" assembly, Verilog, is a lot more abstract: it pretends to look like C while generating hardware, so writing it is more like figuring out how to trick it into doing what you want.


That's because an HDL is not a lower-level machine language; rather, HDLs are languages used to implement a machine that consumes a machine language.

Consider what happens when you implement an x86 emulator in Python: you're using a high-level language to implement a machine on a particular substrate (a simulation inside another machine). This simulated x86 CPU executes machine code, and you'd call that machine code the "native" or "lowest level" language with respect to that particular machine.

You can see how that choice of machine language bears no relationship to the language used to implement the underlying machine.


It's not because of anything intentional, it's because Verilog is poorly designed.

https://danluu.com/why-hardware-development-is-hard/

https://danluu.com/pl-troll/


It's not, or only partially. "To try to bridge the gap between high and low level languages, ante adapts a high-level approach by default, maintaining the ability to drop into low-level code when needed." says the website.


Author here, to me the lack of a pervasive tracing GC and values not being boxed by default are important for low level languages. That and maintaining the ability to drop down and use low level constructs like raw pointers for optimization or primitives for new abstractions are essential.


This looks like it could become a very interesting language. Unusually for a new language, I spotted zero red flags.

I did do a quick ^f for "destructor" and "drop"...

The intro mentions memory management and inferring lifetimes. How do I make something happen at the end of a lifetime?


Is there a reason not to force a linear typing system (at first) -- a la Mercury -- to simplify the lifetime analysis? Then in later versions it could be relaxed to affine typing, and subsequently to normal lifetimes?


Lifetime inference originates from region inference, which is actually completely unrelated to linear/affine/uniqueness typing. Uniqueness typing can definitely complement lifetime inference, but isn't really generalizable to it.


Can someone please explain what is an "uninterpreted function", in the context of Ante's refinement types? The Wikipedia article didn't help much (https://en.wikipedia.org/wiki/Uninterpreted_function).


Hi! An uninterpreted function is one where the only thing we can assume about it is that `(x = y) => (f x = f y)`. That is, if we give it the same input, we get the same result out. In the context of refinement types this can be used for, among other things, tagging values with some predicate. For example, if we know `good_index array 3` and `x = 3`, then we know `good_index array x`.


Looks interesting. How are closures implemented? Is it possible to use Ante without heap? Are function arguments passed by value?


Hello, author here! Closures are implemented with the environment as an extra parameter on the function type. So internally a function `i32 - i32 -> i32` (wonky function type syntax currently, with - separating arguments) which uses an environment of type String is represented as a pair of the function and its environment: `(i32 - i32 - String -> i32), String`. This is the same way C++ and Rust represent their closures.
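
For illustration, here is a rough Rust sketch of that lowering (not Ante's actual codegen; `Env` and `add_with_env` are made-up names): the captured String becomes an explicit environment passed as an extra argument, and the closure value is just the (function, environment) pair.

    // The environment a closure captures, made explicit.
    struct Env { s: String }

    // The closure body, taking the environment as an extra parameter.
    fn add_with_env(a: i32, b: i32, env: &Env) -> i32 {
        a + b + env.s.len() as i32
    }

    fn main() {
        // A "closure" is a pair of the function and its environment.
        let closure = (add_with_env as fn(i32, i32, &Env) -> i32,
                       Env { s: "hi".into() });
        let (f, env) = &closure;
        println!("{}", f(1, 2, env)); // 5
    }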

Function arguments are passed by value currently, though I may explore pass by move and other options in the future.

I hope for ante to be usable without a heap, though the `ref` type automatically handling lifetimes makes this more difficult. These refs compile to an equivalent of destination passing in C, but can require dynamic allocation if a function creates a ref and another function calls the first in a loop. I also plan on having an Allocate effect for whenever a function can allocate, so that it is trivial to handle this with your own handler that can do anything. This would be an easier alternative to Zig's approach of "pass the allocator everywhere manually," but there are still things I need to iron out, like how it interacts with the lifetime inference issues above, so it's still in the design phase.


dotproduct [1, 2, 3] [4, 5, 6] //=> 22

????????


4+10+18, so 32 base 10, or hex: 20, but in base 15 this actually works out.


This had me giggling, thank you


Whoops, typo. I'll update the website


Looks great!

A few questions (hopefully the author still reads it):

* Any plans to support arrow/monad comprehensions?

* Semi-related: when it comes to generators it might be worth considering making them cloneable (see https://github.com/pelotom/burrido)


(1): No support. Some monad or early-error-return sugar used to be considered, but effects cover most of the use cases of monads and are easier to use, so I plan on emphasizing them. As for arrows, I have to admit I've never actually used them in Haskell, so I don't know how useful they'd be.

(2): Thanks for the resource, I'll look it over :). I remember seeing a vaguely related note that multicore OCaml used to provide an `Obj.clone_continuation` function but no longer does. Of course, ante's algebraic effects design is rather different, so that may not apply anyway. Whether a generator is cloneable may depend on the generator: some generators are just functions that can be arbitrarily cloned, but others are closures whose environment may restrict cloning. Ante's clone story in general needs more work.


Noob question: how many PL books do I have to read to be able to design a programming language like this?

"7 programming languages in 7 weeks" doesn't seem to be enough. Could anyone recommend 1-3 PL books that nail all of these concepts?


RE lifetime inference: how does it infer lifetimes for mutually recursive functions that return refs to what would be local data? Does it automatically infer a dynamic stack?


Lifetime inference compiles to destination passing, so in most cases a single stack allocation in a prior function can be used. For your example of mutually recursive functions allocating in a loop, the destination would be a region that grows dynamically, like an arena allocator. Since refs are typed, ref elements of the same type will be allocated next to each other in memory.

You don't touch on it, but there are some more difficult cases with lifetime inference as well. Namely, when branching, the compiler must decide whether or not to extend a ref's lifetime. This, and a lack of granularity in container types, are known problems with region inference (they lead to more memory than necessary being used by assuming the longest lifetimes), and are things I hope to tackle.


I'm left wondering if the CSV example handles commas inside a string field surrounded by quotes.

The other examples are way too confusing for me, except the first one.


It does not! It was meant to be a rather simple example showing what using iterators looks like. Perhaps interesting to note that I'll likely be removing iterators in favor of generators, which are easier to use. With monomorphisation of effects it is my hope that they'll be just as efficient, but iterators will remain until tests prove that is the case.

As for the examples, they definitely showcase things that aren't present in most languages and can thus be confusing. That is on purpose, though, since ante for me is a language to experiment with novel features I find interesting. The second example, on lifetime inference, can be thought of as Rust's lifetimes but completely inferred: instead of issuing errors like "x does not live long enough" it will automatically extend the lifetime of x. So my hope is that it is easier to use, at the loss of some control (control can be regained by using other pointer types like `Ptr a`, a raw pointer used to implement other things on top of).
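
To illustrate the contrast (my example, not one from the Ante docs): the following Rust program is rejected with exactly that "does not live long enough" error, whereas lifetime inference as described here would accept it by extending the lifetime of x.

    fn main() {
        let r;
        {
            let x = 5;
            r = &x; // rejected by Rust: error[E0597]: `x` does not live long enough
        }
        // Lifetime inference as described would instead extend x's lifetime
        // so that this use is valid.
        println!("{}", r);
    }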


Interesting, we'll have to see how it develops in the future. I normally dislike languages without {}, but I suppose that is just preference.


I hate having to press Shift to get braces. Let's use square brackets which require no modifier key.


Just a question: are you considering the frequency of use of each symbol in that suggestion? If {} opens/closes functions or code blocks/scopes AND [] makes arrays, lists, or some other data structure, do you only want them switched where, idiomatically, there are fewer function definitions than instances of the data structure? Or is it an overall objection to any shift+key operator in basic syntax?


I don't code in C/C++.


Consider remapping your keyboard layout


I like the ideas!

Not a huge fan of the whitespace-delimited syntax though. I like me some curly braces.

Other than that though, it is super interesting!


what timing, just last night I was wishing for a low level ML style language with manual memory management (and/or a borrow checker). Definitely going to take a look!


Are there any benchmarks against OCaml?


Note to the website authors: the carousel with examples is not usable on mobile - trying to scroll the code swipes away to the next item. Would be better as just a series of example blocks.


It's broken on desktop as well. I was trying to select some example code to copy and paste into a comment here, but clicking on it just drags the carousel rather than letting me select text.


Carousels should simply be abolished. Just have paragraphs of text/image, one after the other.

Also, black text on a white background is easier to read.

And don't make your web design too complex for Firefox reader mode to work: if you do, that's a sign you're doing it wrong.


Ah, yes. The carousel has been a source of frustration, especially on mobile, when designing the website. Perhaps disabling swipe-to-scroll on the carousel on mobile would help, or defaulting to a dropdown for selecting the example on mobile instead.


Yet it's great on desktop. What I liked the most about it was that it doesn't auto-advance.

Generally, the entire site has a really nice design, at least on desktop. One of the prettiest I've seen.


Even on desktop I wanted to be able to swipe (scroll gesture) or use the arrow keys to advance the slides, but I had to click the little dot icons.


I was able to drag it sideways with the mouse.


On mobile, I just go to ⋮ then "Desktop site", much wider display.


Functional programming is always at odds with low-level programming because of heap allocation: you can't control it, because of immutability. Even a simple map operation does a heap allocation. How does Ante avoid this problem and give the user control of the heap?

The mechanism should be made clear in the introduction, as browsing the documentation doesn't make it clear to me.


The plan is to give users control through the Allocate effect, which can be handled in any way desired as long as it returns some memory. It is similar to Zig's approach of "pass the allocator everywhere," but easier to use since it is an effect: effects are automatically passed around where necessary, propagated via function signatures, and can be inferred.

The specific design is still in question though. For one, an effect like Allocate will be almost ubiquitous throughout each function, so care is needed to not clog up signatures too much. There are a few potential solutions here if you're interested, from including Allocate within the row type of a larger effect like `IO = can Allocate, Read, Write, ...` to encouraging effect inference on functions rather than manual annotation.
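
For contrast, here is a rough Rust sketch of the explicit "pass the allocator everywhere" style the Allocate effect is meant to replace (`Arena`, `alloc_str`, and `greet` are made-up names for illustration); with an effect, this parameter would instead be threaded implicitly and could be inferred in signatures.

    struct Arena { buf: Vec<u8> }

    impl Arena {
        // Copy a string into the arena, returning its offset.
        fn alloc_str(&mut self, s: &str) -> usize {
            let at = self.buf.len();
            self.buf.extend_from_slice(s.as_bytes());
            at
        }
    }

    // Every function that may allocate takes the allocator explicitly...
    fn greet(arena: &mut Arena, name: &str) -> usize {
        arena.alloc_str(name)
    }

    // ...and so does everything that calls it, all the way up the stack.
    fn main() {
        let mut arena = Arena { buf: Vec::new() };
        let _offset = greet(&mut arena, "hello");
    }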


Here the `Allocate` effect is just a syntactically-lightweight way of doing dependency injection, right? Similar to a Haskell type class. I don't see why you'd need to make it an algebraic effect, as it does not need to mess with control flow AFAIK.


It could fail.


In the case of map operations, Ante uses Rust-style iterators over collections. In Rust, iterators are just as fast as loops.

The author said somewhere else in the thread they want to switch to generators implemented with effects eventually.
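
A minimal Rust illustration of that point: this map + sum builds no intermediate collection; the iterator adaptors compile down to a plain loop.

    fn main() {
        let xs = [1, 2, 3, 4];
        // Lazy adaptor + fold: no Vec is ever materialized.
        let total: i32 = xs.iter().map(|x| x * 2).sum();
        println!("{}", total); // 20
    }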


This makes sense. So basically, if you structure your program so that a list is never materialized, you can achieve zero-allocation programming.

However, for things like sorting this can never be fully achieved: you cannot sort a generator. Every iterator needs a materialized (list) counterpart, or certain computations become literally impossible.
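
For example, in Rust, sorting forces collecting the lazy iterator into a Vec first; a quick sketch:

    fn main() {
        let it = (0..5).rev().map(|x| x * 2); // lazy, nothing allocated yet
        let mut v: Vec<i32> = it.collect();   // materialize in order to sort
        v.sort();
        println!("{:?}", v); // [0, 2, 4, 6, 8]
    }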


Why, why, why with the significant whitespace? That makes it a non-starter for me.


Age of "var, val" is over. Now age of "significant whitespace". If seriously, seems like indent syntax is default to go nowadays. Python has way too big influence on language designers.


I'm not a fan of Python (it handles significant whitespace somewhat poorly). Cases like mixed tab/space indentation and single-line-only lambdas are Python-specific problems, for example.

I chose it mainly because I like the style and I haven't found it to be an issue in practice yet, especially with flexible rules for line continuations. The biggest detriment to me is the lack of auto-formatters for indentation.

Edit: I'll add a point I haven't seen discussed before about functional languages using the `f a b` call syntax specifically. Without significant whitespace you cannot really have semicolon elision with this syntax, since almost any line may be interpreted as a function application of the next line. E.g. `a = b + c` and `foo 32` as subsequent lines would be interpreted as `a = b + (c foo 32)`, hence why OCaml requires semicolons. For some people semicolons are fine, but they're just noise to me :)

There's more detail on the specific significant whitespace scheme used on the website here: https://antelang.org/docs/language/#significant-whitespace


Regarding single-line-only lambdas, I think that isn't an intractable issue with the grammar.

Guido van Rossum basically said parsing them is hard, and he doesn't like lambdas in the first place, so it won't happen. [0]

Similar to the "tail call optimization encourages dirty FP nerds" logic. Terrible decisions IMHO.

[0]: https://www.artima.com/weblogs/viewpost.jsp?thread=147358


Agreed. Other whitespace-sensitive languages like Haskell, F#, and Ante handle multiline lambdas just fine.


You claim significant whitespace helps mitigate "goto fail" errors. To me, indentation syntax allows more human error during refactoring: when moving code blocks around, you can easily misplace whitespace, and the compiler/editor has no way to help you.


This is definitely an issue, but isn't one I run into often. Just as when pasting code in Rust I need to make sure it is inside or outside an if statement, I likewise need to ensure pasted code in ante is inside/outside the if statement by checking its indentation relative to the previous line. The compiler can help somewhat here, just not a lot: if your indentation does not match an existing indentation level you will get an error, since you cannot randomly start indent blocks in your code. I do think the biggest detractor of indentation-sensitive syntax in general is the loss of auto-formatting for indentation, though.


That and Haskell.


ML languages did it first.


You forgot to say why or provide any rationale.


OK.



