I'm currently working on a typechecking system for Elixir (three packages: https://github.com/ityonemo/selectrix for the compile-time interface, ../mavis for type logic, and ../mavis_inference for performing type inference on compiled code).
The chief difficulties I am running into are:
1) How to do intersections on functions and maps (this is in the article):
f(integer) -> integer is not usable as f(any) -> integer
2) not all of the Erlang bytecode instructions are well documented
3) I still haven't figured out how to deal with circular dependencies at the module level.
Erlang makes this easy in these ways:
1) compiled module structure is simple and the way branching works makes the mechanics of type inference trivial.
2) Erlang's concurrency makes dealing with, for example, in-module dependencies trivial
3) there are only a handful of types in erlang so, aside from functions, their subtyping relationships are very straightforward.
Elixir makes this easy in these ways:
1) Typechecking structs can be more straightforward than records.
2) Writing a composable type logic (Mavis) is super easy with Elixir's protocols
3) I built (and am refining) a very easy DSL that makes creating and testing logic for the BEAM opcodes less of a hair-tearing experience: https://github.com/ityonemo/mavis_inference/blob/main/lib/ty...
I'm going to shill Gradualizer[1] in here as well.
1. I think you have to see typing as helpful information. Like:
-spec add(integer(), integer()) -> integer().
add(A, B) -> A + B.
Technically, nothing prevents add/2 from accepting number() or float() arguments; the integer()-only spec over-constrains the implementation.
2. I was surprised how much there actually is, but also disappointed that a lot of errors were present until OTP 23. Gradualizer also overrides some definitions [2]
3. Haven't encountered a problem with this yet with gradualizer, would need to test.
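On point 1 above: a looser spec would cover everything add/2 actually accepts; a sketch of the same function with a spec over number() instead:

```erlang
%% add/2 works for any combination of numbers, so a spec over
%% number() describes its real domain rather than over-constraining it
-spec add(number(), number()) -> number().
add(A, B) -> A + B.
```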
Yeah, I had thought that development of Gradualizer had halted; glad to see it hasn't. Part of why I wanted to do Mavis is that I wanted to make it easy for developers to make their own custom overrides and ship them with their libraries, especially for Elixir protocols[0]. Still haven't figured out how to architect that, but I have an idea.
> I think you have to see typing as helpful information
Indeed. Selectrix will allow you to chain your inference with fallbacks, so, maybe the first level is a fast cache, second level is checking files for user overrides, third level is dialyzer specs, fourth level is inference.
I think it should also be possible to create an addon module with Mavis that lets you type inbound messages for GenServers, given a set of constraints (all possible calls and casts are present in the module itself as an API)
[0] it's a big problem that in Elixir, the Enumerable protocol destroys typing information, so I want to make a typing override for Enumerable the "example" for the typing override system.
I think it did partially stop. I'm the only one committing right now, but some of the original devs will review PRs and are EXTREMELY helpful. Great bunch.
The syntax and integration look great here, I like it.
Although not entirely the same thing, Dialyzer always seemed strange and quirky in its design choices, in the same way other parts of Erlang syntax can be a bit odd sometimes. The kind of thing I assume inspired Elixir in the first place (although in general I like Erlang as a language; it's small enough and my complaints are few, it just feels syntactically quirky sometimes).
But I never felt inspired to use Dialyzer largely for these reasons. Never seemed to “fit” the syntax entirely and made it quite ugly IMO.
Which is why I’m happy to see these modern attempts, they look much better.
Good luck with the project. It should be an excellent learning process regardless.
> 3) I still haven't figured out how to deal with circular dependencies at the module level.
That is usually dealt with by distinguishing between interface and implementation (cf. dependency inversion principle), which should eliminate the circularity. Disclaimer: I know very little about Erlang and Elixir.
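In Erlang terms, the interface half could be a behaviour, so modules depend on a callback spec rather than on each other. A minimal sketch with hypothetical module names (two files shown together):

```erlang
%% store.erl -- the interface: a behaviour both sides depend on
-module(store).
-callback put(Key :: term(), Value :: term()) -> ok.

%% cache.erl -- calls through the behaviour; the concrete module
%% implementing store is injected at runtime, so there is no
%% compile-time dependency cycle between cache and that module
-module(cache).
-export([write/3]).
write(StoreMod, Key, Value) ->
    ok = StoreMod:put(Key, Value).
```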
You start with f having type 'a0 -> a1', and g having type 'a2 -> a3'.
Looking at the body of f you infer that a0 = a2 and a1 = Foo | a3, after the substitution f has type 'a0 -> Foo | a3' and g has type 'a0 -> a3'.
Looking at the body of g you infer that a0 = a0 (whew!) and a3 = Bar | (a3 | Foo). Now, the last one is tricky, and here is the point where you can easily get your type inference engine to diverge, but the (least) solution is given by a3 = Bar | Foo (because A = A | B iff A is a superset of B).
So, after the final substitution you have 'a0 -> Bar | Foo' for f and 'a0 -> Bar | Foo' for g, generalize 'a0' as appropriate.
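For concreteness, a hypothetical pair of mutually recursive Erlang functions with exactly this shape:

```erlang
%% f/1 : a0 -> a1 and g/1 : a2 -> a3 before solving
f(stop) -> foo;    % foo flows into f's result: a1 = Foo | a3
f(X)    -> g(X).   % f passes its argument to g: a0 = a2
g(done) -> bar;    % bar flows into g's result: a3 = Bar | a1,
g(X)    -> f(X).   %   i.e. Bar | (Foo | a3); least solution Bar | Foo
```

Both come out as 'a0 -> foo | bar', matching the fixed point above.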
Interestingly enough, the article doesn't say anything about propagation of types, which is the crux of the type checking and what makes any typing system useful or useless, obnoxious or helpful. Consider the example from the article:
% a good old list length counter
count(A=[_|_]) -> count(A, 0).
count([], C) -> C;
count([_|T], C) -> count(T, C+1).
What is the type of count/2? Arguably, it's "(list, any | number) -> any | number", which means that count/1 has type "list -> any | number". Well, that's bloody pointless, isn't it? What you probably want is "(list, any) -> any | (list, number) -> number" for count/2, so you could give count/1 its useful type "list -> number", but intersection function types are notoriously difficult to infer or typecheck, even with manual annotations.
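For what it's worth, Erlang's spec syntax can at least express such an intersection as overloaded contract clauses (whether a given checker makes good use of them is another matter):

```erlang
%% two overloaded contract clauses form an intersection type:
%% callers passing a number() get a number() back; all other
%% callers only learn that some term comes back
-spec count(list(), number()) -> number()
         ; (list(), any())    -> any().
```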
I'd imagine that in this context (i.e. one where a 1-arity function initializes a tail-recursive 2-arity function) the 1-arity function is the only one that'd be exposed outside of the module, in which case you can know with reasonable certainty that only things within the current module will ever call it.
If you really do need to export the 2-arity version, you could always do a meaningless addition to force it to typecheck to a number, like so:
count([], C) -> C+0;
count([_|T], C) -> count(T, C+1).
Which should then result in the following:
erl> c(test_count).
{ok,test_count}
erl> test_count:count([], bang).
** exception error: an error occurred when evaluating an arithmetic expression
in function test_count:count/2 (test_count.erl, line 8)
Or alternately, use guards (though in the context of BEAM-level introspection I haven't the slightest idea if this would produce an equivalent check):
count([], C) when is_integer(C) -> C;
count([_|T], C) when is_integer(C) -> count(T, C+1).
Which then gets you a somewhat clearer error message:
10> c(test_count).
{ok,test_count}
11> test_count:count([], bang).
** exception error: no function clause matching test_count:count([],bang) (test_count.erl, line 8)
Yes, if you look at my reply to the sibling comment, you'll see that if count/2 is unexported, TypEr (apparently, that's the proper capitalization) infers the types with non_negative_integer().
In fact, I've just looked at the original TypEr paper [0], and wow, they use exactly this example in section 6: "One might even jump to the conclusion that success typings are unnecessarily general and, as such, quite useless. However, notice that this success typing succinctly captures all possible applications of length_3 which will not result in a type error. [...] Still, we also find the situation sub-optimal and we will improve on it as explained below", and then they use exactly your proposal, recognizing that non-exported functions are only called from the exported functions, and perform dataflow analysis to constrain the types even further.
Yeah, I mean that it will properly identify that the only time you can do that is when you have an empty list. The final spec will be (in elixir syntax):
(nonempty_list, number -> number) | ([], any -> any)
It should also be able to correctly infer that the count/1 function must be (nonempty_list -> number).
I did say "should be able to" without regard to any of the existing type inference engines =D I'll be sure to make this an explicit test in Mavis Inference, which is the one I'm working on. https://github.com/ityonemo/mavis_inference/issues/47
I'm watching Gleam[1] for my 'I want to learn Erlang but don't want to deal with having to do runtime type checks, so I wish there was a Statically-Typed Erlang' dreams.
Thanks for sharing. To save people a click: the author of the tweet is opposed to adding a type system to Elixir, not the BEAM; he seems to see projects like Gleam in a positive light.
Wow, that's really exciting. I spent some time over this summer working on a toy project in Elixir and really enjoyed the language, but the bigger it got the more I felt the lack of all my familiar TypeScript tools. I look forward to seeing this soon.
I really want to like Erlang, because the priorities and USPs of the language are super compelling for a lot of use cases. I tried to kick the tires on it a couple years ago, and what I found is that the error messages were just so vexing; it took a while to understand what they meant, even in toy examples. Maybe it gets better with time, or there is tooling to help with this, but it seemed a bit behind the standard that has been set by more modern programming languages.
It's true it can feel cryptic at times. The good news is improving error messages and ease of use is a major focus area in next year's release. This PR from a few days ago looks like a very nice improvement in that direction: https://github.com/erlang/otp/pull/2849
I've had so many projects I've wanted to use Erlang with, because it seems a nearly perfect choice until I do more research. I really miss things like the bitstring syntax when stuck working in most other things.
The scoping punctuation is very easy if you think about it. I begrudgingly accept “I don’t like the syntax” as a reason to not use Erlang, but there’s nothing challenging about it. It’s a much smaller syntax than Elixir.
The punctuation rules are very similar to English: comma means you’re not through with your thought yet, semi-colon means you’re introducing a new clause, period means you’re done with the entire thought.
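A small illustrative function showing all three:

```erlang
sign(N) when N > 0 ->
    io:format("checking~n"),  % comma: this clause's thought continues
    positive;                 % semicolon: a new clause of sign/1 follows
sign(0) -> zero;              % semicolon: one more clause coming
sign(_) -> negative.          % period: the entire definition is done
```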
>The punctuation rules are very similar to English: comma means you’re not through with your thought yet, semi-colon means you’re introducing a new clause, period means you’re done with the entire thought.
Agree, at least when looking at code statically. It's a bit of an inconvenience when editing, though. For example, re-ordering lines in a function, or adding a new line at the end: that leaves a full stop at the end of an intermediate line and/or a semicolon at the end of the function. Compiling/running that without fixing it (assuming you didn't spot it) can lead to cryptic compiler errors.
I find this hits me a surprisingly large number of times. Maybe it's just the way I write code. But it is an annoyance (for me anyway).
To be fair, any decent editor/IDE will highlight the error. None that I'm aware of has an auto-correct though.
It's a minor annoyance in the great scheme of things. Erlang remains one of my favourite languages. Writing REST services as OTP applications on top of Cowboy* is a very rewarding experience and generates rock-solid, highly scalable, production-quality binaries.
*Webmachine was even better though sadly now not actively maintained.
Just to clarify: Values in elixir are still immutable.
The core team will insist that variables are immutable but bindings are mutable, but in my mind variables are their bindings (there is not a third category distinct from the value and the lexical binding).
I think there is a huge debate as to whether this is good or bad. I think, though, you really really want to have mutable variables in the REPL. In your static code it's trivial to check for variable rebinding and be sure you're not doing it (credo makes this easy to set up).
> The core team will insist that variables are immutable but bindings are mutable
The documentation on the Elixir website refers to it as rebinding, and I don't see any references to bindings being considered mutable.
Rebinding means you've created a new binding with the same name. The original isn't being mutated, it's just not accessible anymore because of the scoping rules.
My point is I don't know what that means. The variable is the binding; changing the binding by throwing away the old one is not distinct from mutating it, because the binding "isn't a real thing, just conceptual", just as the variable isn't "a real thing"; unlike a value, which is actually a thing sitting in a memory slot in your VM. If you can give me an example of where there is a meaningful distinction between the variable and its lexical binding, I am happy to be enlightened.
> A mutable variable is a memory slot that can be mutated, though.
No, that's a mutable value.
The variable is the lexical representation of a value. If you change which value your variable binds to, you have changed the representation of the variable; you have mutated it. The underlying values in memory have not changed, they are immutable.
> The variable is the lexical representation of a value. If you change which value your variable binds to, you have changed the representation of the variable; you have mutated it. The underlying values in memory have not changed, they are immutable.
I completely agree with you that values and variables are separate things. However, mutable variables are still memory slots that can be mutated, while immutable variables are not. The memory slot of a variable usually lives on the stack, and contains a pointer to a value that lives elsewhere in memory.
I think the difference will be clearer in a language that distinguishes between introducing a variable and setting a variable.
var foo = "abcd"; // introduce a new variable binding
foo := "xyz"; // set a variable
Here's an example of what you can do in a language that has mutable variables that you cannot do in a language that only supports shadowing bindings:
var strings = ["str1", "str2", "str3"];
var foo = "abcd";
for (string in strings) {
foo := string;
}
println(foo); // Prints out "str3"
In this case, foo itself is a memory slot that can be mutated, independently of the strings, which are immutable values. foo is a mutable slot in memory that can be updated to contain different pointer values.
Now, if you only have shadowing, you end up with this:
var strings = ["str1", "str2", "str3"];
var foo = "abcd";
for (string in strings) {
var foo = string;
}
println(foo); // Prints out "abcd"
Each iteration of the loop is introducing a new binding with the name foo, but it is not changing or removing the original binding of foo. It's just that within the loop, only the local binding is visible.
I think part of the confusion around Elixir is that it only supports introducing new bindings, but the syntax looks like it's setting the value of a variable.
Another example that will hopefully help clarify:
var foo = "abcd";
var bar = "xyz";
println(bar);
That example is nearly the same as this (and in terms of what the program does, it's identical):
var foo = "abcd";
var foo = "xyz";
println(foo);
The only difference is that in the second case, the first foo binding isn't visible after the second one has been introduced. At least conceptually, they both still exist, but because of scoping rules, there's no way to access the first one.
I believe an example of where binding differs from assignment is: if you define a function which references a binding in the outer scope, then create a new binding with the same name in the outer scope, the value inside the function does not change:
a = 1
f = fn -> a end
f.() # returns 1
a = 2
f.() # still returns 1
This feels like flawed thinking to me. I know what you're saying, but claiming that variables are mutable would lead me to believe we could have things like counters. I would say it's best to think of variables as the memory address of their binding. We are allowed to rebind the name. That's how I see it, at least.
That's not at all the traditional definition of variable. A variable is the code-lexical representation of a value in memory. (Think "variable" in algebra, which is the formula-lexical representation of an abstract value).
Anyways we can't have traditional counters, because of 1) lexical scoping rules and 2) value immutability.
Outside of the famous built-in process model/BEAM (plus the subsequent ‘crash first’ style of program design, which adds resiliency), the pattern matching is what makes Erlang one of the best languages, and the one thing I wish every new language copied.
You kinda get it in Haskell and Rust with the maybe style pattern matching. Rust in particular makes good use of it.
But it’s one thing I miss the most not writing Erlang/Elixir for a living... so far. It makes functions and data handling into these clean trees instead of conditionals (not to start a language flamewar, but boo Golang’s conditional obsession, which seems to be the polar opposite in many ways for error handling; or at least it used to be, I haven’t tried it recently).
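As a small illustration of the "clean trees" point, each result shape gets its own flat clause instead of a nested conditional (hypothetical example):

```erlang
%% dispatch on the shape of the result; no if/else chain needed
handle({ok, Value})      -> {processed, Value};
handle({error, timeout}) -> retry;
handle({error, Reason})  -> {failed, Reason}.
```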
There’s a lot to like and learn from in Erlang. Static typing will give it wider adoption IMO. And I’m happy to see so many developers working on it.
They are not equivalent, as Erlang/Elixir uses structural typing whereas Haskell/ML uses nominal typing.
In Erlang you can define the following:
equal(X, X) -> true;
equal(_, _) -> false.
In Haskell, the following does not compile:
equal x x = True
equal _ _ = False
instead, we need to write something like
equal x y | x == y = True
| otherwise = False
A subtle difference, but in some cases the structural semantics really do make patterns a lot cleaner (shorter without sacrificing readability).
I think that this style of pattern-matching semantics is a consequence of Erlang's roots in Prolog, which works similarly (and not only checks for structural equivalence but even does unification when necessary).
I replied to your post on twitter but I highly recommend Gradualizer[1].
Discovered it last year and started using it. The refinement portion handles unions and guards very well. It's not perfect but I'm integrating it in our codebase with some interesting successes. I'm hoping more people use it on their codebase and can find any bugs they have with it so it can be improved further.
I'd recommend that anyone who has trouble with the perceived outdatedness of Erlang give Elixir a try. It runs on the Erlang virtual machine, is very modern, and is a pleasure to code in.