> My experience with Swift is we tried to make a really fancy bi-directional Hindley-Milner type checker and it’s really great because you can have very beautiful minimal syntax but the problem is that A) compile times are really bad (particularly if you have complicated expressions) and B) the error messages are awful because now you have a global constraint system and when something goes wrong you have to infer what happened and the user can’t know that something over there made it so something over here can’t type check. In my experience it sounds great but it doesn’t work super well.
I feel like B) is very well known in the programming language community. I certainly heard people talking about it at least ~20 years ago. A) should be obvious in the presence of overloading. I guess the type inference algorithm becomes part of the assumed semantics of the language, and it's difficult to change after the fact.
Yeah, my impression is that B happened earlier in Haskell.
1. Let's create a super advanced type inference system so that you can write Haskell like a dynamic language but have it be static.
2. Oops, now when you make a mistake, it propagates through your program and causes a type error somewhere completely different.
3. New code style rule: you may omit type signatures on local functions, but always place them on top-level functions so that they serve as a barrier to incorrect type inferences.
What those people miss is that the productive part of dynamically typed languages isn't that you don't have to write the types, but that the types used by dynamically typed languages are extremely ergonomic and fit in many different contexts, so you rarely get type errors.
Trying to write untyped dynamic code in a statically typed language thus just results in frustration. Those languages were made to work statically and to throw a lot of type errors; they are horrible as dynamic languages.
> What those people miss is that the productive part of dynamically typed languages isn't that you don't have to write the types, but that the types used by dynamically typed languages are extremely ergonomic and fit in many different contexts, so you rarely get type errors.
This isn't true at all. I write a lot of Python and I don't think I've ever had a duck-typing happy coincidence that wasn't an error. No, it's exactly that I'm only depending on the aspect of the type that I need at the call/use site (so I can update types while maintaining those contracts, and I don't need to update any signatures anywhere).
That is outrageous. It also cannot be blamed on Hindley-Milner. While HM in theory has exponential worst-case time, the constructs that trigger it in OCaml and Standard ML are far more complex, carefully constructed, and never occur in practice.
OCaml does have +. for floats to make type checking easier, but Standard ML does not. Also, if Swift has operator overloading, why isn't it instantly clear that the RHS is an int, which can be assigned to a double?
This just looks like a performance bug in the type checker, and nothing that is inherent to HM.
People have been complaining about this issue for 10 years. Presumably if there were an easy fix they would have fixed it by now. They painted themselves into a corner somehow with HM, polymorphic literals and function overloading. Haskell doesn’t have function overloading (except for type classes) and OCaml doesn’t have polymorphic literals.
It seems like it’s not only function overloading (which, as you point out, Haskell does too by way of typeclasses) but also the implicit conversion between Int and Double. OCaml, Haskell, and Rust all require you to convert between them explicitly so they don’t need to figure out whether each 1 in the expression is an Int or a Double.
Am I seeing this correctly? Is HM + polymorphic literals + implicit type conversion the cause of Swift’s exploding compile time in such cases?
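For concreteness, here is a sketch of the kind of one-liner that has been widely reported to choke Swift's solver (timings vary a lot by compiler version, so treat any quoted numbers as illustrative):

    // Each integer literal is polymorphic (it could be Int, Double, CGFloat,
    // and so on via ExpressibleByIntegerLiteral), and each + and unary - is
    // overloaded across the numeric types, so the number of candidate
    // typings grows multiplicatively with every added term.
    let a: Double = -(1 + 2) + -(3 + 4) + -(5)

Note that even the explicit Double annotation doesn't save it: the annotation only constrains the top of the expression, and the solver still explores the overload combinations underneath.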
Ok, but then they should explain this particular case better. The RHS can be figured out just by choosing the overloaded operators.
In C++ terms, if you have int operator+(const int& x, const int& y), then (1 + 1) is not ambiguous and can be selected fast. Same for unary minus etc.
The Swift devs should then blog about this example and explain step by step what is going on. If the literal "1" can be both an int and a float, that of course would be insane. Is that what you meant by "polymorphic literals"?
Wow, that is crazy. For me (Swift 5.10) that single line is taking 11 seconds. Meanwhile, I have a 16,000 line app (importing AppKit, SceneKit and more) that compiles in 8 seconds.
I've been doing a lot of SwiftUI stuff lately and the compilation times are pretty crap even for simple applications. Change one line and it's a minute or two of compilation again.
Then there's this fucker for whenever you make a programming error involving types:
>The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
Basically, your code has a type error somewhere in it, and you need to figure out where and how it's wrong.
The devx is terrible. I came back to native iOS and SwiftUI to upgrade some of my old apps after working in Flutter for the last six months, and I've been astonished at how slow and clunky it is. Runtime performance can also be quite bad if you do things in a straightforward way, and optimizing things is not very straightforward.
If you're just doing UI changes, I think you can do pretty fast iterations using just the Xcode preview canvas. And while some light edits actually build in seconds, it does feel like it needs that minute compilation surprisingly often when doing fast iterations.
In my personal experience, no. That doesn't mean it's not a weakness of the language, but in practice you very rarely write a real expression with more than one or two type inferences in it. And when you do, you can just add explicit typing.
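For example, a minimal sketch of what "add explicit typing" buys you: the annotation pins the literals to one type up front, so the solver doesn't have to consider the other numeric overloads (and it can even change which overload wins):

    let ratio = 320 / 50            // literals default to Int: integer division, 6
    let ratio2: Double = 320 / 50   // both literals become Double: 6.4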
Personally I believe HN could do with some enforcement of the rule against self-promotion.
(Edit: I think this is simple enough, just use HN's existing canonical link algorithm and see if the comments for a particular destination exceed the desired rate.)
Be warned that V is infamous for over-promising and under-delivering. None of the features which sound interesting actually work, and there's no sign that anyone working on the language has any idea how they'll make them work.
I'm not arguing with the perpetrator of a scam about why his scam is a scam. There have been complete take-downs of the claims made by the V language. I'm 100% sure you've seen them as well. I have nothing more to say.
What the hell is up with HN people's fascination with V? HN is the only place I ever see people who have drank the V kool-aid. Why is the HN crowd seemingly so gullible? It's the same thing with cryptocurrencies and AI and NFTs; if there is a tech-adjacent hype train, you can bet there's constant spam from HN commenters who have bought it hook, line and sinker.
And therein may lie the source of my frustration: HN is full of not only gullible people who fall for these scams, but also the very perpetrators of these scams, using HN as a tool to generate hype.
There has been a lot of ink spilled about how V is not as advertised. I am not going to repeat everything here, but here are some articles you can have a look at:
The same good old five-year-old article that claims V's networking uses system("curl"), complains that V doesn't run on every single Linux distro on release, uses debug builds with a slow backend to measure performance, and complains about V using git/make/libc and even electricity.
The 2022 article is about type checker bugs that were fixed years ago, and it makes false claims like the string.len one.
First sentence in the reddit comment:
> V initially made some promises that seemed completely unrealistic (automatically translating any C or C++ program to V)
The fact that these things were lies at the time should frame any reading of any current promises made by the V project. I have written the project off and am not aware of its current status, but I believe that its recent history (that article from 2022 is not 5 years old) should frame anyone's reading of current promises made by the project.
To bystanders who are interested in V: I recommend that you read the articles (especially the most recent one from 2022) and alex-m's response here, and decide for yourself which side you find the most trustworthy. I have nothing more to add and will not respond further. Goodbye.
Yes and please run the examples from the 2022 articles to verify that these type checker bugs have all been fixed.
(Not that having type checker bugs makes a language a scam in the first place.)
@mort96
It's very unfortunate that you make strong claims like "None of the features which sound interesting actually work, and there's no sign that anyone working on the language has any idea how they'll make them work."
This is an article by a guy who calls himself a "V hater", and the stuff from the Discord screenshots wasn't even addressed to him.
What are the lies? Please list them here, I'm genuinely interested. Bugs in experimental coroutines, a new WIP feature not even mentioned on the home page yet?
I mean, the whole article gives a huge number of ways in which you have lied about the language, and the whole internet is full of "V haters" who seem to be able to give solid evidence that it's a scam.
I'm sorry, but this is reading like a paranoid, conspiracy-level denial. There is a reason why everyone hates your product and organisation. It's not some conspiracy against you. It's that you lied, and it sucks.
They are using integer literals, not Ints. Any type can declare the ability to be represented by a given kind of literal. Double can be represented by floating-point literals or integer literals (since Double, the type, can represent integers, the category of number).
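In Swift this is the ExpressibleByIntegerLiteral machinery; a minimal sketch (MyMeters is a made-up type, purely for illustration):

    // Double already conforms to ExpressibleByIntegerLiteral in the standard
    // library, which is why `let x: Double = 1` type-checks. Any type can opt in:
    struct MyMeters: ExpressibleByIntegerLiteral {
        var value: Double
        init(integerLiteral value: Int) { self.value = Double(value) }
    }

    let a: Double = 1      // the integer literal becomes a Double
    let b: MyMeters = 1    // the same literal syntax, a different type

This is also why the literal 1 on its own has no single type until the solver picks one, which feeds the search-space problem discussed above.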
Unfortunately SwiftUI heavily loads the type system in order to achieve its terse view builder syntax. So builds are really slow and the compiler very often just refuses to compile things and offers zero suggestions for fixes. I hate to say it but I think Apple made some fundamental mistakes here that are going to be hard to undo.
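A hedged illustration of what "loads the type system" means here (the rough type in the comment below is an approximation, not something dumped from the compiler):

    import SwiftUI

    struct ExampleView: View {
        // The inferred type of `body` is a deeply nested generic, roughly
        // VStack<TupleView<(Text, some View)>>. The @ViewBuilder result
        // builder and the opaque `some View` hide it from you, but the
        // solver still has to work all of it out, plus any literals and
        // operators inside the builder closure.
        var body: some View {
            VStack {
                Text("Hello")
                Text("World").padding()
            }
        }
    }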
So I manage a fairly large iOS project and had never really taken the time to properly validate compile times. I just started playing around with it and found some nice little surprises.
    let horizpadding = self.collectionView.frame.width - ((5*50)+(4*25))

    Expression took 1299ms to type-check (limit: 500ms)
Now obviously this expression could have been simplified long ago into a single constant, but the values were written out to help others understand what that constant meant: (5 elements that are 50px wide) + (4 gutters that are 25px wide).
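For what it's worth, the usual workaround (a sketch; I haven't timed this exact variant) is to pin the literals to one concrete type so the solver never considers the other numeric candidates:

    let elementWidths: CGFloat = 5 * 50   // 5 elements, 50px wide each
    let gutterWidths: CGFloat = 4 * 25    // 4 gutters, 25px wide each
    let horizPadding = self.collectionView.frame.width - (elementWidths + gutterWidths)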
> the user can’t know that something over there made it so something over here can’t type check
To the Swift developers: just add source positions to your type AST, in addition to the term AST, then you'll know where a type has come from. It lets you give error messages like: expected type A (line X) but got type B (line Y).
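A minimal sketch of the idea (this has nothing to do with the real Swift compiler's internals; it's just an illustration of the shape of the fix):

    // Attach a source position to every node of the type AST, not just the
    // term AST, so a mismatch can point at where each type was introduced.
    struct SourcePos { let line: Int }

    indirect enum Ty {
        case int(SourcePos)
        case double(SourcePos)
        case function(param: Ty, result: Ty, at: SourcePos)

        var pos: SourcePos {
            switch self {
            case .int(let p), .double(let p): return p
            case .function(_, _, let p): return p
            }
        }
    }

    func mismatch(expected: Ty, got: Ty) -> String {
        "expected type from line \(expected.pos.line) but got type from line \(got.pos.line)"
    }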
Hindley-Milner type checkers perform well in Haskell and OCaml. I don't think this type system can be blamed entirely for Swift's problems.
That's where Dart and Flutter shine. Sadly, people discussing Swift/SwiftUI, Kotlin/KMP, and Dart/Flutter often only have surface-level comments that do not address potential issues like these.
Dart's static type system has been designed to avoid such problems.
Not once have I had, or heard of, a performance issue related to the Dart type checker. That's also one of the reasons why hot reload always works as expected.
I started using Swift after the last WWDC and really fell in love with the language. It's really elegant and powerful at the same time.
I just tried the compiler flags, but the slowest expression is only 3 ms in my 1014-line project. Still, it's very helpful to see where my slowest expressions are. I think I will set the threshold to 1 ms and avoid slow expressions altogether.
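For anyone who wants to try the same thing, the flags being referred to are presumably the long-standing frontend warnings (thresholds in milliseconds); add them under Other Swift Flags in Xcode's build settings:

    -Xfrontend -warn-long-expression-type-checking=100
    -Xfrontend -warn-long-function-bodies=100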
How can it be a good idea to have a time limit for the compilation of expressions in a compiler?
Doesn't that mean that your program can compile fine in the morning, and fail to compile in the afternoon because you have an HD YouTube video playing in the background?
Or the same program could compile fine on your computer, and fail to compile on less powerful CI servers ?
OCaml is really fast to compile when I play with toy projects. It's a fairly simple type system compared to, say, TypeScript (I'm not familiar enough with Swift to know how it compares to OCaml in terms of type system complexity). I'd prefer the simpler language if it means faster compilation.
... covers 80% of the cases where you'd like type inference. I'd even argue it's way more than 80%.
Surprisingly, this is what C++ actually went for, and `auto` works just fine. Same in TypeScript: make a best-effort guess from the value, and fall back to `any` when no value or type is given.
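In Swift terms that is roughly "local inference only": types flow one way, from the initializer to the name, and function boundaries keep explicit signatures, so inference never has to cross them. A small sketch:

    let count = 42                // inferred Int from the literal's default type
    let message = "slow build"    // inferred String

    func total(_ xs: [Int]) -> Int {    // the signature is a firewall:
        xs.reduce(0, +)                 // inference stays inside the body
    }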
Having to write out a type every now and then costs me way less time than waiting for the compiler to finish playing minimax against itself in the latent space of all possible programs or whatever.
My cell carrier (Telenor) has apparently identified this site as malicious and is preventing me from visiting... now I can disable the filter for my account and visit anyway, but the owner of the site may want to check that out
Sadly, no. It just says: "You're about to go to an unsafe web site! Nettvern from Telenor has therefore blocked this web site to protect you from digital threats." ("Nettvern" is the name of this "feature", and it's enabled by default)
Just last night I was watching a video on Jai from Jonathan Blow, and he was showing off how he could compile a 50k+ line game demo from scratch in under 0.5 seconds, with plenty of room left for optimizations.
Which really makes me question whether language design is going in the correct direction. Are Swift programs less likely to contain bugs, or are they easier to debug? I know that many of Swift's creators are incredibly experienced programming-language experts, so how do they justify this insane compilation tax?
Jai is a lot slower at the moment than it was in 2018, and feature-wise it is closer to C than to Swift; the two languages serve different purposes.
Swift compile time is spent mostly on LLVM stuff, and it isn't much slower than other modern LLVM-based languages (Rust, Julia, etc.); of course, that doesn't count the very specific complex type checks that are super slow.
I think the justification is that outside of pathological cases, it allows writing clean code without too many type annotations, and the compiler just figures out everything for you.
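For example, a small sketch of the happy path, where inference keeps everyday code annotation-free and still fully static:

    let names = ["Ada", "Grace", "Barbara"]   // [String]
    let lengths = names.map { $0.count }      // [Int]
    let total = lengths.reduce(0, +)          // Int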