While interesting, every time I take a look at the Swift forums I fear that Swift is becoming even more bloated. The community seems to want to add every language feature and their grandmother. Introducing a different object model means either breaking changes or, at least temporarily, yet another model/feature.
Swift went from a language that a beginner can understand and learn in a reasonable time to something where you feel you periodically need a CS degree to follow code. I'm waiting for online guides on which subset of Swift we should actually use.
Every WWDC, I spend the next two months just learning the new language features, only to forget them until the next project.
> you periodically need a CS degree to follow code.
I have a CS degree, and the issue is not so much being able to follow the code, but wanting to.
Roughly 99% of the issues the Swift designers appear to care incredibly deeply about are things that have never, in my now almost 40 years of programming, popped up on my radar as something I would be even marginally concerned about.
And that's not because I am generally unconcerned, quite the opposite. Heck, I've been working on my own language for over a decade despite not really being all that much into languages, simply because I strongly believe that things need fixing, badly.
I think it's this (type of) disconnect that is at the heart of the trouble with Apple software development these days.
I think that a lot of the work on languages and techniques stems from a distrust of programmers. Most companies seem to think their programmers suck, and take a highly defensive posture in their tooling and process.
I started seeing this in the 1990s, with Taligent (their style guide was a fun read), but I think it predates that.
The 1990s also were the times of software that crashed "all the time" and had security vulnerabilities so large that they nowadays would be called open doors.
One can also see all that work on languages as trying to create a language where programmers get help preventing such issues, so that they can focus their attention on making stuff users want and that also produces code as performant as what expert programmers could write in the languages of old.
That’s not distrust, but the realization that being vigilant about buffer overflows and data races is a part of a programmer’s job that (maybe) can be automated without sacrificing performance.
The kind of stuff discussed in this article, for example, can help compilers move data around less, without introducing potential concurrency issues.
Now, whether this accomplishes that? I wouldn’t know.
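To make the data-race half of that concrete, here's a minimal sketch (my example, not from the article) of the kind of vigilance Swift now automates with actors:

```swift
actor Counter {
    private var value = 0
    func increment() { value += 1 }
    func read() -> Int { value }
}

let counter = Counter()

// Outside the actor, the compiler forces serialized, `await`ed access;
// a direct, potentially racy read of `value` simply does not compile.
Task {
    await counter.increment()
    print(await counter.read())
}
```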
> The 1990s also were the times of software that crashed "all the time"
Still a problem. I need to hard-restart my machine multiple times per day because of two developer tools that spawn orphan processes all day long. It suddenly became a problem when the M1 came out, so I assume that the fault actually lies in Rosetta 2.
I'm not sure that Swift was ever super beginner friendly. I'm thinking of protocols and how they mix with generics and associated types, as well as "class vs struct" ref/value semantics, in particular.
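For anyone who hasn't hit that wall, a minimal sketch of the classic stumbling block (hypothetical names; the exact diagnostic has changed across Swift versions):

```swift
protocol Container {
    associatedtype Item
    func item(at index: Int) -> Item
}

// For years, this line was a compile error along the lines of
// "Protocol 'Container' can only be used as a generic constraint
//  because it has Self or associated type requirements":
// let c: Container = someContainer

// Beginners instead had to discover generics first:
func firstItem<C: Container>(of container: C) -> C.Item {
    container.item(at: 0)
}
```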
But, anyway, I do agree that Swift is a mess now. Personally, I think the last salvageable version of Swift was 3. That's not to say that everything added since 3 is bad, but rather that they've done much more harm than good to the language since then.
Examples:
* Swift used to not have a Result type. Rather, you were supposed to mark fallible functions with the `throws` keyword, and the compiler would force callers to handle the possible failure. However, that didn't work with so-called "escaping" callback functions that are passed as arguments and then held to be called at a later time, because there may be nobody around to handle the failure when it's called. (This is as opposed to functions that are marked `rethrows`, which guarantees that they execute any callbacks immediately, and therefore can be passed `throws` functions/closures.) So, then they added the Result type. The reason this sucks is that now there are two ways to define a fallible function: either mark it `throws` or make it return `Result`. What's even more infuriatingly inconsistent is that a function marked `throws` does not specify the type of the `Error` that it throws; however, the `Result` type is actually `Result<T, E>` and does specify the `Error` type (first sketch after this list). So, which is it? Are we supposed to care about the specific error types or not? Of course people will come up with some rule of thumb or convention around when to use which, but I think it's clear from how the language evolved that this was more or less a design back-track.
* Swift has an unapologetically imperative syntax. Almost nothing is an expression, and even handling null/nil involves imperative constructs like `switch`, `if let`, and `guard let` blocks. That's fine. It's not my favorite syntax style, but whatever. Until Apple decided that super-imperative code isn't that elegant for writing UI logic. So, they tacked on this God-awful "result builder" syntax/API/feature, with annotations and arcane syntax/semantics that are completely out of place in the rest of the language (second sketch below). I feel the same way about property wrappers and dynamic member lookup. They are totally out of place with how the core of the language works.
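A minimal sketch of the first point (the names are mine, not from any real API):

```swift
import Foundation

struct NetworkError: Error {}

// Spelling 1: `throws`, which erases the error type entirely.
func loadSync() throws -> Data {
    throw NetworkError()
}

// Spelling 2: `Result`, which forces you to name the error type.
func loadAsync(completion: @escaping (Result<Data, NetworkError>) -> Void) {
    completion(.failure(NetworkError()))
}

// Callers then handle the same kind of failure in two different ways:
do {
    _ = try loadSync()
} catch {
    print("sync failed:", error)
}

loadAsync { result in
    switch result {
    case .success(let data): print("got \(data.count) bytes")
    case .failure(let error): print("async failed:", error)
    }
}
```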
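And a sketch of the second point, using the familiar SwiftUI case (Apple platforms only):

```swift
import SwiftUI

struct Greeting: View {
    var body: some View {
        // No commas, no `return`, no visible function calls combining
        // the two Texts: a @resultBuilder rewrites this block behind
        // the scenes, with rules unlike anything else in the language.
        VStack {
            Text("Hello")
            Text("World")
        }
    }
}
```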
Before Swift 3, the language had the very stupid increment/decrement operator pair.
In C these operators, paired with pointer arithmetic, let you write either incredibly terse yet correct implementations (e.g. strcpy as a one-liner) or, more often in practice, conceal horrible defects where your code is hard enough to read that nobody spots the mistake. If it's the 1970s where you are, your CPU might have an increment instruction, and your compiler might not be smart enough to spot that `x += 1;` is also an increment. So when C was invented this wasn't crazy at all.
C++ inherited them from C, for whatever that excuse is worth in the 1980s.
But in a modern language you should either abolish them entirely or, if you can't bear to do that, neuter them enough that most of your programmers won't manage to blow their whole foot off when they try to use them. Swift chose to abolish them. Go keeps an increment operator but you can't use it in expression context, so that way it doesn't have pre/post-increment.
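Concretely, the Swift 3 change (SE-0004, "Remove the ++ and -- operators") boils down to this:

```swift
var x = 0
// Swift 2 accepted both statement and expression uses:
//   x++
//   let y = ++x
// Swift 3 removed the operators entirely; the only spelling left is:
x += 1
```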
So, to me that's pretty obviously an improvement in Swift, even if it's incompatible.
Well, just going to throw my rule of thumb in there.
I tend to use the Result type in asynchronous code and `throws` in things that execute immediately.
I find Result much cleaner than the old style of callbacks, which tended to take two optionals: the successful result or the error, or worse, both or neither.
I won’t deny it can be abused to make things worse; I complained to a supplier about making a throwing Combine publisher.
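For anyone who didn't live through it, a sketch of the difference (the names are mine):

```swift
import Foundation

struct FetchError: Error {}

// The old convention: two optionals, four possible combinations,
// two of which ("both set" and "neither set") should be impossible.
func fetchOld(completion: @escaping (Data?, Error?) -> Void) {
    completion(nil, FetchError())
}

// Result makes exactly one of success/failure representable.
func fetchNew(completion: @escaping (Result<Data, Error>) -> Void) {
    completion(.failure(FetchError()))
}
```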
> I tend to use the Result type in asynchronous code and `throws` in things that execute immediately.
Sure, but then you have to have a crystal ball to know which functions will or won't ever be called asynchronously. It drives me nuts that a function should have to know how, when, or where it's going to be called.
It doesn't help that working with Result is generally pretty cumbersome (flatMap only gets you so far: you'll either end up with deeply nested (flat)Map chains or a bunch of switch-statement noise).
So, for a while I used to just always prefer returning Result for everything, since it was guaranteed to "work" for both async and sync operations. But now I've gone the opposite direction. I use `throws` for all my named functions because they are so much more ergonomic to "compose", and I just wrap the call in a `Result {}` closure/lambda when I have to pass it as an async callback.
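Roughly like this (a sketch with made-up names; `Result { ... }` is the standard library's `init(catching:)`):

```swift
struct ParseError: Error {}

// Named functions stay `throws`, which composes nicely at call sites:
func parse(_ text: String) throws -> Int {
    guard let value = Int(text) else { throw ParseError() }
    return value
}

// At the async boundary, wrap the call once:
func deliver(_ completion: @escaping (Result<Int, Error>) -> Void) {
    completion(Result { try parse("42") })
}
```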
It is weird. I think both actors and the upcoming ownership model (for Swift 6.0) are important. But they are huge to consume (hard to understand, with a big footprint in the compiler).
Maybe at some point we can have a Swift 7.0 that demotes classes to a special type of struct (which you cannot do without the ownership model), like Arc in Rust, and that could get rid of some bloat.
I see two issues as the language evolves: 1. the type system still needs some work; parameterized extensions and the non-existential/existential type differences are the two major issues I have. 2. get rid of Foundation on non-Apple platforms. The community should really say Swift 7.0 is the "done" version, like Go 1.0 (or Julia 1.0): we will do fixes, small type-system QoL improvements, and standard-library improvements, and have no plan for 8.0 any time soon.
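To illustrate the two type-system gripes (my sketches; the exact spellings depend on version, since `any` and `some` in these positions are Swift 5.6/5.7-era):

```swift
protocol Shape { func area() -> Double }

// Existential: boxed and dynamically dispatched.
func totalArea(_ shapes: [any Shape]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}

// Generic ("non-existential"): statically dispatched. The two
// spellings look similar but don't always interchange cleanly.
func area(of shape: some Shape) -> Double {
    shape.area()
}

// Extensions can be *constrained*...
extension Array where Element: Equatable {
    func allEqual() -> Bool { allSatisfy { $0 == first } }
}

// ...but can't introduce their own generic parameters (a
// "parameterized extension"); something like this is not valid Swift:
// extension<T> Array where Element == (T, T) { ... }
```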
They mention C++ quite often; it seems like they plan to become more than an Apple-first language. If Swift had been designed to be cross-platform first and then had optional, but well-supported and official, "ObjcInterop" and "ApplePlatform" modules, it could have taken a serious portion of Go's and Rust's market share. But as it is, nobody outside the Apple ecosystem is even considering it.
As to "Val", there is inout (and sinkset) which Herb Sutter also suggested for C++ to replace all the &ref, const ref&, *ptr, and value parameter mess which to a degree Rust also inherited.
The default `let` parameters (const &ref) are also what Carbon defaults to. And with explicit copies there are now destructive moves, as in Rust. They write "Our goals overlap substantially with that of Rust and other commendable efforts, such as Zig or Vale" - a wasted opportunity to mention "Rust" in the title here!
I just think the fun keyword is silly and the wrong width; just use fn already! But I like any language where reference and value semantics are not hidden behind compiler magic.
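For a Swift comparison point (my sketch), `inout` is the nearest existing analogue, and it already keeps the mutation visible instead of hiding it behind a reference:

```swift
// `inout` passes a value in and out, and the caller must mark
// the mutation explicitly with `&` at the call site.
func double(_ x: inout Int) {
    x *= 2
}

var n = 21
double(&n)
print(n)  // 42 - no hidden reference captured anywhere
```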
You might be surprised. Check out Vapor [1] and Soto [2], which are built on Apple's SwiftNIO (non-blocking I/O) [3]. It's actually a very nice ecosystem. All the tools are in place to build web servers, Lambdas, etc.
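For a taste, roughly what a minimal Vapor 4 app looks like (from memory; check Vapor's docs for the current boilerplate):

```swift
import Vapor

var env = try Environment.detect()
let app = Application(env)
defer { app.shutdown() }

// A plain String return is encoded as the HTTP response body.
app.get("hello") { req in
    "Hello, world!"
}

try app.run()
```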
I think the point they are trying to make is that unless someone already has history as a Mac/iOS app developer, they are in practice not even considering Swift when doing a project that targets Linux, regardless of how good the libraries are or are not.
If we exclude the aforementioned group, then adoption of Swift for non-Apple platforms seems like an almighty bust (close to nonexistent?) compared to the lofty goals the Swift team were mentioning in the early days.
I agree. I've assumed that once Lattner left as Swift lead, Apple basically decided that Swift would be Apple-only. That's the only reasonable explanation for them adding Combine, result builders, etc. into the language.
For server and system code, Swift is mostly competing with Rust and Go, as people consider fast, compiled, type safe languages. If you look at Docker downloads, Swift has 10M, Rust has 50M, and Go has 1B+.
Compared to Go, yeah, it's nowhere close. But compared to Rust, I wouldn't quite call it a bust, and could see teams choosing it for the easier learning curve.
So yeah, it's a relatively small niche at this point, but I also think it's fair to say the effort to make Swift a contender on the server has been more than just iOS developers being forced to make web apps.
One of Apple's cultural touchstones has been that it can focus; that it knows when to say "no", even to could-be-great ideas that genuinely excite them ([1] example of Jony Ive retelling how Steve Jobs taught him this).
There's little evidence of this discipline when it comes to Swift: got a great idea? got people excited? Breaking change? In it goes!
They might want to add some motivation for this. I have no clue why they are obsessing over how copying works; aren't there more important things to focus on?
Being in control of object lifetimes makes a huge difference for performance-critical code. Swift’s approach to performance is “give everything an almost Python-like level of dynamism, then aggressively inline and optimize everything that’s statically known”. But this can be problematic when you know more about your object lifetimes than the compiler does.
I switched to Rust for most of my personal projects because I was working on something performance-sensitive and got really, really tired of finagling my code to try to convince the optimizer to hoist some retain/release call out of a tight loop.
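A toy sketch of the kind of thing I mean (hypothetical code; whether the optimizer actually emits the per-iteration retain/release depends on the Swift version and the surrounding context):

```swift
final class Accumulator {
    var total = 0
}

// If the compiler can't prove `acc` stays alive across the loop,
// it may emit a retain/release pair on every iteration:
func sum(_ values: [Int], into acc: Accumulator) {
    for v in values {
        acc.total += v
    }
}

// The usual dodge: pull the work into a local value, write back once.
func sumHoisted(_ values: [Int], into acc: Accumulator) {
    var total = acc.total
    for v in values {
        total += v
    }
    acc.total = total
}
```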