Apple is rewriting Foundation in Swift (github.com/apple)
261 points by hnand on May 22, 2023 | 250 comments



The project to rewrite Foundation in Swift is many years old, and had been stalled.

New in 2023 is that the goal is NOT to match all of Foundation, but to build some core libraries that make Swift usable on many platforms. E.g., they broke out the very large internationalization/i18n tables required for Unicode support into a separate module.

It's very unlikely Apple would move off their own Foundation, which is well-understood and well-tested. This strategy permits them to incrementally sidle over, but continue with both for some time. Aside from servers, it's likely to look promising for multiple tiny devices - trusted computing, watches, headsets, airpods, etc.


This is not the same thing as swift-corelibs-foundation, which is the old project that stalled out. The officially announced plan here is for it to fully replace the old implementation of Foundation.


Shame really. Apple's platform has so many good ideas I wanted Swift to bring to the world as a cross-platform open-source effort.

:(


Wouldn't that be this new Swift Foundations project?


This is a new effort that is committed to shipping on Apple platforms. It will also be the first time 3rd parties can get code changes into Foundation.


What, sending patches in feedback assistant and then getting an engineer to merge them in wasn’t it?


Long overdue -- there's a very noticeable difference in runtime errors with Swift libraries vs the rest. This is especially true of AVFoundation, which is an absolute mess of hidden and undocumented state. Swift optionals and enums will go a long way to fixing that.
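To make the point concrete, here is a minimal sketch (the `SessionState` enum and its cases are invented for illustration, not from any Apple API) of how Swift enums with associated values can replace the kind of hidden, implicitly-nullable state the comment describes:

```swift
// Hypothetical sketch: modeling a session's state as one enum with
// associated values, instead of a cluster of undocumented optional fields.
enum SessionState {
    case idle
    case recording(startedAt: Double)
    case failed(reason: String)
}

func describe(_ state: SessionState) -> String {
    // The switch must be exhaustive, so no state can be silently ignored.
    switch state {
    case .idle:
        return "idle"
    case .recording(let startedAt):
        return "recording since \(startedAt)"
    case .failed(let reason):
        return "failed: \(reason)"
    }
}
```

Because the compiler forces every case to be handled, "the object was secretly in state X" bugs surface at compile time rather than at runtime.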


One of the interesting side effects of this is that those new libraries are going to have significantly worse introspection and hotpatching opportunities, so they better code them correctly, or things are going to really suck.


Would you mind elaborating on why this is the case? I am not in the Swift/Obj-C ecosystem, so I'm just clueless.


Apple, like all software companies, ships bugs. People who write software for their platforms also ship bugs, but sometimes they have to deal with the bugs that are in the OS and system frameworks. Sometimes these get fixed in future updates, but sometimes they don't, and in any case until then something needs to get done for software to work correctly, not crash, etc.

Apple, like most companies, does not provide official mechanisms to patch bugs out of their software. They also keep most of their system frameworks closed-source. However, it just so happens that they write most of their software in programming languages that are highly dynamic and easy to reverse engineer, allowing third parties to dig into implementation details when they need to, understand how things work, and most importantly, change things as needed. It's risky and dangerous work, but every major app you've ever used on an Apple device ships with several fixes like these at any given time.

The Apple of today has very different goals for their frameworks. Shipping symbols that help people understand how the system frameworks function is "needless bloat" that "exposes implementation details". Indirection and patching functionality are seen as overhead that can be removed for improved performance. App review does not like it when people ship self-modifying code, and as a result patching system frameworks is also not permitted due to codesigning. Swift in particular removes a lot of dynamic dispatch functionality, gets stripped far more aggressively, and compiles to code that is far less readable than what we had before.

What this really means is that we are moving closer to a system where what Apple ships is what you get. Being able to make tradeoffs in this area is necessary, but the ones Apple have chosen mean that they have to be very confident that they don't ship bugs. If they aren't capable of doing this, well, it's not a good tradeoff.


Obj-c is basically a dynamic language on top of C, whereas in Swift most things are statically determined at compile time.


This will not directly affect AVFoundation, or any other framework which is not Foundation itself.


Not as part of the work from this announcement, no, but it is surely an indication that Apple is moving in a direction where those other libraries will eventually be re-written in Swift too.


(Oops, I commented before looking at the link and thought it was pointing at something like this - https://www.swift.org/blog/future-of-foundation/ - not the git repository! But you know what I mean)


Swift has been a hugely successful language.

Mostly at slowing down iOS runtime performance per clock cycle, and increasing compilation times.

It's also been hugely successful at punishing companies for using it, since it didn't really work for a few versions.

While there have been fewer crashes (sort of) due to the switch, as someone who worked for years in Objective C and years in Swift, I honestly never felt that Swift significantly improved much except allocation rituals and properties.

I do however enjoy Kotlin for the backend because it has some nicer semantics for object construction.

While I do admire the work that Apple has put into Swift, I really felt that Apple was trying to "impress everyone", with a whole lot of bikeshedding and sick amounts of code generation to convince us it was the right thing to do.

Frankly, apps run over 10 times faster in Obj-C, and Xcode with Swift is probably my least favorite and sort of non-ergonomic experience, close to DOS batch file dev in Windows Notepad.


Give karismatic a prize.

I wonder if the current state of affairs is what Chris L had in mind?


I don't know, but Chris L was always a C++ guy, he never really understood Objective-C in my opinion, which is why Swift ended up the way it did. I doubt he ever used it very much, or had a deep understanding of it.


I’m pretty sure Chris Lattner has a good grasp of Obj-C, after all they had to design Swift in a way it could easily interface with Obj-C. While Swift went through some versions to become stable, they were always transparent on the roadmap and had a very open attitude towards community proposals. Looking back I think it was a very well executed transition and would consider it a success in every way.


I just... completely don't understand the appeal of Swift beyond it being Apple's in-house language. I was excited for Foundation to become open-source. Using a language you'll really only find in one environment is a negative. I'm going to go feel ashamed for not getting excited about a language now. I should be celebrating "more languages", but Swift is just Apple. No one outside Apple chooses Swift unless they want to build something in Apple's ecosystem.


While it’s true that the language is really only useful on Apple platforms, it is still quite an amazing language. I have professionally used about half a dozen languages over the last many years, written hundreds of thousands of lines in each of them, and Swift is a true breath of fresh air. It checks all the boxes for me: very concise and elegant to write, yet very strictly statically typed. A nice upgrade to OO programming with use of protocols, with strong encouragement to use functional programming here and there as well. Runs reasonably fast, on par with Go or Java. Overall just really nice ergonomics. Named parameters and a wide variety of other seemingly minor syntactical characteristics act as a meaningful upgrade to make the overall process of writing code much more enjoyable for me.

The fact that Apple provides all these incredible platform-specific frameworks and libraries for graphics, audio, games, GPU kernel programming, and more is just the icing.


> upgrade to OO programming with use of protocols

How exactly are protocols a Swift "upgrade" to OO programming? They were in Objective-C since the mid 90s, adopted by Java as interfaces, copied by C# etc.

Also, protocols in Swift have a huge performance downside, because they decided to have them work across structs and classes: if you use a protocol in a function argument, the compiler doesn't even know the size of the argument at compile time, so copying the arguments has to be dynamically dispatched unless the compiler can figure it out in an extra optimisation pass...that's not possible when doing separate compilation.
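For readers unfamiliar with the distinction, here is a minimal sketch (the `Shape` protocol and types are invented for illustration) of the two ways a protocol can appear in a function signature, which is what the dispatch/size point above is about:

```swift
protocol Shape {
    func area() -> Double
}

struct Circle: Shape {
    var r: Double
    func area() -> Double { .pi * r * r }
}

struct Square: Shape {
    var side: Double
    func area() -> Double { side * side }
}

// Existential parameter: the concrete type (and its size) is erased, so
// values are boxed and calls go through a witness table at runtime.
func totalArea(_ shapes: [any Shape]) -> Double {
    shapes.reduce(0) { $0 + $1.area() }
}

// Generic parameter: the compiler knows the concrete type at each call
// site and can specialize, avoiding the boxing -- but as noted above,
// that optimization is limited under separate compilation.
func areaOf<S: Shape>(_ shape: S) -> Double {
    shape.area()
}
```

The `any Shape` spelling (Swift 5.6+) makes the existential, dynamically-dispatched case explicit in source.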

> Runs reasonably fast, on par with Go or Java.

A downgrade...particularly considering the epic compile times.

> named parameters

Also there since the 80s, in a less weird way.

https://blog.metaobject.com/2020/06/the-curious-case-of-swif...

> ...professionally used about half a dozen languages ...

What languages were those, if I may ask?


Strange comment. I never said that Swift invented these language features; not sure what the relevance is that many of its features have existed for a long time and some other languages also offer them. Swift ties together a lot of features that in combination make it quite enjoyable to use. It's fine if you prefer other languages, there is certainly plenty of choice. Speaking of protocols, they offer some significant differences to protocols in ObjC which are very effective for writing saner and less bug-prone code, but I don't want to expound here on all aspects of the language. (there are many other simple features that make a big difference to the coding flow; computed properties, language-level optionals and the syntax surrounding their myriad uses, the list is quite long of good features to speed up safe, productive workflows)

Never had a problem with long compile times -- once you have your initial build, 90% of the time or more the incremental build process is very fast.

The languages I used for about 15 years were ObjC, C, C++, Clojure (and other langs before that). I find Swift to be a thoughtful blend of the key features of these four languages -- each of the things I like most about those languages are in Swift, but it's the syntax in particular that I like most.


> I never said that Swift invented these language features;

Hmm...not sure how else to interpret "upgrade to OO programming with use of protocols" other than that there was an upgrade to OO, and that upgrade was with the use of protocols. Now it looks like you meant an upgrade of the way protocols are used in OO, but you won't say how Swift's protocols actually constitute an upgrade.

OK. ¯\_(ツ)_/¯


Protocols replace many uses of OO from objc, making code much easier to understand. In addition, swift protocols are much more powerful in almost every way.


> How exactly are protocols a Swift "upgrade" to OO programming? They were in Objective-C since the mid 90s, adopted by Java as interfaces, copied by C# etc.

Typically you would write more protocol-oriented code - rather than using inheritance (which is mostly there for Objective-C compatibility) you define protocols and implement them for types.

This is a lot closer to traits in Rust than interfaces in Java. Among other things, a developer can define how a third party type implements a protocol they control, without subclassing or wrapping, as long as it does not require additional data.
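A minimal sketch of that last point (the `Summarizable` protocol is invented for illustration): conforming a standard-library type you don't own to a protocol you do, with no subclassing or wrapper:

```swift
protocol Summarizable {
    var summary: String { get }
}

// Retroactive conformance: Int and String come from the standard library,
// but we can make them conform to our own protocol via extensions,
// similar to implementing a trait for a foreign type in Rust.
extension Int: Summarizable {
    var summary: String { "the integer \(self)" }
}

extension String: Summarizable {
    var summary: String { "a string of length \(count)" }
}
```

This works because the conformance requires no additional stored data, exactly the caveat noted above.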


So just like lots of Objective-C code, including the old NeXTSTEP Kits?


Swift's protocols are an evolution of protocols in ObjC. They offer additional features to make them safer and easier to write.


I know both languages, did my learnings of Objective-C while porting code from NeXTSTEP into Windows 95, what additional features?


I won’t teach Swift here, but Google is your friend. You can start by studying all the languages differences with regards to extending existing protocols, and how optional protocol conformance and methods are handled. Swift in many ways (not just with protocols) is like a fix for many headaches that objc provided in these areas. It’s partly why Apple encourages “protocol oriented” programming in swift over “object oriented” because of how rich the feature set is compared to objc.


Which are basically another way to mix protocols and categories from Objective-C.

No need for Google, as mentioned, I know both languages.

As for protocols not being encouraged in Objective-C, I might own the wrong NeXTSTEP manuals.


It does indeed sound like you have some very old documentation.

(as a sibling comment pointed out, swift protocols are typically used for things that in objc you would use inheritance)


Old enough to have enough Objective-C protocols and categories in action.

The kind of documentation that inspired Java authors.

https://cs.gmu.edu/~sean/stuff/java-objc.html

https://en.wikipedia.org/wiki/Portable_Distributed_Objects

https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere


Quite a lot has changed in objc since the nextstep days! And a lot more changed as Apple learned from that and built swift. My guess is you don’t know these languages as well as you think if you’re not clear about these differences.


>Runs reasonably fast, on par with Go or Java.

Having to deal with [weak self] and reference cycles, very limited closures, having to deal with Xcode, no integration with any other IDE because Apple Apples, debatable generics, SPM, debatable cross-platform abilities, fucking Tasks and Actors, SwiftUI being locked to versions of Swift, an extremely limited community, and few high-quality open source libraries for something that only performs as well as Java is quite the hard sell.
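For the unfamiliar, the "[weak self]" dance referred to above looks like this (a minimal sketch; the `Downloader` class is invented for illustration):

```swift
final class Downloader {
    var onFinish: (() -> Void)?
    private(set) var cleaned = false

    func start() {
        // Capturing self strongly here would create a retain cycle:
        // self owns onFinish, and onFinish would own self.
        onFinish = { [weak self] in
            guard let self = self else { return }
            self.cleanup()
        }
    }

    func cleanup() { cleaned = true }
}
```

Forgetting the capture list compiles without error; the leak only shows up at runtime, which is why this is a recurring complaint.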

For that price, you could also get Kotlin which fixes most of Java's problems and provides access to all the JVM as well as Kotlin/Native, with top tier DSL abilities and a really well thought out stdlib, coroutines, reified funs and much more.


This comment is of very low quality. I agree with pretty much _nothing_ with it (and tbh most of it feels like trolling), but I’ll just comment on the thing where it’s obviously plainly wrong: no other IDE.

Swift has LSP integration, and has an official extension for VSCode, which works very well. They even have a blog post about it https://www.swift.org/blog/vscode-extension/


It's Microsoft's Powershell, but for Apple's OS.


IBM briefly did a "Swift on the server" effort; by the beginning of 2020 IBM was no longer interested, but the page is still up. https://developer.ibm.com/languages/swift/

2016: https://9to5mac.com/2016/02/22/ibm-swift-cloud-kitura/

2016: https://www.infoq.com/presentations/swift-server/

2020: https://www.infoq.com/news/2020/01/ibm-stop-work-swift-serve...

You can run it in docker: https://hub.docker.com/_/swift


This was so predictable. I remember those articles, and thought "Gee, IBM doesn't know what they're doing, they're just jumping on random bandwagons"


Still waiting on Apple Silicon DB2 driver. Any day now, I know it!


> but the page is still up.

for now

Just saw an article about IBM deleting most of their website because it's a mess. It was some LinkedIn developer's post.


Link to the recent post about house cleaning to remove 80% of IBM's website (though on Twitter not LinkedIn).

https://twitter.com/bryanfcasey/status/1659941975519375360


My experience was the opposite, I really like Swift but its usefulness is held back by the lack of a wider community. I'm not sure if that's due to a lack of investment on Apple's part, or maybe that it lives in an unhappy compromise between C++ and Python where there's always a more mature alternative that's good enough, but the language itself is quite nice to write.


Same, I find swift so much more ergonomic and beautiful than rust, but the development story outside (and inside) of xcode is just ugly


AppCode being sunsetted by Jetbrains made me a little sad. Not coding in the Apple ecosystem anymore. Out of IntelliJ, Visual Studio and XCode, it made the worst impression on me


Swift works well in VS Code for those who prefer it. There's an official extension which works very well https://www.swift.org/blog/vscode-extension/

(Also, it’s Xcode)


Scala is likely a big reason as well.

It is the most similar in terms of language features and style but it has access to Java's unparalleled library and tooling ecosystem.


If they just had a proper cross-platform standard library and package manager it might be more successful; it is a pretty cool language. But the North Korea-esque closed Apple ecosystem makes that impossible, and all of that is intentional.


The package manager has been open source, cross platform and available since pretty much from the start.

The standard library is becoming cross-platform as we discuss it here.


> The standard library is becoming cross-platform as we discuss it here.

I'd argue that it's too late already, the ship has sailed, Swift is heading the same way as C#.


I'm having a hard time seeing Swift becoming a well-respected and easily considered language for your next project. The sheer lack of anything outside of what Apple has made already makes it a non-starter. In comparison, .NET is absolutely amazing.


Is there an alternative with similar features and ergonomics? I am def. not aware of one.


For which purpose? It depends what you want to do with it.


Well, language features don't really depend on the purpose; the 3rd-party libs do. Let's say the purpose is web services.


I doubt the web story in swift is that strong anyways, I'd pick python, ruby or php.


Why is it not better for Apple to make Swift truly cross-platform?

Is it just that the maintenance cost is not worth it? Or would it threaten its ecosystem in a way that I don't understand?

It would seem that making Swift cross-platform would make Apple's ecosystem more accessible to developers. The barrier to entry would be lower if developers only needed to learn the APIs and not a whole new language and its tooling too. This would help the ecosystem stay healthy for a longer time.


The Swift compiler and standard library are pretty tightly linked. The Swift type system is getting pretty complicated already, handling a whole range of lifetimes, async, the combination of protocols, generics, existentials, etc. with some magic handling for arrays and maps in the library. So the cost of cross-platform effort would be significant.

The greatest barrier to entry is Xcode. It's archaic, impossible to extend, and has a very wide surface that would overwhelm any development team.

(Java works fine on macOS, but developers want to deploy to iOS.)


.NET MAUI[1] and AvaloniaUI[2] run on iOS pretty well, and one can use Rider/VS Code/other editors to develop apps.

[1]: https://learn.microsoft.com/en-us/dotnet/maui/ios/cli

[2]: https://docs.avaloniaui.net/tutorials/developing-for-mobile/...


Hope this keeps getting investment and support by Microsoft. Choice and competition are very welcome, even though i code neither iPhone Apps nor C# ATM.


Kotlin/Native deploys and works on iOS, but due to Apple holding back on opening up Swift, it has to do everything through ObjC interop. But make no mistake, to Apple, that's a feature: if other languages have a hard time integrating in their ecosystem, they'll go away.


Apple’s problem has never been the “barrier to entry”. They know developers will make apps for their platform no matter what. If Swift becomes more universal it makes it easier for Apple developers to also develop for Android (etc) which Apple would see as a bad thing.


Wouldn't developers also develop for Android no matter what?


There are still some iOS (and especially macOS) specific apps because the cost of porting a native app is not insignificant.


Some apps are exclusive to iOS and some start out there, because that is where the money is.


Not as long as iOS users spend more money on average.


That has only really been the case for iOS and even then it’s not necessarily a given.


I thought Apple just wants to be different, and making Swift not cross-platform can make porting software off the Apple ecosystem harder...


Totally valid point that Swift is only useful for developing for Apple products, but this same criticism also applies more or less to Kotlin on Android. If you are dealing with iOS it is a joy compared to Objective-C. I would also add that it is a really nice language in general: modern, expressive, and generally ergonomic.


Kotlin wasn't developed for Android. And can be run anywhere.

It was JetBrains pet project to fix archaic stuff in Java without making a Scala. It was designed from the get go for a wide base (as wide as java).

I'm not sure how it became Android's main target, but I doubt it was the original purpose.


Further to that point, Kotlin mainly targets the JVM (which can run pretty much everywhere), but also can be compiled to JavaScript, LLVM, and a bunch of other targets.

Honestly the only times I've evaluated Kotlin for personal projects was bc of that flexibility.


Recently Kotlin can even be compiled to native and WASM, but Gradle makes it almost less usable than Swift.


With various levels of implementation quality, Kotlin/Native had to be rebooted, on JS it hardly offers anything better than TypeScript, on the JVM it is mostly used by Android shops anyway.


It was adopted on Android quite fast after Jetbrains made the integration in Android Studio almost flawless, partly because back then the Android runtime was stuck on the Java 6 syntax plus some syntax sugar that the IDE provided to mimic lambda functions.

It helped a lot that I think Jetbrains took advantage of all they could to make development in Kotlin very smooth, I remember back then people joked how Kotlin worked so flawlessly on Android Studio almost like it was the main language while on Xcode Swift was slow, the highlight would stop working constantly, refactoring was not supported, often indexing the project would hang forever etc.


> Kotlin wasn't developed for Android. And can be run anywhere.

I agree and would go so far as to say that Dart is a better comparison to Swift than Kotlin is.


I think everyone jumped from Java to Kotlin due to Oracle suing Google over Java in Android. It also provided a huge opportunity to be more deliberate since Android was maturing.


Oracle sued Google over the Android's implementation of Java's API. Kotlin still uses those APIs as it is compiled to Java bytecode on Android.


That is an urban myth.

If that was the case, they would have adopted Dart instead of keeping a high dependency on the Java ecosystem for Android.

Android Studio, Gradle, Kotlin compiler, the libraries that Android depends on from Maven Central, all Java.


Because Google and JetBrains are in bed together in what concerns Android development tooling.

Try to use Kotlin without the Android ecosystem, it is just syntax sugar for the JVM.


Java is just syntax sugar for the JVM too, it's just that Kotlin is a better one.


> this same criticism also applies more or less to Kotlin on Android.

I think Kotlin has more use outside its home platform compared to Swift.

But a language has to start somewhere, so I wouldn't hold it against Swift. It's a fairly well done language regardless, and there's nothing Apple-specific about it tbh (the Apple bits are all just libraries).


I thought Kotlin was pretty widely used as a Java alternative on the backend


An example is smart contracts on the Corda blockchain, which is programmed in Kotlin. It may be me, but I have seen more Kotlin outside of Android than on it.


Kotlin seems kinds of pointless with the features in newer Java versions


No amount of lipstick on the Java pig will make it as ergonomic to use by default as Kotlin.


Harsh but have to agree.


It is enough that it is the Java Virtual Machine, not Kotlin Virtual Machine.

Hence why Kotlin only really matters on Android.


Just looking at Java’s upcoming string interpolation makes me so happy I moved to Kotlin years ago.

https://openjdk.org/jeps/430


Would Java had advanced without Kotlin on its doorstep?


Lots of languages are converging on features and idioms. Scala and Haskell have had an influence on New Java too.


Yes, because Kotlin hardly matters in regards to the JVM.


At least I got to integrate Spring Boot well with Kotlin. Same goes for writing Minecraft mods.


> just... completely don't understand the appeal of Swift beyond it being Apple's in-house language.

It's like Rust in that it offers memory safety by default without a big performance hit.


I think many also find its syntax more approachable compared to that of Rust. It's got a lot of bells and whistles, but newcomers don't have to use them right away and can pick up more advanced bits as they become comfortable with doing so.


> I think many also find its syntax more approachable compared to that of Rust.

Chris Lattner (of LLVM fame) did design the language with the goal of hiding complexity until it is needed.

> The concept of progressive disclosure in Swift, where you can start with something very simple and then learn complexity as you go, was totally driven by making it teachable. I personally spent a lot of time trying to make sure that the introduction to Swift could just be print("Hello World"). No semicolons, no \ns, none of that public static void main stuff. Making it simple and approachable was a strong goal, and not in a weird way where in a teaching environment there is this “Swift Prime” language that’s similar but different.

https://oleb.net/blog/2017/06/chris-lattner-wwdc-swift-panel...
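The progressive-disclosure idea from the quote can be seen in a couple of lines (the `evens` helper below is an invented example, not from any tutorial):

```swift
// Day one: a complete Swift program is a single statement, no boilerplate.
print("Hello World")

// Later: generics, where clauses, and closures appear only once you need them.
func evens<S: Sequence>(_ xs: S) -> [Int] where S.Element == Int {
    xs.filter { $0 % 2 == 0 }
}
```

The same language scales from the first line to the second without switching to a different "teaching dialect".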


I did the Swift tutorial a short while ago and it was a breeze. I found it more approachable than Rust (I do like Rust).


> without a big performance hit.

You might be surprised (I was). Most of the benchmarks I’ve seen place it more in the neighborhood of golang and v8, rather than the C, C++, rust neighborhood you might expect.

Another commenter in this thread highlighted that the ref-counting GC is what keeps it out of the C / Rust performance neighborhood.


Ref-counting GC is pretty slow. It’s great for avoiding stop-the-world pauses, which can be really important for UI stuff like what Swift is used for, but you won’t win any throughput awards.


So well engineered for its use case. Was Go optimized for throughput?


Swift doesn't have a GC. The automatic reference counting is a feature that just inserts retain/release statements at compile time, so there is no additional process that handles that. I would suspect that the performance hits originate from other things.
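A minimal sketch of that behavior (the `Tracker` class is invented for illustration): deallocation happens deterministically when the compiler-inserted release drops the count to zero, with no separate collector pass:

```swift
final class Tracker {
    static var liveCount = 0
    init() { Tracker.liveCount += 1 }
    deinit { Tracker.liveCount -= 1 }  // runs at the last release, not "later"
}

func scope() {
    let t = Tracker()   // compiler-inserted retain
    let alias = t       // another retain
    _ = alias
}                       // releases inserted at end of scope; deinit fires here

scope()
```

Whether compile-time-inserted reference counting counts as "a GC" is exactly what the replies below argue about; the mechanism itself is as shown.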


I don’t see a lot of utility in policing an overly narrow definition of what constitutes garbage collection.


ARC is a type of GC though.


I remember seeing a few benchmarks where v8 and go were competitive or sometimes even slightly more performant.

So I'm not even sure it's always true that GC has a perf cost.

Not saying it doesn't in some cases, just so we're clear, but the truth seems to be more nuanced.


From what I've seen the ref counting can cause a big performance hit. Maybe this has improved in the last couple years?


> From what I've seen the ref counting can cause a big performance hit.

Apple has been writing OS components in Swift for a while now. It certainly doesn't seem to be producing the performance issues we saw when Google attempted to write components for Fuchsia in Go or Microsoft's effort to create new features for Longhorn in .NET.


But it's mostly replacing Objective-C code which was already not particularly fast – as opposed to the C++ or C code used more often in performance- or memory-sensitive areas.

My experience with Swift is somewhat limited, because every time I've tried to use it, I've run into glaring performance issues and had to switch language. It might be reasonably performant compared to Go or .NET, but it's nothing like Rust.


What kind of stuff were you doing in Swift to notice performance issues? I've been developing macOS and iOS apps for a while now and it doesn't seem much slower than Objective-C.


Was probably due to FFI for Go I'd assume.

Do you have a reference somewhere I can read up?


It was discussed here when they decided their networking stack in Go would need to be rewritten for performance reasons, and banned the future use of Go for Fuchsia system components.

https://news.ycombinator.com/item?id=22409838


Apple has always been preferential to reference counting (see Objective C) and it seems like they may have spent a fair bit of effort optimizing Apple Silicon for it.


> it seems like they may have spent a fair bit of effort optimizing Apple Silicon for it

According to information released when the M1 came out: retaining and releasing an NSObject takes ~30 nanoseconds on current gen Intel, and ~6.5 nanoseconds on an M1


It's good to reduce the cache-hot best-case time of course, but isn't the more fundamental sin of RC in the extra read/write memory traffic, cache footprint, and cross-core cache-line ping-pong when incrementing object refcount fields?

(or, if going with BRC, correspondingly there shouldn't be an advantage for this custom CPU feature)


Reference counting on Objective-C was plan B, after the failure of implementing a safe tracing GC in a language with C's semantics.

So they went with plan B, having the compiler automate the retain/release messages used by the Cocoa framework.

Everywhere else in Objective-C, the memory is still manually managed, or via memory pools.


Reference counting is slow because it adds an increment/decrement operation for each lifetime in a scope.

To add insult to injury, you need it to be atomic if you want it to run on SMP. This means each time you create/release the lifetime of an object you incur memory barriers and create a lot of cache contention.

But in practice the overhead is often negligible, and most of the time you are dealing with an I/O-bound problem rather than an additional atomic increment. A modern processor can handle those in a few cycles, on the order of 10 nanoseconds.


Swift's refcounting is atomic (as is ObjC's). As long as you're not under contention, most benchmarks I've seen show negligible overhead for uncontended access (from the addition of atomicity; the refcount overhead itself is still there). But IME if you do have many threads walking the same data structure, you end up spending stupid amounts of time fighting the refcounting. This applies even if the data structure is immutable and has a guaranteed lifetime, as Swift's type system doesn't seem to allow that to be expressed, and as a result it seems to do a lot of ref churn we'd consider unnecessary.


> Swift's refcounting is atomic (as is objc's)

Most of the time it’s possible to avoid atomic instructions and still be thread-safe. https://dl.acm.org/doi/10.1145/3243176.3243195:

“BRC is based on the observation that most objects are only accessed by a single thread, which allows most RC operations to be performed non-atomically. BRC leverages this by biasing each object towards a specific thread, and keeping two counters for each object --- one updated by the owner thread and another updated by the other threads. This allows the owner thread to perform RC operations non-atomically, while the other threads update the second counter atomically. We implement BRC in the Swift programming language runtime, and evaluate it with client and server programs. We find that BRC makes each RC operation more than twice faster in the common case. As a result, BRC reduces the average execution time of client programs by 22.5%, and boosts the average throughput of server programs by 7.3%.”

I remember reading that this made it into Swift, but cannot find it, so I’m not sure anymore.

And of course, the Swift compiler tries to avoid unnecessary refcount updates.


On Apple hardware, uncontended refcounting (Swift or ObjC) has the same perf as non-atomic refcounting. The cost exists, but it isn't terrible; once there's contention between threads the perf drops through the floor. The real killer is that there are a bunch of places where the Swift evaluation model means they're forced to ref churn, which comes up in 100% typical workloads like the million triangle objects in my Swift raytracer, all being hit by numerous threads :D


IME Swift’s refcounting is either incredibly inconsequential or a dealbreaker, with very little in between. They’ve done a very good job of optimizing it to the point where it’s barely measurable even in perf sensitive code… until you hit the scenarios where it completely murders performance and there’s nothing you can do about it.

Hopefully the upcoming ownership functionality will help in those cases.


While generally true that you’re not gonna find swift anywhere else, this company[0] is making their browser cross platform and using swift as the language. Apparently there’s a proof of concept runtime for windows that they’re actively working on.

[0] https://arc.net/


You got a point. On the other side, arguably, one could say the same about C#, JavaScript or Python.

It took real effort from Microsoft into the Open Source space to make it more widely popular. Far away from Java or JavaScript, but I see many similarities between the development and evolution of Swift and C#.

Almost everything starts as a domain specific language. Same goes for JavaScript. First browser only, then through Node, suddenly JavaScript everywhere.

Python gained massive traction through Jupyter. Before that, I found Python a clumsy alternative to JavaScript. Since then I have changed my mind. ;)


Python came preinstalled on pretty much every Linux box before Jupyter IIRC.


There was a lot of excitement when it first came out.

I tried Swift for Linux, and soon realized it was a meh experience (NS this, NS that... NeXTSTEP is still there) and switched back to Rust.

It could have been a strong Rust alternative.


That is partly what this article is about - a swift-native set of foundation modules.


Yeah but I am too invested in Rust right now


As someone who mostly writes Go, I enjoy Swift the language. I am guessing that your concerns are centered on Swift the ecosystem? (E.g., it's only used for Apple things, so there are probably not libraries for Thing X.) If so that makes total sense to me. If not, I'd be interested to hear more.


> No one outside Apple chooses Swift unless they want to build something in Apple's ecosystem.

“No one”

Yet The Browser Company (The one that is hyping the Arc Browser) is writing their browser in Swift to support Windows. [0] which that is their main product.

The Browser Company is not “No one”.

[0] https://m.youtube.com/watch?v=Xa_fNuaSE_I

EDIT: So this video doesn't show someone choosing Swift outside of Apple and using on a different platform (Windows) and doesn't disprove the claim of "No one outside Apple chooses Swift"?

Surely you can do better than some of the very low effort replies below.


The Browser Company is absolutely no one. They're barely more influential than me creating a repo on github, and that kind of impact is pretty damn low as is. They're a 0.000% browser share company that runs a tight marketing campaign of only inviting tech influencers to their browser. Mind you, said browser has good ideas, great ones even. But overall, they're absolutely no one.

iOS/macOS devs use Swift on other platforms because it's the only language they know, yeepidodadey. Ignore the fact that 99% of their project is cinterop with Chromium


The who


Yet another chrome wrapper, but with VC funding.


And?

Brave (VC funded) is also a Chromium wrapper and Edge (Microsoft owned) is one as well, and both of them somehow managed to beat Firefox in usage. So what is your point?

Chrome and its derivatives are the reason why browsers like Firefox are failing to keep up and continue to lose users.

Using anything other than Chrome for a modern web browser is a losing battle. (Brave already tried that with Firefox and quickly switched to Chrome)


> And?

I was answering a question asking what it was. Are you saying my answer was wrong?

> Brave (VC funded) is also a Chromium wrapper and Edge (Microsoft owned) is also one as well and both of the somehow managed to beat Firefox in usage.

Well that's just false.

Firefox has 7.7% marketshare.

Brave comes in at less than 1 tenth of a percent.

Edge, does better after all it is the _default_ browser on the most popular desktop OS by a massive margin. It gets <6% of the market.

So Firefox has greater market share than both of those combined.

> Chrome and its derivatives is the reason why browsers like Firefox is failing to keep up and continues to lose users.

You mean Chrome. None of the "derivatives" other than Edge (due to default browser syndrome) and Opera (which gets 2%) have any marketshare, and neither does anything to prevent Google from doing whatever it wants.

> Using anything other than Chrome for a modern web browser is a losing battle.

The reason is very very simple: Chrome is the modern IE, and making content that only works in Chrome (or wrappers) is considered acceptable by the same mediocre developers that made IE only sites a decade ago.

On the other hand, I get where you are coming from: people also said the same thing about using anything other than IE.


Everyone knew IE was going away. Chrome is not, from what we can see, and is not going away any time soon. If anything, Chrome/Chromium-based browsers are eating other browsers' lunch, just look at the numbers. I absolutely hate Chrome doing their own non-standard things, but you live in a special bubble.


> Everyone knew IE was going away.

No. That's an exciting fictional world in which you live, and was not the case. Even in the last years preceding Google's chrome release and massive marketing push, when Firefox and Safari were taking "significant" marketshare, it was assumed IE would be forever, and IE-only sites were still common (and the norm in "enterprise" software).

There is literally no difference between a developer that says everyone uses/should use chrome, and a developer a decade ago saying the same thing about IE.


You know who. [0]

[0] https://arc.net


The band?

Sorry for the joke, know it is frowned upon, did it anyway.


Even if it was a 100% Apple language, that would still be quite relevant. Apple's devices are pretty pervasive, especially among the well off.


You're saying two things as if they're a contradiction, but they aren't

It's valuable to Apple to have a language perfectly tuned to their stack, as the official entrypoint to all their APIs. If you need to use those APIs, you're excited about Swift. If you don't, you aren't


The same appeal as .NET on the Windows ecosystem, Objective-C on NeXT/OS X, Java/Kotlin on Android (plenty of stuff that is Android-only beyond the basic standard library, C++ OS-specific SDKs, ...).

Many people are more than happy to do all their career on a specific platform.


If they had waited a little bit Rust would have been a great option. That's a shame.


Unlikely, rust is rather inconvenient for gui applications.

And not at all compatible or easily bridgeable with obj-c, which was not optional.


Not really. The two are somewhat sister languages, but they have been syntax-optimized toward completely different focuses.

Swift for example defaults to copyable value types and reference types that are refcounted because that is what is most often needed for evented application code, while Rust defaults to non-copyable objects (with wrappers for things like reference counting) because of its systems development focus.
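A two-line illustration of those defaults (hypothetical types):

```swift
struct Point { var x: Int }    // value type: assignment copies
final class Box { var x = 0 }  // reference type: assignment shares, refcounted

var p1 = Point(x: 1)
var p2 = p1      // independent copy
p2.x = 99        // p1.x is still 1

let b1 = Box()
let b2 = b1      // second reference to the same object (refcount now 2)
b2.x = 99        // b1.x is now 99: storage is shared
```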

Swift also had a hard requirement of a decade of co-existence with Objective-C. A significant number of Swift types toll-free bridge with objc (and corefoundation) alternatives, and that had a considerable impact on the standard library. Their base library would be different from Rust's "std" due to needing different implementations of strings, vectors, dicts and so on.

The two do take quite a bit of inspiration from one another, and will gradually grow to support an ever-larger overlapping set of use cases, but the design constraints of the existing language will still mean that one or the other is better for a specific task.


No, that's not possible. Just one example: in Rust/C++, the developer is responsible for managing the memory, while in Java/Swift etc. the language does most of the work for you. This alone can fundamentally change many aspects of language design and mean a completely different experience for developers.


Ah I see, I didn't know Swift was garbage collected. That makes sense.

EDIT: wait, I'm reading that swift does not have a garbage collector. I guess I don't see how it differs from Rust then.

EDIT2: ah I see, everything's a smart pointer basically and memory's released at the end (RAII-style), whereas Rust has the borrow checker (but it's still RAII-style). I guess both implement RAII ideas, but Rust seems to do it at compile time and Swift at run time.
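A small sketch of what that scope-driven, deterministic release looks like (hypothetical class):

```swift
final class Resource {
    let name: String
    init(name: String) { self.name = name }
    deinit { print("\(name) released") }  // runs when refcount hits 0
}

do {
    let a = Resource(name: "file")  // refcount 1
    let b = a                       // refcount 2
    _ = b
}  // both references leave scope here: refcount reaches 0 and deinit
   // runs immediately and deterministically, with no collector pause
```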


Is there a way they could have known that?


Who outside of Apple is using Objective C?

That's like...Apple's whole deal. Proprietary everything top-to-bottom.


The appeal of Swift is that it is lightyears better than Objective-C. To ObjC developers that are used to being treated like trash by Apple (abysmal documentation, stone-age IDE and tooling, etc.) that's huge


Apple's documentation was actually vastly superior before Swift. In fact, I often refer to the documentation "archive" instead of the latest docs.

The decline in the docs is due to a number of factors, but I would say mainly it's (1) the relentless annual major OS update schedule, (2) the proliferation of OS (macOS, iOS, watchOS, tvOS, xrOS?), (3) the dual language stack, (4) Apple personnel turnover.


Apple seems to abuse the fact that people make a living translating the lack of documentation into online courses, books, and articles. The market for this is an ongoing enabler at this point.


What’s changed there?

I do quite like Swift overall -- it’s usually terser than Obj-C and I like the much stricter and more expressive type system. But you pay for that with much longer compile times, and the tooling feels much the same (Xcode is still Xcode).


Lol, Apple is still treating developers like trash, just a new language now.


There's not even anything technically interesting about the language. Reference counting? Really? We have so much more than that and you just went with reference counting. Ugh


> There's not even anything technically interesting about the language

It's a modern statically compiled language with complex generics that supports providing generic interfaces in libraries while retaining ABI compatibility. Which no other modern "system" language supports. That's fairly technically interesting to me.

> We have so much more than that and you just went with reference counting

Like what?

The options for memory safe shared ownership are refcounting or GC.

Assuming you're talking about rust, that's just C++: object lifetime is lexical, and if you need it to last longer you have to use Arc/Rc/shared_ptr. The purpose of the lifetime and borrow checkers is to ensure exclusive access, and reduce the copy/destruction churn that you get from the C++ model (a hypothetical C++ that only allows the use of unique_ptr instead of raw pointers - obviously C++'s type system and approach to memory safety is not a Good Thing).

But it's important to realize rust did not create a new solution to object lifetime management for shared objects.

It's also important to realize that rust was designed in an environment where there was no existing code to interoperate with, whereas Swift was designed to work with the existing Darwin APIs and objective-c which are all refcounted. So even if no refcounting was the goal you'd end up with a new language, designed for a specific environment, and the default behaviour would not be correct.

Now that the language is more established, and it's less critical for every part of the language to have objc interop they are working on pure ownership semantics, for the same reason as rust: it saves copies without requiring a refcount[1]

[1] https://github.com/apple/swift/blob/main/docs/OwnershipManif...


What would you use instead of ref counting? Swift is an embedded language for Apple’s own chips, they can optimize the hell out of refcounting at silicon level.


There really is only so much you can do - especially once you've got refcount churn on objects being used across multiple threads. Swift's refcounting can get truly annoying at times.


Modern, state-of-the-art, generational GC? Not being snarky, that’s literally the comparison.


There was never any way to implement a modern GC while keeping compatibility with Objective-C, which was the primary design goal of Swift.

Apple had a bad experience with GC in ObjC with libauto and learned their lesson that C and GC just don’t mix. You can never have a truly modern GC in C, because C cannot provide any of the guarantees that a GC needs to move objects around.

Using a modern GC entails sealing the language off in a bubble. Since Swift is mostly used to interface with system frameworks, you would end up paying a cost when interfacing with the system frameworks written in ObjC, making your modern GC useless most of the time.


Seems a lot of people feel the way you do. I had to make something using swift once, and could barely find a community to help me decipher the cryptic swift documentation centered around their coreml library. Apple is keeping Swift on life support, imo.


CoreML is a bit of an exceptional challenge beyond just Swift. Admittedly it could really use a lot more documentation from Apple, but it would be unfair to judge swift based on an uncommon and under documentated library (I suspect that it would get a lot more use if it were not so hard to set up).


Considering Swift 1.0 was released in September 2014 I'm not exactly bowled over by how long it's taken Apple, with all their resources, to rewrite Foundation.


Actually, Apple delayed stabilizing the ABI (which would be required for rewriting low-level libraries) in order to work on other features that the community (ie, non-Apple users of Swift) needed more. If Apple had prioritized the needs of their own developers, this would have been done years ago. So kudos to Apple.


Well, Foundation was a mature Obj-C library at that point, Swift’s future was unclear, and Swift needed to have better Obj-C interop anyway.

I think that Foundation being a Obj-C library effectively forced Apple into eating their own dog food (for the sorely needed Swift/Obj-C interop), so I don’t think rewriting Foundation in Swift would have been a good strategy for Apple.


I'd prefer this approach, rather than hastily rewriting a stable codebase in the coolest trendy language.

9 years is not that long for a programming language. PG's (in)famous eassy on Python Paradox [1] was written in 2004. By the time Python was 13 years old and it was still a "cool kid's language".

[1]: http://www.paulgraham.com/pypar.html


Rust began in the late 2000s and only in the last few years has started to reach a level of mainstream acceptance. Adoption for new programming languages takes significant time. Compiler development takes an awful lot of time. For Swift to be this mainstream after being introduced in 2014 is (IMHO) really quick.

As for Foundation, core frameworks almost never get rewritten and for good reason. It's almost never worth it. Best case? You get functional equivalence. Worst case? You introduce bugs into something that millions or even billions of devices depend on. Those bugfixes may take many years. Every now and again we see bugfixes that are 10-20+ years old in things like GCC or the Linux kernel.

Rewrites should never be undertaken lightly, particularly with less mature languages. Apple is (IMHO) being relatively aggressive with this.


It's how they've always done things. It was about 10 years after the last 68k Mac shipped before the entire classic OS was PowerPC native. Same with moving everything from Carbon to Cocoa.


(Mac OS 8/9 was never fully PowerPC native.)


Because some 3rd party software either wasn't PPC, or hadn't actually tested their PPC support so it couldn't be used, or in some cases interpreting the 68k code was actually faster due to cache sizes.


Well, one can find a lot of examples in the programming language world, e.g. Go's generics implementation, Java's value type implementation, etc., where things can take a really long time to get implemented. It takes a delicate confluence of factors like community interest, technical/financial resources and implementers' agreement on the desired approach to get a feature done.

And all the money in the world can't make it any faster.


Swift has its own standard library, so there wasn't much reason to do this. In particular, it would've made things slower to mix them until the implementation was very mature.

Also, this project's been going for a few years I think, since it's used for Swift on Linux.


> Also, this project's been going for a few years I think, since it's used for Swift on Linux.

That doesn't seem accurate:

"The Foundation package is an independent project in its early incubation stages."

"In the future, we will explore how to sunset the existing swift-corelibs-foundation and migrate to using the new version of Foundation created by this project."


swift-corelibs-foundation is what I meant, it contains a portable implementation of Foundation. Don't remember which parts were in Swift though.


The performance "optimizations" are all about eliminating the bridge between ObjC and Swift.

I'd love to see a comparison of how the functions work within their own realm, e.g.:

- Speed of a date function in pure ObjC, tested in ObjC realm with no Swift bridge

- Speed of the same date function in pure Swift, tested in Swift realm with no ObjC bridge.

Otherwise, introducing an extra bridge with the language, then getting rid of it isn't a performance improvement, it's getting back to the baseline.


> Otherwise, introducing an extra bridge with the language, then getting rid of it isn't a performance improvement, it's getting back to the baseline.

This is a strange comment, because Swift would have been Dead On Arrival in 2014 without that Objective-C bridging, an Apple-created language that doesn't work with Apple application frameworks.


Perhaps I articulated it poorly.

The bridge should obviously have existed (and should still exist for a very long foreseeable future, if not forever). I was referring to how benchmarking with and without the bridge and declaring Swift faster is a bit off, as that performance decrease was something Swift itself introduced in the first place.


Do you mean as some kind of Swift vs. Objective-C argument?

I don't think Apple is interested in having that argument in 2023. There's no point. Swift has won the "language war", politically within Apple and in developer mindshare outside Apple. (I say this as someone who continues to write Objective-C.)


Nope, Swift clearly won (which was created by Apple themselves to replace ObjC in most places if not all eventually).

I think the metrics in their current form don't really mean much if it's only measuring the performance "gain" by getting rid of the ObjC/Swift bridge.


I might be in the minority here (though, reading the comments, it seems others have the same opinion), but I find Swift to be a step back compared to Objective-C.

I really like the verbosity of Objective-C - this makes the code more readable and easier to understand, even (or especially) when reading a program the first time.

Being a superset of C (and C++ with Objective-C++) means that, when necessary, one can easily write a C (or C++) method (or class) but also include existing C/C++ libraries.

I also find some of the improvements Swift brings to actually make developers more reckless and dependent on the language/compiler (yes, I liked to manually manage memory).

And a bit of a personal frustration: I find a swift package/project that I want to use, download it and ...start fixing/updating the code, so that it compiles with the latest Swift version. I still have Objective-C code I wrote more than 10 years ago that compiles and runs with minimal or no change.


>I might be in the minority here

I think for sure you're in the minority here. Swift was really a breath of fresh air for my iOS dev work -- it's just so modern and easy to read. That said -- I'm still reluctant on SwiftUI though. Seems like one of those technologies that works great for small apps and tutorials but falls apart at scale but admittedly have little experience with it so perhaps I'd be wise to reserve judgement.


I’m with you on the SwiftUI skepticism. After 10+ years of native iOS development - can’t believe I’m saying this - but I thoroughly enjoy Flutter, it’s open source I can easily modify anything I want, the tooling is great, VS Code plugin is great, and hot reloading a live app is just amazing. It’s everything SwiftUI should be, let’s see what new stuff they announce at WWDC.


> yes, I liked to manually manage memory

I think you are in the minority there, even among Objective-C programmers.

In any case, Objective-C ARC is there for those who want it.


For clarity, this is not https://www.foundationdb.org/

Which Apple acquired years ago and then released as open source.


Ah, so Deno KV is using an open source database? I didn't realize that.


The Swift feature I miss in other languages is `enum` cases with parameters and `switch` blocks having to cover all cases, it's so convenient.
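For readers unfamiliar with the feature, a small example (types and cases invented for illustration):

```swift
enum LoadState {
    case idle
    case loading(progress: Double)
    case failed(code: Int, message: String)
}

func describe(_ state: LoadState) -> String {
    // The switch must cover every case; add a new case to the enum
    // later and every non-exhaustive switch becomes a compile error.
    switch state {
    case .idle:
        return "idle"
    case .loading(let progress):
        return "loading: \(Int(progress * 100))%"
    case .failed(let code, let message):
        return "failed (\(code)): \(message)"
    }
}
```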


Both of these are also shared with Rust FWIW.

I believe Swift takes the switch farther than match in Rust, because it tries to deal with API stability (e.g. a system library adding new possible cases)
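Concretely, that's Swift's "non-frozen" enums (SE-0192). A sketch, where `session.state` and its cases are hypothetical library API:

```swift
// For a non-frozen enum from an ABI-stable library, the compiler still
// warns if a *known* case is missed, but `@unknown default` keeps the
// code from trapping when a future OS version adds a new case.
switch session.state {
case .running:
    keepGoing()
case .suspended:
    waitForResume()
@unknown default:
    log("state added by a newer SDK")
}
```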


Proper sum types are so useful, it's a tragedy so many statically typed languages lack them.


same here. I don't understand how this feature hasn't landed in every other compiled languages (i'm looking at you golang)


This has been happening in public on GitHub since January, FWIW.


Previous discussion at https://news.ycombinator.com/item?id=34339153 (4 months ago, 371 comments)


Not a popular opinion.

Current Apple and Swift felt somewhat like Apple and Dylan in the 90s.

May be time could tell. A lot of people hate Objective-C, and was cheering for Swift in 2014. Now nearly a decade later, it seems time and resources could have been spent somewhere else.


Not really: Dylan failed to ship on NewtonOS, while Swift is not only shipping across all Apple platforms, it is replacing Objective-C.

Metal is the only greenfield framework that was still being built with Objective-C, and yet most just use the Swift bindings.

Maybe Swift isn't Objective-C without C, as originally announced, but something else isn't coming to replace it either.


What makes you say that? I don't think you can find many developers that would be willing to go back to Objective C.


Not sure why the only alternative to Swift is "going back", particularly when Swift is already going back in many ways.

How about going forward to an alternative that actually isn't a step back.


Swift is an amazing improvement over objective c and I don’t think there are any languages out there that are obviously superior for the use case.

The only people i ever hear complain are objc diehards and general apple haters.


Wrong on all counts.

I can't think of many use cases where Swift is actually better, though I do understand people believe it is. Why is...an interesting phenomenon.

In fact, it is difficult to conceive of a language less suited for the use cases Swift is touted for. See Swift playgrounds. See also: Mojo.

I think Objective-C seriously needs a replacement. Though it got a few fundamental ideas right, the details are almost all in dire need of improvement. Which is why it it is such a shame that the replacement is so unsuitable.

I really don't hate Apple. And my portfolio doesn't hate Apple either.


I’ve been using swift full time since 2014, before that i was using objective c full time.

I assure you that Swift is a huge improvement for iOS and Mac development, and the statistics show pretty clearly that this is also the consensus.

Some older developers might be wedded to objc because of old habit, but if you ask 100 developers that are new to apple platforms to choose between the languages, I’d be surprised if even 1 picked objective c.

It would of course be strange if a modern language, developed specifically for use on apple platforms and heavily supported on the framework side, would not be more suitable than an old fashioned language from the 80s.

Not sure what you mean by playgrounds, they are simply amazing and keep getting more powerful. And how does mojo even enter the discussion?


That will only happen by moving to another platform.


Will it still all call objc_msgSend and friends? Like can I still call into Foundation via C FFI or Objective-C?


Yes. This is intended to be a reimplementation of Foundation with the same API, including being usable from obj-c. They've added a few compiler features recently to cover the gaps where there were some kinds of obj-c types that couldn't be defined from Swift.
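Mechanically, a Swift class stays reachable through `objc_msgSend` by living in the Objective-C runtime. A toy example of the pattern (not actual Foundation source):

```swift
import Foundation

// Subclassing NSObject and marking members @objc makes this Swift code
// messageable from Objective-C and from C via the objc runtime.
@objc(MYGreeter)                       // Objective-C-visible class name
public final class Greeter: NSObject {
    @objc public func greeting(for name: String) -> String {
        "Hello, \(name)"
    }
}
```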


For a second I thought this was referring to foundationdb. Was baffled


[flagged]


What a weird usage of that word…


GP was probably looking for the word "astroturfing":

astroturfing:

> The disguising of an orchestrated campaign as a "grass-roots" event – i.e., a spontaneous upwelling of public opinion.

https://en.wiktionary.org/wiki/astroturfing

agitprop:

> Political propaganda disseminated through art, drama, literature, etc., especially communist propaganda; (specifically, communism, historical) such propaganda formerly disseminated by the Department for Agitation and Propaganda of the Central Committee of the Communist Party of the Soviet Union.

https://en.wiktionary.org/wiki/agitprop


It's obviously not astroturfing since it's a statement from Apple themselves. Astroturfing, if we were talking politics, would be if Apple secretly funded swift user groups across the world to make it appear that people liked the language.

Obviously they don't have to do that, since people love it.


Oh, it is astroturfing to be sure. But I was just having fun with the propaganda inherent in the sentence too.


Which part of the statement are you disputing? It's clearly a very successful language, with a strong community.


[flagged]


[flagged]


That's a moo point anyway as Apple had put a ban on engineers using ChatGPT.


Clarus?


That would be a moof point.


I mean, a realistic version of that comment is that you could use GPT* to make a set of unit tests, then iteratively write a new version in Swift that is bug-compatible with the old version using GPT*. Then profit? The question really is why is the Swift <-> ObjC bridge so bad you need this software?


>The question really is why is swift <-> objc bridge so bad you need this software?

HUH? I don't even understand the question. A bridge is something redundant in itself. It doesn't have to be "so bad" for you to want to get rid of it.

Why keep going through a bridge when you could (after this effort) call the lib directly?

And why maintain the same code in 2 different languages, an increasingly deprecated one, and your new one?


> The question really is why is swift <-> objc bridge so bad you need this software?

Do you want to keep using a UTF-16 backed String type while the world (including Swift) is on UTF-8? (Oh, and I am posing the question to Java too...)
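Since Swift 5, String's native storage is UTF-8, with the UTF-16 view produced on demand for NSString-era APIs. A quick illustration:

```swift
let s = "café"
let bytes = Array(s.utf8)    // [99, 97, 102, 195, 169]: native UTF-8 storage
let units = Array(s.utf16)   // transcoded on demand for NSString interop
let chars = s.count          // 4: grapheme clusters, independent of encoding
```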


I freakin knew foundation was amazing. Argh.


July 14 will tell. ;)


Not an Apple fan. From Obj-C to Swift: why not just C++? NIH plus I-want-to-control-everything?


Because C++ is not memory safe, lifetime safe, type safe, has notoriously brittle ABIs, ...?

Seriously if you're going to take the cost of rewriting massive amounts of code, why would you not rewrite it in a language that offers actual improvements to the language that's being rewritten?

Apple already has (and uses) objective-c++, and the majority of objc I've ever interacted with is objc++ once you're outside of headers (just look for .mm extensions).


If you look at C++ and then look at Swift I think you will understand why they didn't make C++ the main language.


Objective-C was not invented at Apple. Hard to see how NIH applies there.


What a dumpster fire Swift turned out to be. My first experience with it saw compile times go up more than 10x from the previous ObjC code base. Every time more of Apple's libraries were replaced because all the ObjC stuff was considered obsolete, the compile times got worse and worse.

It's not like I dislike the language either, but I think compile times matter, so languages should not pretend they don't. I can forgive Rust since it's giving you more than just type checking, but Swift does not offer the same things, yet it takes just as long or longer to compile.


They both compile slowly because they have similarly complicated type systems. What do you use in Rust that Swift doesn’t yet provide?


You can have a good type system and fast compile times. In every single language with a "complicated type system" it always comes down to type inference on polymorphic types causing these issues; this has been known for over 40 years, yet the same issue seems to be rediscovered again and again as if it were something new.

I sure hope the author learned the right lesson from the failure that is Swift, and that Mojo turns out to be better. I do not want a language with a complicated type system and long compile times. I want a "good enough" type system and fast compile times.


> it always comes down to type inference on polymorphic types which causes these issues

Unless I'm thinking of something different, doesn't OCaml have type inference on polymorphic types and fast(ish?) compile times?

I was under the impression that the breakdown of causes for Rust projects compiling slowly is very project-dependent as well. From what I can remember common culprits were monomorphization and/or LLVM, though I'm not sure if that has changed recently.


Fast (and principled) inference for not-too-complicated polymorphic types is an explicit design goal of the OCaml and Haskell languages, and this design goal constrains the type system quite a lot.

Typically, there are many type system features that break this property and for which OCaml and Haskell require type annotations (for instance, polymorphic functions that take polymorphic functions as arguments and use them in a polymorphic way).

As a consequence, typechecking for OCaml programs tends to take from 10% to 60% of the whole compilation time for typical programs. However, for OCaml programs that make heavy use of GADTs, typechecking can dominate the compilation time (probably due to the exhaustiveness check, but I have yet to empirically check that).

Infamously, Swift chose a typechecking algorithm whose complexity is exponential in the size of the expression (to support function overloading and literals), even for simple types.
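A hedged sketch of the kind of expression involved (short chains like these compile fine today; at greater length, mixed-type overloaded chains have historically triggered Swift's "expression was too complex" diagnostic):

```swift
// Each literal could be Int, Double, Float, ... and `+` is heavily
// overloaded, so the solver's search space multiplies with each term.
let v = 1 + 2 + 3.0 + 4 + 5 + 6

// Annotating types prunes the search dramatically:
let w: Double = 1 + 2 + 3 + 4 + 5 + 6
```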


Fascinating! Thanks for the insight!

Does the up to 60% figure include programs that make heavy use of GADTs, or are those even worse in terms of the proportion of compilation time spent type checking?

I had no idea Swift's type checking algorithm is exponential. How easy is it to run into those conditions?


For GADT-heavy programs, it depends a lot on how much work the exhaustiveness check for pattern matching is doing behind the scenes: with GADTs, one can encode a Turing machine in the type system. In that pathological case, checking that a pattern match is exhaustive might require checking whether the encoded Turing machine stops in n steps. Consequently, it is perfectly possible to write programs where the compiler spends 99% of its time analyzing pattern matching. But it is also possible to write GADT-heavy programs that don't require such expensive analyses.

I have no first-hand experience with Swift but the previous discussion at https://news.ycombinator.com/item?id=12108876 and the description of the problematic algorithm seems to indicate that numerical or mathematical code can run quite easily into the issue.

However, I must add that type inference algorithms being exponential in source code size is nothing new: OCaml and Haskell inference are also exponential in the size of the source code, but only because the size of types can be exponential in the size of the source code. Type inference in OCaml and Haskell is linear in the size of the types involved, and since in practice no one uses exponentially growing types, inference stays well behaved.
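A sketch of that effect (the comment is about OCaml and Haskell, but the same construction works in any language with parametric polymorphism, Swift included): each nested call doubles the size of the inferred type, so n lines of source can produce a type with 2^n leaves, while inference stays linear in the size of that type.

```swift
// Each call to `pair` doubles the size of the result type.
func pair<T>(_ x: T) -> (T, T) { (x, x) }

let p1 = pair(1)   // (Int, Int)
let p2 = pair(p1)  // ((Int, Int), (Int, Int))
let p3 = pair(p2)  // (((Int, Int), (Int, Int)), ((Int, Int), (Int, Int)))
// After n nested calls, the inferred type has 2^n leaves,
// even though the source grew by only n lines.
```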

The specific issue with Swift is really that, even for simple result types, inference may still have to do an exponential amount of guesswork.
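For illustration, a hypothetical expression of the shape that has been reported to trip up Swift's constraint solver: every literal can conform to several `ExpressibleBy…Literal` types and every `+` and `*` is overloaded, so the solver explores a combinatorial space of assignments even though the final type is just `Double`.

```swift
// Mixing integer and floating-point literals multiplies the number of
// overload/literal-type combinations the constraint solver must consider.
let d = 1 + 2 * 3 + 4.0 * 5 + 6 * 7.0 + 8 + 9
// On sufficiently long expressions of this shape, the compiler has been
// known to give up with: "the compiler is unable to type-check this
// expression in reasonable time".
```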


The list of foundational capabilities is so large that it's not practical to give a concise answer. One example is automatically closing resources on behalf of the caller, thanks to lifetimes. Another is the best compiler feedback of any mainstream language.


Catch data races at compile time?


I don’t know much about Rust, but Swift can prevent data races at compile time using actors.
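A minimal sketch of that (the `Counter` type here is made up for illustration): state isolated inside an actor can only be reached through `await`ed calls, so the compiler rejects plain concurrent mutation.

```swift
// Mutable state isolated inside an actor; the compiler serializes access.
actor Counter {
    private var value = 0

    func increment() -> Int {
        value += 1
        return value
    }
}

let counter = Counter()
Task {
    // Cross-actor access must be awaited; a synchronous mutation like
    // `counter.value += 1` from outside the actor is a compile-time error.
    let n = await counter.increment()
    print(n)
}
```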


It could catch them before that using ownership, at the cost of lower performance.

Though Swift concurrency is mostly about "concurrency" not "parallelism", which is the one that has data races.


If your concurrency strictly can't involve parallelism then sure, no data races, but also then your performance is miserable on modern hardware.

If your concurrency can involve parallelism, then you need to explicitly protect against data races; or you need to define what happens when they occur (Java does this; nobody else has); or you shrug and say too bad, now the program is nonsense, you lose (C, C++, Go, and many modern languages do this, since it is easier).


> If your concurrency strictly can't involve parallelism then sure, no data races, but also then your performance is miserable on modern hardware.

Often worth it when that hardware is battery-powered though. Using those extra cores isn't free.


Ownership hasn’t shipped yet.


It has an ABI-level ownership thing already. What's missing is ways to control it.


While low compile times are nice, I don't find them strictly necessary for day-to-day work because I'm not constantly doing full builds. Swift's incremental compiles are more than fast enough for my needs, especially on M1/Ryzen 5000 or better.


Still?

How big is Foundation anyway? Or are we talking about the movie they released?


I think you’re thinking of the Foundation series (not movie) on Apple’s streaming service?


It's sad Apple didn't jump on the Rust train; it would have given them a major boost in security and ease of maintenance.


Not sure if that's still the case, but Graydon Hoare was part of the Swift team. Swift is also slowly introducing support for lifetime analysis with regard to concurrency, but the path they chose is quite different: they started with an easy-to-use language and moved toward advanced concurrency-related safety features while trying to keep the language convenient. Rust went the other way.

This makes a lot of sense considering Swift's primary audience: app developers building end-user-facing software, as opposed to the core systems developers Rust targets.


Hot take from someone who is currently helping port GNUstep's implementation of the Foundation framework to the Sega Dreamcast:

Gross. The whole appeal of Foundation was having an incredibly convenient, beautiful, high-level object-oriented API on top of low-level, statically compiled, performant C...

The true path forward for computing is clearly an alliance among the holy trinity of god's C languages: C, C++, and Objective-C.

Yes, the future is in Objective-C++'s hands.


Foundation has not been a pure layer on top of CoreFoundation for over a decade. Some class implementations are unified on the ObjC version, not the pure C CoreFoundation version.

Objective-C got good enough that all processes on Apple platforms have the objc runtime in their address space whether they want it or not, and many pure C or CF-like frameworks are built on ObjC under the covers.


Objective-C was always "good enough", and in fact the rewrite from an Objective-C Foundation to the C-based CoreFoundation saw a huge regression:

https://groups.google.com/g/comp.sys.next.advocacy/c/uOQnC1x...

The objections to Objective-C at Apple were always political, not technical, with a very entrenched "no Objective-C, ever" faction. Took a quarter of a century, but it looks like they finally won.


Wow, thank you for sharing!

Dude, clicked your profile links, and wtf? Objective-S looks awesome! I had never even heard of it, but creating a more easy-to-use language on top of the Obj-C runtime is epic!

I actually wrote my own C-based meta type system for giving my core library and applications that use it a language-independent object-oriented C layer for interop. That's actually how I discovered the magic that is Objective-C and how powerful the C runtime actually is... that's why I'm sad to see it slowly disappear, and anything like this that keeps it alive and uses it is really cool to me. :)

I just wrote unit tests for making sure the Objective-C runtime is working and doesn't accidentally get broken in the indie SDK, KallistiOS, for the Sega Dreamcast: https://github.com/KallistiOS/KallistiOS/pull/202, haha.


I mean it’s there by virtue of the shared cache, but if we’re talking about what actually gets “loaded” I believe it’s still possible to write binaries that don’t bring in libobjc.


Somehow I doubt porting macOS to old game consoles was Apple's priority when making this decision.


I'm not porting macOS. I'm porting just the Foundation layer, and I'm not using Apple's implementation; I'm using GNUstep's.

But hey, maybe having a core, foundational framework written in the most performant language out there with the most hardware support should've been a priority. ;)


Objective-C++ is underrated.


It really is a beautiful language, and some of the most powerful aspects of Swift are due to its Objective-C roots.


Well, this is one more step toward a fully Swift-only platform.

Very soon it will not be possible to publish an app on iOS not built with the house language.

For those who don’t know, using C/C++ and OpenGL was easily done on iOS for many years, as LLVM/Clang was the official compiler.


Happily, Swift has interop with C, and they're working on actual C++ interop[1]; dropping support for those languages would mean a lot of wasted effort.

The question is very simple: Swift is meant to be a safe language, so how feasible is that if a large chunk of the core types and runtime are implemented in C, C++, and Objective-C/C++? Migrating to Swift is the obvious and sensible step, unless you propose re-implementing them in yet another language?

[1] https://github.com/apple/swift/blob/main/docs/CppInteroperab...
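A tiny illustration of the existing C interop: C standard library functions are visible to Swift directly (here pulled in via the `Foundation` umbrella import on Apple platforms), with Swift strings bridged to C strings automatically.

```swift
import Foundation  // brings in the C standard library on Apple platforms

// `strlen` is a plain C function; Swift bridges the String argument
// to a null-terminated C string at the call site.
let length = strlen("Hello, C!")
print(length)  // 9
```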


You're going to be relieved when you find out Swift has been using LLVM this entire time.


We’ll see.

I have a bad feeling about this.


Bad take.

Swift heavily relies on LLVM. As long as you have LLVM IR, your support isn’t going anywhere. Not to mention Apple continues to be an outsized contributor to LLVM and clang (largely because they started and open sourced the projects)


Apple funds LLVM, but it didn't start LLVM.

"The LLVM project started in 2000 at the University of Illinois at Urbana–Champaign, under the direction of Vikram Adve and Chris Lattner. LLVM was originally developed as a research infrastructure to investigate dynamic compilation techniques for static and dynamic programming languages. LLVM was released under the University of Illinois/NCSA Open Source License,[3] a permissive free software licence. In 2005, Apple Inc. hired Lattner and formed a team to work on the LLVM system for various uses within Apple's development systems."

https://en.wikipedia.org/wiki/LLVM


Apple hired Chris Lattner straight out of college. LLVM was the subject of his master's thesis.

It existed, but certainly wasn't yet ready for prime time. You can watch Chris give a Tech Talk at Google introducing Clang and LLVM and going over the features added in LLVM 2.0 while he was at Apple.

He joined Apple in 2005, and this talk was in 2007.

https://www.youtube.com/watch?v=VeRaLPupGks


Apple only started Clang; LLVM already existed, although for a very long time Apple was the main contributor. (I assume that until Clang became competitive with GCC, no other companies were interested in Clang or LLVM; and once Clang was competitive, LLVM could be a competitive backend as well.)



