Swift 5 Exclusivity Enforcement (swift.org)
115 points by return_0e on Feb 5, 2019 | 61 comments



I put Swift into the category of "almost memory managed" languages, and it's never fully clicked in my brain. Thinking about strong/weak/unowned references is, in my opinion, more difficult than deciding when to malloc/free something. Following the examples in the article just reinforced that maybe Swift's approach is a little too complex.


The examples are complex to follow because Swift's model works just fine for all the normal cases. It only becomes problematic in some edge cases that regular programmers rarely meet.

And with all the help that the compiler now provides (even before Swift 5), it's becoming really, really hard to shoot yourself in the foot.


I must be especially talented in this area, because I still run into memory issues fairly often. Just a couple of days ago, I had a weak reference which was turning nil (Swift weak references are zeroing), and I couldn't figure out why. Both objects were still live. Property observers don't report the nil change, apparently. I couldn't figure out how to get the debugger or Instruments to provide any help, either.

Eventually I just gave up and added an extra 'parent:' argument to an interface, for the one place it was actually needed. It's a bit more awkward than just keeping parent references in the tree nodes, but not too bad.

With multiple queues or threads, the situation is even worse. I wrap a lot of my GCD code in extra locks, which by my reading of the documentation shouldn't be necessary. Without them, it occasionally crashes with strange memory errors that are impossible to figure out.

I felt like I wasted a few hours the other day tracking down my issue and then coming up with an alternative solution, when in any other modern language I could just have used a normal reference and counted on the GC to clean up the cycles when I was done.

Backwards compatibility with Objective-C obviously has tremendous value to Apple, and ARC is smaller and faster than tracing GC, but I feel like I'm paying for it over and over. In most HLLs, once I get past the low-level parts, I'm using a language specifically designed for my task, and I never have to touch the low-level parts again. Swift feels more like fancy C, in that I don't think I'll ever be able to stop thinking about subtle memory management issues.


> I had a weak reference which was turning nil (Swift weak references are zeroing), and I couldn't figure out why

This is extremely unlikely. If zeroing weak references were broken, a lot of macOS/iOS would be broken. You probably have a bug: perhaps the memory just hasn't been overwritten yet but the retain count is actually zero, you overwrote the reference, or something similar.

Turn on Zombie Objects in Xcode then try it again. Objects are never deallocated but will turn into an instance of NSZombie when their refcount reaches zero. See if your supposedly "still alive" object is actually an NSZombie at that point.

> I wrap a lot of my GCD code in extra locks ... Without them, it occasionally crashes with strange memory errors that are impossible to figure out

You definitely have some race conditions or other concurrency bugs then. Common issues include being on a different queue than you expected (use dispatchPrecondition() to verify), being on a concurrent queue when you expected a serial queue, failure to use a barrier block on a concurrent queue (another good case where dispatchPrecondition() can help you), or accessing something both on and off the queue.
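
For illustration, a minimal sketch of those two suggestions (the Cache type and queue label are made up):

    import Dispatch

    // Hypothetical cache protected by a concurrent queue: reads go through
    // sync, writes use a barrier, and dispatchPrecondition() catches
    // "wrong queue" bugs early.
    final class Cache {
        private let queue = DispatchQueue(label: "com.example.cache",
                                          attributes: .concurrent)
        private var storage: [String: Int] = [:]

        func value(for key: String) -> Int? {
            // Concurrent reads are fine.
            return queue.sync { storage[key] }
        }

        func set(_ value: Int, for key: String) {
            // Writes must be exclusive: without .barrier this races with readers.
            queue.async(flags: .barrier) {
                dispatchPrecondition(condition: .onQueueAsBarrier(self.queue))
                self.storage[key] = value
            }
        }
    }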


Weak references turning nil is perfectly expected.

But you should never access a weak reference directly. When you need to use it, you should attempt to take a strong reference. That will either fail, which should be handled, or succeed, in which case the reference will not be nil and won't 'turn nil' while you hold it.
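
Something like this, with a made-up Node type:

    // Upgrade the weak reference to a strong one before using it, and
    // handle the "already gone" case explicitly.
    class Node {
        weak var parent: Node?

        func doWorkWithParent() {
            guard let parent = self.parent else {
                // The parent has been deallocated; handle it here instead of
                // being surprised by a nil somewhere else.
                return
            }
            // `parent` is a strong reference for the rest of this scope,
            // so it cannot "turn nil" underneath us.
            print(parent)
        }
    }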


Concurrency is still a work in progress for Swift, so yes, GCD still requires extra care to use.

If you want to stick to a pattern, I recommend the actor pattern (sketched below): create big objects that receive commands (as structs, not classes) from any thread but immediately enqueue them on their own private queue to be processed serially.

That's the most robust, no-brainer pattern, and the unofficial long-term target for Swift concurrency anyway.

You can use the Operation and OperationQueue classes as building blocks.
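
A rough sketch of that pattern with plain GCD (all names are made up; this is just one way to do it):

    import Dispatch

    // Commands are value types, so they can be sent from any thread.
    struct Command {
        let amount: Int
    }

    // "Actor-like" object: all state is only ever touched on its own
    // private serial queue.
    final class Counter {
        private let queue = DispatchQueue(label: "com.example.counter") // serial
        private var total = 0

        func send(_ command: Command) {
            // Safe to call from any thread.
            queue.async {
                self.total += command.amount
            }
        }

        func read(_ completion: @escaping (Int) -> Void) {
            queue.async {
                completion(self.total)
            }
        }
    }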


The benefit being that you only have to deal with this issue rarely, rather than all the time with manual memory management.


I find it extremely dangerous when issues arise rarely, but not rarely enough to stop thinking about them, and not as rarely as you would think. Look at this terrifying example from [1]. How long do you have to look at this code to tell whether there is a reference cycle or not?

    class ServiceLayer {
        // ...
        private var task: URLSessionDataTask?

        func foo(url: URL) {
            task = URLSession.shared.dataTask(with: url) { data, response, error in
                let result = data // process data
                DispatchQueue.main.async { [weak self] in
                    self?.handleResult(result)
                }
            }
            task?.resume()
        }

        deinit {
            task?.cancel()
        }
    }

[1] http://marksands.github.io/2018/05/15/an-exhaustive-look-at-...


I think the "catch" is that dataTask will retain self? Yeah, I agree that capture semantics with closures are a bit complicated to get right.


Yes, dataTask will retain self in spite of the fact that [weak self] is used in the only place where self is explicitly referenced. This is baffling.
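
As I understand it, that happens because a capture list only applies to the closure it's written on: for the inner closure to capture self weakly, the outer closure has to capture self first, and it does so strongly by default. One way to avoid it (not necessarily the linked article's exact fix) is to put [weak self] on the outer closure instead:

    import Foundation

    class ServiceLayer {
        private var task: URLSessionDataTask?

        func foo(url: URL) {
            // [weak self] on the *outer* closure, so neither closure keeps
            // the service layer alive while the request is in flight.
            task = URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
                let result = data // process data
                DispatchQueue.main.async {
                    self?.handleResult(result)
                }
            }
            task?.resume()
        }

        func handleResult(_ result: Data?) { /* ... */ }

        deinit {
            task?.cancel()
        }
    }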

After reading the linked article I have come to think that reference counting is not a good fit for a language that mixes object orientation with functional idioms including tons of closures.


Right. This is one of the things I don't like about automatic reference counting: it's almost a complete solution, but cycles require more thought to deal with (though in fewer cases) than manual memory management itself.


It would be interesting to compare the cognitive load of the Swift way and the Rust way. Rust gives you better guarantees, but you must annotate your code.


Rust doesn't just give you better guarantees. It gives you no runtime checking, and potentially even better performance than naive C, because it can automatically tell LLVM when there is no memory aliasing going on.[0] Also, it gives you more confidence, because you can, for example, give out an array to someone and be sure that it won't be modified. And usually that kind of confidence is enough to write safe, race-free parallel code.

I don't think Swift's runtime exclusivity checks give you any of that.

[0]: https://github.com/rust-lang/rust/pull/50744


Swift arrays have pass-by-value semantics (under the covers, they use copy-on-write to keep performance good) and are thus safe to pass to functions without concern that they'll be modified. (Unless the array is passed as an `inout` parameter, which requires a prepended "&" at the call site.)
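
For example (hypothetical functions, just to show the two modes):

    // Value semantics: the callee effectively gets its own copy (lazily, via COW).
    func mutateCopy(_ numbers: [Int]) {
        var numbers = numbers
        numbers.append(4)      // copies on write; the caller's array is untouched
    }

    // inout: the caller opts in with & and hands over write access.
    func mutateInPlace(_ numbers: inout [Int]) {
        numbers.append(4)      // modifies the caller's array
    }

    var values = [1, 2, 3]
    mutateCopy(values)         // values is still [1, 2, 3]
    mutateInPlace(&values)     // values is now [1, 2, 3, 4]
    print(values)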


Rust can use its own bovine superpowers here https://doc.rust-lang.org/std/borrow/enum.Cow.html to enable developers to write code that's invariant over whether it's dealing with a fresh copy of something, or just a borrowed reference. So, it's not quite correct to say that Swift's use of copy-on-write gives it better performance. Rust can express the exact same pattern quite easily, and do it with far more generality.


I don't mean to imply that Swift's array performance is better than anything except what its own performance would be if it didn't implement COW. (My intention was to convey that the normal way of passing arrays comes with confidence that your copy won't be modified by the callee.) Rust sounds nice, I'll have to try it sometime.


Are you sure that code expresses the same pattern? It looks like Rust's COW can only transition from Borrowed to Owned by making a copy. Swift's COW works by dynamically inspecting the reference count.
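
For anyone curious, a rough sketch of how a copy-on-write type can inspect the reference count in Swift, using isKnownUniquelyReferenced (the general technique, not the standard library's actual Array implementation):

    // The storage is a class, so it is reference counted; the wrapping struct
    // only copies it when a mutation happens while the storage is shared.
    final class Storage {
        var elements: [Int]
        init(_ elements: [Int]) { self.elements = elements }
    }

    struct CowArray {
        private var storage: Storage

        init(_ elements: [Int]) { storage = Storage(elements) }

        var elements: [Int] { return storage.elements }

        mutating func append(_ element: Int) {
            if !isKnownUniquelyReferenced(&storage) {
                // Someone else shares the storage: copy before writing.
                storage = Storage(storage.elements)
            }
            storage.elements.append(element)
        }
    }

    var a = CowArray([1, 2])
    let b = a              // shares storage, no copy yet
    a.append(3)            // copy happens here; b still sees [1, 2]
    print(a.elements, b.elements)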


Right, unless I'm mistaken, what you want is `Rc` with `Rc::make_mut`[0], not `Cow`.

[0]: https://doc.rust-lang.org/stable/std/rc/struct.Rc.html#metho...


Swift is designed to support dynamically linked code. If Rust had the same design goal, it would need a lot more dynamic checks too.


Dynamic linking isn't a problem. Rust's checking is local, and doesn't require checking things in other functions, just their signatures. Swift is the same.

There are some cases where dynamic checks are required (e.g. with reference counting/shared ownership: class in Swift, Rc/Arc in Rust). Swift will automatically insert checks in these cases, whereas they have to be manually written into Rust code (via tools like RefCell). Swift and the APIs inherited from Obj-C use a lot more reference counting than Rust does by default, plus it's more difficult for the programmer to reason about the checks (e.g. to be able to write code that avoids them, to optimise) when they're implicit.

In summary, Rust and Swift have essentially the same rules, but Rust requires manual code to break the default rules, whereas the Swift compiler will implicitly insert those necessary checks (in some cases).
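
To make the implicit checks concrete, this is roughly what the dynamic case looks like in Swift (as I understand the Swift 5 behaviour; the Holder type is made up). Accesses to class properties generally can't be proven exclusive at compile time, so the compiler inserts a run-time check that traps on overlap:

    final class Holder {
        var value = 0
    }

    func modify(_ a: inout Int, _ b: inout Int) {
        a += 1
        b += 1
    }

    let holder = Holder()
    // Two overlapping mutable accesses to the same stored property.
    // Under Swift 5's default settings this is rejected: a compile-time error
    // where the compiler can see the conflict, otherwise a run-time
    // "simultaneous accesses" trap from the inserted check.
    modify(&holder.value, &holder.value)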


That's the point: with dynamic linking the static signature encodes too much.

In Rust sometimes you need to switch from FnOnce to Fn, or from Rc to Arc. These changes are binary incompatible. You have to recompile every client.

Swift can't tolerate that restriction. UIKit doesn't want to have to pick from among Fn/FnOnce/etc for every function parameter, and commit to it forever.

Swift types and functions need to evolve independently from their clients, so static checks are insufficient. That's why you see more dynamic checks. If Rust had the same dynamic linking aspirations it would insert more dynamic checks as well.


No, Swift needs the static signature of the library for dynamic linking too, by default.

You are correct that Swift wants to be able to change signatures without recompiling clients ("resilience"), but this is very limited, especially for changes that would affect the `inout` checking (e.g. one cannot change an argument to be inout): https://github.com/apple/swift/blob/master/docs/LibraryEvolu... (note the list of non-permitted changes includes changing types).


This doesn't sound like a very good example. Fn inherits FnMut inherits FnOnce, so a library should just ask for the freest one it can support. Usually that just falls out of the meaning of the function. For example, a callback should only be called once, so you ask for an FnOnce, or maybe some function will be called in parallel, so you ask for an Fn. And with Rc, RefCell, Arc, etc. a dynamic approach is also pretty well-supported.


Apple's APIs last for years, in some cases decades. In this world, properties like "this object can only have one reference" and "this function can only be called once" become arbitrary constraints on the future.

Think about instances where you've refactored a RefCell to an Rc or a FnOnce to a FnMut, and consider what it would be like if you were unable to make that change because it would break the interface. It would be profoundly limiting.


> Dynamic linking isn't a problem.

Rust does not have a stable ABI at present, so practical use of dynamic linking would require falling back to some baseline ABI/FFI for everything that crosses shared-object boundaries (such as the C FFI), or lead to a requirement that a shared object must be compiled with the same compiler version and compile options as all of its users. This seems like it would be a severe pitfall.


I'm referring only to the thing being discussed in this post. To expand for precision: dynamic linking doesn't force the Swift compiler to insert more dynamic checks for exclusivity enforcement.


AIUI, Rust can do it "the Swift way" (a misnomer, since it was first) for any given object simply by adding a RefCell<…> or Mutex<…> type constructor, as appropriate. These types are commonly used in combination with reference-counting smart pointers, Rc<…> or Arc<…>. (The smart pointers focus on making the "multiple ownership" aspect work, but this entails that these types have to return shared references, which are ordinarily read-only. The RefCell or Mutex type constructors then focus on enabling "runtime-checked exclusive access" on top of the native, shared reference types).


The problem with RefCell/Mutex is granularity. If you wrap the entire object in one of them, the mutable borrow you need to write a field blocks anyone else from accessing the object at all, even to read unrelated fields. Note that Swift 5 exclusivity enforcement is imposing exactly this limitation on Swift "structs", but not its heap-allocated "classes".

In Rust, an alternative is to wrap individual fields in RefCell/Mutex, but that results in uglier syntax – you end up writing RefCell/Mutex and .borrow()/.borrow_mut() a lot of times – and adds overhead, especially in the Mutex case (since each Mutex has an associated heap allocation). There are alternatives, like Cell and Atomic*, that avoid the overhead, but have worse usability problems. I've long thought Rust has room for improvement here...


You can get rid of the heap allocation for Mutex by using parking_lot, and there is work to merge this into the standard library. The rest of what you wrote sounds right to me.


The cognitive load is higher in Rust: not only do we have to annotate the code, but to have the same semantics as Swift we have to write Rc<RefCell<T>> everywhere, which isn't that ergonomic.

On the other side, as discussed recently at CCC regarding writing drivers in safe languages, Swift's reference-counting GC generates even less performant code than Go's or .NET's tracing GC, so there is also room for improvement there.


I always assumed that RC has much better worst-case latency than tracing. Given that Swift is a language whose primary target is user interfaces, doesn't it make sense to optimize for that?


RC is the easiest GC algorithm to implement, that is all.

It is quite bad for shared data, because keeping the count correct requires synchronisation (atomic operations or a lock) and also trashes the cache, both bad ideas on today's modern hardware architectures.

Also, contrary to common belief, it has stop-the-world issues too, because releasing a graph-based data structure can trigger a cascade of count == 0 releases, resulting in similar behaviour.

Which, when coupled with a naive implementation of destructors, can even cause stack overflows due to the nested calls as the data is released.
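
A tiny illustration in Swift (whether this actually overflows depends on the platform, stack size and optimizations):

    // Releasing the head of a long class-based list triggers a chain of
    // deinits, each nested inside the previous release.
    final class ListNode {
        var next: ListNode?
        init(next: ListNode?) { self.next = next }
    }

    var head: ListNode? = nil
    for _ in 0..<1_000_000 {
        head = ListNode(next: head)
    }

    // Dropping the last strong reference releases node 1, whose deinit
    // releases node 2, and so on: a deep cascade that a naive
    // implementation performs recursively.
    head = nil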

So when you start adding optimizations to deal with delayed destruction, non-recursive destruction calls, lock-free counting, and a cycle collector, you end up with machinery similar to a tracing GC anyway.

Finally, what many seem to forget: just because a language has a tracing GC doesn't mean that every single memory allocation has to go through the GC.

When a programming language additionally offers support for value types, stack allocation, global segment static allocation and manual allocation in unsafe code, it is possible to enjoy the productivity of a tracing GC while having the tooling to optimize memory usage and responsiveness when required to do so.

Having said all of this, RC makes sense in Swift because it simplifies interoperability with the Objective-C runtime. If Swift had a tracing GC, they would need a similar optimization like .NET has to deal with COM (see Runtime Callable Wrapper).


But how many times has the problem truly happened? In my experience it is so rare that the experience is much like having a GC.


It depends on your domain. Any data structure that has links in both directions becomes a pain, and it's pretty common to have trees like that. It's usually solved by having parents own the children but not vice versa, but then you run into cases where you want the whole thing to live so long as someone holds an owning reference to a child.
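
The usual Swift shape of that workaround, for concreteness (hypothetical TreeNode type):

    // The parent owns its children strongly; the back-pointer is weak so
    // there is no cycle. The price: a child alone cannot keep the rest of
    // the tree alive.
    final class TreeNode {
        weak var parent: TreeNode?
        var children: [TreeNode] = []

        func add(_ child: TreeNode) {
            child.parent = self
            children.append(child)
        }
    }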


Agreed. It's similar in Rust, where circular links are also hard to do. Easy graphs are a killer feature of a GC.

But it's also a localized problem. I had more issues with Obj-C in the past; some of Swift's ergonomics make it even rarer.

However, it certainly sucks for the ones that hit this! I'm building a relational lang in Rust and damn, I miss a GC!


Many people in the Rust community use some sort of ECS (entity-component system) in cases where something like a GC is needed. The whole point of an ECS is that it works a lot like an ordinary GC runtime or an in-memory database: the underlying representation ensures that the system knows about all the "entities" it needs to, and can reliably trace linkages between them. It might be easier to just use Go in cases where tracing GC is needed, though.


I've only heard of using ECS for games. I'll have to look into them more closely.


When would you want the graph to stay alive because some leaf node is being retained, but not know about it enough to maintain a ref to the root node?


In a language with tracing GC, you'd use a strong reference in both directions, so anybody who holds the child can always walk their way to the root (if you expose those references). Without such GC, you basically have to pass the root around, because it's what's holding all those children alive.


- Doubly linked lists.

- Cocoa delegates that keep references to some parent of the objects that they're delegates of.

- Some event listeners (in the same way as delegates).

I wouldn't say it's "rare". It popped up pretty often for me when I was writing Objective-C.


What’s not solved with weak refs?


What's not solved with manual memory management? The point is not that you can't solve it; the point is that reference counting isn't a complete solution for garbage collection, unlike a tracing GC, for instance.


Weak reference counting is still reference counting.


It can always be solved with weak references.


Only if you know that you haven't solved it already: https://news.ycombinator.com/item?id=19093677


Just like a tracing GC...


You can add a cycle collector on top of a reference-counted language. It's been discussed for Swift, but there are downsides which might make it not worth adding.


Looking from a distance, what Swift does is actually take best practices from modern C++ (14+) and incorporate them at the language level instead of introducing new library tools.

If you regularly code C++, the "fuck, yes!" moment is there when you learn Swift.


The flip side is that Swift has copied some bad parts of C++ as well: ridiculous compile times, value-type performance traps, ever-increasing complexity, mediocre IDE support, no ABI (I know, coming soon).

Either way, to me Swift feels much closer to C++ than to Objective-C in spirit. I don't think it's a coincidence that there are at least two C++ committee veterans among its contributors :)


You're actually describing what Rust is doing, more than anything. Swift has a lot more in common with Objective-C than C++, even though it does move away from the most challenging parts of Objective-C itself.


And here I thought this was going to be something about enforcing some exclusivity of Swift to Apple devices...


With their recent attempt at patenting programming-language concepts, you'd be excused for thinking so. I thought the same.


I'm not totally versed in Swift. In the example provided is it ambiguous because the closure captures `count` and count itself is being modified by the method calling the block?

I don't understand why this is an example of an unsafe operation. Wouldn't clearly defined behavior of closures clarify the "3 or 4" question?
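
For anyone without the article open, the example under discussion is roughly this (paraphrased from memory, so details may differ):

    func modifyTwice(_ value: inout Int, by modifier: (inout Int) -> Void) {
        modifier(&value)
        modifier(&value)
    }

    func testCount() {
        var count = 1
        // `count` is passed inout *and* captured by the closure, so the
        // closure's read of `count` overlaps the in-progress modification.
        // Does this print 3 (capture a copy) or 4 (alias the variable)?
        // Swift 5's exclusivity enforcement rejects the overlap (at compile
        // time or with a run-time trap) instead of silently picking one.
        modifyTwice(&count) { $0 += count }
        print(count)
    }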


> I don't understand why this is an example of an unsafe operation. Wouldn't clearly defined behavior of closures clarify the "3 or 4" question?

I believe that it's possible to specify this, and that's basically the behavior we had before SE-0176 was implemented. The issue with this is that it was slower for a dubious benefit (the semantics are obscure and non-obvious), so it was decided that it's just better to disallow this and get the benefit of clear behavior and better optimization opportunities, at the cost of removing this somewhat uncommon and not-all-that-hard-to-rewrite-for-clarity pattern.


> it was slower for a dubious benefit

Isn't this kind of arguable? The benefit is that it avoids the need to make unnecessary copies in some cases, as is basically acknowledged in the article:

> The exclusivity violation can be avoided by copying any values that need to be available within the closure

Right? And in addition to the local cost of the extra copy, there's also the more ubiquitous cost of these run-time checks. Yes, there's the potential benefit of better optimization due to the non-aliasing guarantee, but I think it's far from clear that it's an overall performance win.

While I think it's reasonable to adopt a universal "exclusivity of mutable references" policy in order to achieve memory safety and address a fear of a (vaguely-defined) notion of "mutable state and action at a distance" (referred to in the article), particularly for a language like Swift, I think it would be improper to dismiss the associated costs, or even to imply that the costs are well understood at this point. Or to imply that this policy is, at this point, known to be an optimal solution for achieving memory (or any other kind of code) safety.


I'm going to refer you to the proposal, which explains the rationale behind the change better than I possibly could: https://github.com/apple/swift-evolution/blob/master/proposa...


I'm assuming that -Ounchecked also removes the runtime diagnostic?


Yes, it removes exclusivity checks and bounds checks.


Neat. Maybe by Swift 10 they will have Windows builds and I can check it out.


How do you use Swift on Ubuntu?


You should be able to download a toolchain from here for your version of Ubuntu and have it work: https://swift.org/download/



