App Developers on Swift Evolution (curtclifton.net)
83 points by ingve on Dec 22, 2015 | 77 comments



Apple's engineers are right on this one. From a performance point of view, the only way Java (and dynamic languages like JavaScript) get away with having almost everything be virtual is that they can JIT under a whole-program analysis, add final automatically in the absence of subclassing, and deoptimize in the presence of classloaders. In other words, Java optimistically compiles under the assumption that classes that aren't initially subclassed won't be, and if that assumption later turns out to be invalid it deletes the now-incorrect JIT code and recompiles. This isn't an option for Swift, which is ahead-of-time compiled and can't recompile on the fly.

If you don't allow an ahead-of-time compiler to devirtualize anything, you're going to have worse method call performance than JavaScript.
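
To make that concrete, here's a minimal Swift sketch (names made up): the only thing that lets an ahead-of-time compiler turn a method call into a direct, inlinable call is the guarantee that no override can exist.

    // Hypothetical framework class: render() must be dynamically
    // dispatched, because a subclass in some other module might
    // override it.
    public class Widget {
        public func render() { /* ... */ }
    }

    // Marking the class (or just the method) final tells the compiler
    // no override can exist, so the call below can be devirtualized
    // and potentially inlined.
    public final class Label {
        public func render() { /* ... */ }
    }

    func draw(label: Label) {
        label.render()   // direct call
    }

A JIT gets the same effect speculatively and can back out later; an AOT compiler only gets it from guarantees like final.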


Yet Obj-C seems to get by quite fine, and everything is super-dynamic there.

Being able to dynamically hack behavior is a huge boon when coding against a closed-source, 3rd party platform.

Doesn't matter so much in an open-source context, where you can just modify code and ship your own version of the library if necessary.


Depends what you mean by "just fine". Objective-C method dispatch is very slow. It's significantly slower than that of JavaScript, for example.


Just fine in the sense that it's the dominant language on iOS, and iOS sets the bar, IMO, for snappy experiences on modern platforms.

Stuff implemented in java and javascript, not so much.

Maybe saying objc is cheating a little bit, because it's so easy to drop into straight C to optimize. But stuff like UITableView (edit: scroll => table) is all heavily based on objc message sending, and it's easy to get great scrolling perf using that api.

I don't know the exact number, but you can get a lot of message sends done in 16ms.

(Another example -- GitUp, insanely fast git client, objc.)

(On the other hand, I do remember that C++ based BeOS on 66 MHz PowerPCs back in the day was double-insanely fast, so maybe Obj-C is slow, on a language level, and we just don't know it because everything else is so much slower, on an implementation level. But then again that stuff was all virtual calls. I just don't buy the argument that at the _system framework_ level dynamic dispatch is a barrier to good performance.)


> Just fine in the sense that it's the dominant language on iOS, and iOS sets the bar, IMO, for snappy experiences on modern platforms.

> Stuff implemented in java and javascript, not so much.

I agree with the data but not the conclusion. iOS' performance is great in spite of using Objective-C as its language, not because of it.

The only performance-related features that Objective-C brings to the table are (a) being able to piggyback on the optimizations implemented in popular open source C++ compilers; (b) being compatible with C, so you can easily inline C into Objective-C when Objective-C's slow performance bites you; (c) not requiring tracing garbage collection. When you're actually in heavy Objective-C land with method calls in tight loops, performance is pretty bad, both because message sending is slow and because methods can't be devirtualized. Apple knows this, and they've basically reached the limit of what they can do without incompatible changes to the Objective-C semantics. Hence the performance-related design decisions in Swift.

(As for UIScrollView/UITableView, I agree that they're very fast relative to Android or current Web browsers for example. I know the reasons for this and they have nothing to do with the implementation language. Algorithmic and engineering concerns often trump programming language performance.)


(Oops, I meant table view, not scroll view. Thanks.)

I agree with you on the specifics, but somehow still reach different conclusions. :)

Edit: I guess if you take the lens of wanting Swift to replace the combination of ObjC + plain C, what you're saying makes sense.

It's a shame though if you end up losing the very-helpful ability to dynamically modify system provided behavior.

And even for relatively performance intensive stuff, like table scrolling, objc is plenty fast.

Every language obviously has its domain of performance-appropriateness -- C too will hit its limits; x264 w/o the asm optimizations is ungodly slow, for example.

I just think for "apps", generally, objc is a speedy language. Empirically this is true. Explanations that call for this being true "despite the language" are suspicious to me; if it's not the language, what's the magic pixie dust?

And personally I suspect the swift folks aren't going the more static route because of actual real-world user-facing performance walls they've run into, but because that's what they're into -- they're C++/static language/compiler guys.

Last example: Look at the most performance constrained stuff apple is currently doing, the watch. Think that's Swift? I'm pretty sure it's not.


> (As for UIScrollView/UITableView, I agree that they're very fast relative to Android or current Web browsers for example. I know the reasons for this and they have nothing to do with the implementation language. Algorithmic and engineering concerns often trump programming language performance.)

I'd be curious to hear more about the reasons. I'm not familiar with the web side, but Android layout performance can be painful, especially on older devices running Dalvik.


>iOS' performance is great in spite of using Objective-C as its language, not because of it.

Nope. Objective-C is a great language for performance. Remember the 97-3 rule. That is, only roughly 3% of your code is responsible for almost all the performance. You just don't know which parts those are.

The dynamic features of the language give you awesome productivity to more quickly get to the point where you find out which 3% to actually optimize heavily. Doing this optimization is straightforward because you have all the tools at your disposal, the road to them is smooth and the performance model is predictable.

I have repeatedly achieved performance that's better (yes!) than equivalent C programs, and with a little bit of work you can actually maintain nice OO APIs.

>(b) being compatible with C so you can easily inline C into Objective-C

I really don't understand where this (common) misunderstanding is coming from. Objective-C is C, or more precisely a superset of C. You don't "inline C into Objective-C". You use different aspects of Objective-C as appropriate. If you are not capable of using the full capabilities of the language, that is your fault, not a problem of Objective-C.

This seems to have been lost, with people (inappropriately) using Objective-C as a pure object-oriented language. It is not. It is a hybrid object-oriented language. In fact, the way it was intended to be used is to write your components using C and use dynamic messaging as flexible packaging or glue.

> because methods can't be devirtualized

Sure they can. I do it all the time:

   // Look up the implementation once, outside the hot loop.
   SEL msgSel = @selector(someMessage:);
   IMP devirtualized = [object methodForSelector:msgSel];
   // Call the cached implementation directly (cast the IMP to the
   // method's real signature first if strict prototypes are enabled).
   for (NSUInteger i = 0; i < LARGE_NUMBER; i++) {
       devirtualized(object, msgSel, arg[i]);
   }
Did you mean they can't be automatically devirtualized by the compiler? I hope you understand that these are two different things. Of course, it would be nice to have compiler support for this sort of thing, which would have been miles easier than coming up with a whole new language:

   for (i=0;i<LARGE_NUMBER;i++) {
       [[object cached:&cache] someMessage:arg[i]];
   }

Or some other mechanism using const.

> reached the limit of what they can do without incompatible changes to the Objective-C semantics. Hence the performance-related design decisions in Swift.

Swift performance is significantly worse and waaaayy less predictable than Objective-C, and that is despite the Swift compiler running a bunch of mandatory optimizations even at -O0.


Calling via a function pointer isn't devirtualization. It's still an indirect call, and does not allow the function to be inlined since the actual function is not known until runtime. It merely gets you from something 2-4x as expensive as a C++ virtual call to something roughly as expensive.


Last I checked, a C++ virtual function call loads the method pointer via an indirect memory load. Hard for the CPU to speculate through, so typically a pipeline stall.

A function pointer that's in a local variable, and so loaded into a register, is a completely different beast, as the measurements bear out.

In my measurements, a message-send is ~40% slower than a C++ virtual method call, whereas an IMP-cached function call is ~40% faster, and slightly faster than a regular function call.


The vtable for an object being actively used will be in L1 cache, and when a virtual function is called in a loop, Intel's CPUs have been able to predict the target for many, many years. ARM may not; I've never had reason to deeply investigate virtual call performance on iOS.

Finding calling via a function pointer to be faster than calling directly suggests that you were not actually measuring what you thought you were measuring.


Actually, Objective-C message sending is extremely fast, only slightly slower than a C++ virtual method call and in the same ballpark as a function call (the relative cost has drifted from 40% to 2x higher depending on OS version and machine architecture) without much of an impact on applications.

In addition, message-sending is extremely cheap compared to other operations, for example roughly 50x faster than object allocation.

Last but not least, in the few cases where messaging does become an issue, it is trivial to replace with more optimized dispatch, ranging from an IMP-cached send to a static inline function depending on the performance requirements and API boundaries in question.


When does another language then become a case of premature optimization? I'm serious. If performance is an issue, you can profile Objective-C code and then either deal with the runtime directly, or rewrite critical portions in C. But I would think that there is a lot of Objective-C code out there that runs just fine without any fancy tricks like what I'm describing.


Sure, but that doesn't mesh with Swift's goals, as I understand them. The team seems to be positioning Swift as a language that will scale down to low-level performance-critical pieces in a way that Objective-C can't (though I may be wrong here since I'm not involved in that community very much).


Well, they say they want that. But this is at best aspirational, it isn't true. Certainly not so far and as far as I can tell not in the foreseeable future. Quite frankly, I doubt it will ever be true, because in essence their claim is that they are building the Sufficiently Smart Compiler and despite all the problems now it will be totally awesome when they're done.

Swift currently can't hold a candle to Objective-C on the performance front, which is why all the heavy lifting is done in Objective-C. As a small example, see how many articles there are on "Swift JSON parsing", then check what percentage of those call into the NSJSONSerialization Objective-C class to do the actual, you know, parsing.


No, I think you are right. My understanding is that Swift is supposed to scale down, as you say, and performance there is critical. I was considering things from the point of view of higher level programming. Thanks.


They aren't, are they? Cached method calls are fairly fast from what I've read.

[1] https://www.mikeash.com/pyblog/performance-comparisons-of-co...


According to that chart, Obj-C method calls are either 4.5x or 7.8x slower than C++ virtual method calls, depending on whether you're looking at 32- or 64-bit code. That alone is pretty bad.

But it's even worse than that: I'm talking about devirtualized calls, which are far faster than virtual calls in C++. More importantly than just the call time, devirtualization can have outsized performance impact because it enables inlining, converting intraprocedural optimizations to interprocedural optimizations. The difference between inlining and not can easily be a 10x-level performance difference with an optimizing compiler like LLVM.


The chart is wrong for message sends. I've been measuring this for a long time, and get ~2.1ns for a message send vs. 1.2ns for a cached message send on a comparable machine.

If you want inlining, use a static inline function.


I immediately agreed with what you wrote, but now there's a voice in the back of my head asking "if this is so obviously right, why is it just happening now, after 2.0? It's not like this Lattner guy doesn't know what method calls cost..."

A post last week featured Lattner writing about Swift treating methods as final when they weren't declared that way (https://lists.swift.org/pipermail/swift-evolution/Week-of-Mo...). I wonder if the early thought was that they'd get enough mileage out of those "cheats", and then later decided that final should be the default.


Probably. It's hard to predict how language design decisions will impact performance (and everything else, such as ease of use and productivity). That's why it's good to have an iterative process (like Swift is now adopting) instead of being rigid, inflexible, and opinionated.


>If you don't allow an ahead of time compiler to devirtualize anything, you're going to have worse method call performance than JavaScript.

Well, Swift as it is (i.e. before adopting final by default) has great performance.

So what gives?


Performance isn't one dimensional. Maybe the benchmarks you're looking at aren't gated on method call performance. Maybe the authors of whatever benchmark you're looking at wrote final manually. Maybe they used freestanding functions instead of methods. Maybe your benchmarks aren't calling functions at all. Who knows?

The point is that anyone who has worked on compilers for OO languages will tell you the same thing about devirtualization. This stuff goes back to Smalltalk implementations in the 80s.


> Maybe the benchmarks you're looking at aren't gated on method call performance.

Most real world programs aren't gated on method call performance.

And things have changed since the '80s. Heck, even in the '90s, Squeak Smalltalk was plenty fast for media work using a plain byte-code interpreter (no JIT), because the heavy lifting was done in primitives. Just like Tcl was used in supercomputing.


This is a cultural issue much more than a technical issue. I can relate to both sides. "I want to be able to patch anything myself," versus "I want to be able to reliably reason about how my module works."

The rhetoric on both sides can quickly get stupid. This is one of the major ways in which languages are divided. Ruby folks are used to being able to monkey patch everything, and Python folks tend to avoid it even though they can. JavaScript programmers are divided on the issue: should you add a method to Array.prototype, or is that just asking for trouble? I've certainly seen my own fair share of bugs and crashes caused by programmers substituting types where they shouldn't, and seen my fair share of frustrating limitations in sealed modules that should just expose that one method I need, dammit.

Objective-C leaned towards the "go wild, do anything" approach, which grew quite naturally out of the minimalistic "opaque pointer + method table" foundations for object instances. One of the reasons that you make such a choice in the first place is the ease of implementation, but in 2015, writing your own compiler is easier than ever. So Apple is quite naturally considering application reliability as a priority (since they're competing with a platform that uses a safer language for most development).

Unfortunately, it's a cultural fight where one side or the other must necessarily lose.


Because I have no experience with Swift, could someone more informed explain this to me? How would my subclassing an Apple-provided class and overriding its methods affect anyone but me? In Python, if I write:

    class MyRequest(requests.Request):
        def get(self): do_something_stupid()
then my coworkers can still use `requests.Request` itself without getting my bad behavior, and if they find themselves looking at a flaw in my code, they know not to blame the upstream author. What's different about the Swift situation?

I'm kind of horrified at the idea of an OOP language that wouldn't easily let me override a parent class's behavior. If I break something in the subclass, it's on me to fix it. That never reflects poorly on that parent class.


Sure, it only affects you. The problem is that changes to the superclass can now break your code.

For example, perhaps right now the function looks like this:

    def get(self, url, method='GET'):
        ...

    def post(self, url):
        self.get(url, method='POST')
You override get, to add some additional functionality. All is fine.

Then somebody realizes that the original code was silly, and rewrites it:

    def get(self, url, method='GET'):
        if method != 'GET':
           print "DEPRECATED!"
        self.request(url, method=method)

    def post(self, url):
        self.request(url, method='POST')
It seems to do the same thing, and probably passes all the same tests. But suddenly, now your code doesn't get called for post requests and your additional functionality breaks in a mysterious way.

Perhaps it's still your fault for doing a bad job subclassing, but it's going to look like it's the fault of the person who fixed the parent class.
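
For comparison, a rough Swift sketch of the same scenario (names hypothetical; today you'd write final yourself, the proposal would make it the default for classes that don't opt into subclassing). The coupling surfaces at compile time instead of breaking silently later:

    // Framework side.
    public final class HTTPClient {
        public func get() { /* fetch */ }
        // post() happens to be built on get() today; a later release
        // might change that, which is the hazard shown above.
        public func post() { get() }
    }

    // App side: the override that broke silently in the Python example
    // is rejected by the compiler instead:
    //
    //     class MyClient: HTTPClient {        // error: cannot inherit
    //         override func get() { ... }     // from a final class
    //     }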


> changes to superclass can now break your code

The same exact thing happens when using composition.


When a point release of the operating system breaks your app, the user doesn't care (or understand) whose fault it is, they're just ticked off. Apple cares about that.


How is that different from your code not working because of an Apple bug (which they take months or years to solve in some cases)?

The end result is still a broken app (just due to a different reason), and the user will blame either the third-party programmer or Apple, usually the programmer.


Part of the reason why it's hard to fix bugs that people have worked around is because the fix often breaks the workarounds. Which causes a problem for Apple either way; either they can leave the bug in and let everyone work around it, or they can fix the bug and break all of the myriad apps that use the workaround. Depending on the circumstances, they can actually put workarounds into the frameworks to preserve bugs for apps linked against older versions of the SDK or for apps with specific bundle IDs, but not every fix is amenable to that kind of solution.

Which is to say, there's a good argument for the idea that reducing the ability to monkey-patch framework classes will make it much easier to actually fix bugs in the frameworks.


>Part of the reason why it's hard to fix bugs that people have worked around is because the fix often breaks the workarounds.

That was a problem for Microsoft. Apple, from all we publicly know from their releases, never treated that as a problem. They went ahead with the fixes even if they broke previous workarounds.


If an OS upgrade breaks lots of programs or even one heavily used program, chances are users will blame Apple, not the developers of said apps.

Also, part of the reason that it takes them long to fix certain bugs may be concerns about seemingly internal changes breaking applications. They cannot refactor methods (rename or even change an argument list) in any class they ship because someone, somewhere, might override one of its methods.

In that light, I think the change makes sense. Of course, Apple could just add private/sealed to all their classes, if only just before shipping, but why make the typical choice more work to write than the exception?


>If an OS upgrade breaks lots of programs or even one heavily used program, chances are users will blame Apple, not the developers of said apps.

The user just knows said program doesn't work. And an OS upgrade won't necessarily break lots of programs a user has; it can break this or that program in subtle ways, or classes of programs (e.g. those using the X API) of which a typical user will have only a few installed.

E.g. if it breaks third-party camera apps, users might not normally have 5 of those, to see that they all break at the same time, but just 1 additional on top of Apple's (if that).


The user knows the program worked yesterday, before (s)he upgraded the OS. Correlation is not causation, but I think you will get more hits googling "<OS> update breaks <application name>" than googling "<OS> update uncovers bugs in <application name>" for whatever currently used choices you pick for "<OS>" and "<application name>".


Users don't always use all of their apps every day. If you go to the app 20 days after updating the OS, good luck remembering the OS update 3 weeks ago and deducing it was that which broke it.

Especially since, with the ability for apps to upgrade automatically, many don't even keep track when this or that app was updated.

Of course it usually is even more obscure, e.g. the bug only affects PART of the app's functionality, a part which you might have not invoked in your last run before the OS update. So, another deduction for many would be that that feature works intermittently because it's not coded well, etc.

Besides, even when the user can deduce that the OS broke the app, they don't complain to Apple, but to the software maker "fix your app", "where's the update release?", etc.


Users even blame us when the App Store fails to download updates, and leave 1-star reviews.


The difference is that in that case the bug never makes it out to the end user.

Apple writes a new API. It has a problem. You try to adopt it, it doesn't work. You don't just ship that, you do something else.

Unpleasant, but doesn't hit the user.


You missed the obvious fact that the buggy change might happen to an API you already use. It doesn't have to be a new API -- just a new release of it by Apple.

So the problem remains: you can't easily extend the class with your own fix to bring out a bugfix release for your app.


If you're the final user of an API, then yeah, go for gold and change whatever you want.

But if you're working in a team (or contributing a patch to a GitHub project), and you override something crucial like String.substring(), then things could get out of hand.


As pcwalton points out above, allowing (potential) dynamic dispatching behavior for every method is a significant performance drag, especially when (as with Swift) you won't be supplying a JIT to devirtualize and inline at runtime.


I don't really agree that JS devs are divided on this issue, though of course it's a huge community and I could just be talking about my corner of it. But we saw what happened when two libraries wanted to alter the same built-in. Polluting the global namespace and altering built-ins is a no-no.


It was very divided 4-5 years ago, predominantly when underscore and jQuery were both champing at the bit to be the top utility library. It's shifted towards not monkey patching, but it's not absolutely decided for many developers.


It was very undivided 8 years ago - it was common knowledge to never, ever override the prototypes of built-in objects. In fact, a major reason that JQuery won out over Prototype & Mootools was because the latter two monkey-patched built-in prototypes, and as a result were incompatible with each other or with any other library that overrode built-in methods.

IIRC, this was an interview question at Google when I applied in 2008, and I think one of the reasons I got the job was because I was aware of all of the pitfalls of it.

I managed to skip using underscore, but from a quick glance through the library, isn't the reason it's called underscore because it provides '_' as a namespace for all the utility functions, rather than altering Array.prototype the way Prototype did?


I read the underscore source code a few years ago. My memory may be failing me, but I don't think it altered any builtins.


When you control all the source, being dynamic is stupid.

When you're coding against a closed-source, external, 3rd party platform, being dynamic is really helpful.

Simple as that, IMO.

(From a distance, Swift sure does have a strong C++ vibe.)


Final by default is correct, since otherwise you are effectively exposing and having to maintain an extra stable API towards subclasses, which is a nightmare and won't be done properly unless it's intentional.

In fact, having virtual methods at all outside of interfaces/traits/typeclasses is dubious language design since it meshes together the concepts of interface and implementation and makes it inconvenient, impossible or confusing to name the implementation instead of the interface.

The issues in the discussion are instead due to Apple's framework code being closed source and unmodifiable by users and developers, and also buggy according to the author.
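
A small Swift sketch of that style (names hypothetical): dynamic dispatch exists only at the protocol boundary, the implementation stays final, and the extra "API towards subclasses" never comes into existence.

    // The customization surface is a protocol; this is the only place
    // a stability contract (and dynamic dispatch) exists.
    protocol Renderer {
        func render()
    }

    // The implementation is final: callers can name and use it, but
    // nobody can subclass it and grow dependencies on its internals.
    final class CoreTextRenderer: Renderer {
        func render() { /* ... */ }
    }

    // Customization happens by supplying a different Renderer, not by
    // overriding CoreTextRenderer.
    struct Document {
        let renderer: Renderer
        func draw() { renderer.render() }
    }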


I'm an app developer. This change will absolutely break some of my stuff, and it's going to suck. Even with that, I do feel OP is taking an overtly political stance (even using the word "banned".) This change is perfectly reasonable within the already-strict mindset of Swift. Having a less-strict language just to work around potentially buggy Apple frameworks would be setting a bad precedent.

Using "final" also has some performance wins by reducing dynamic dispatch. [1]

[1] https://developer.apple.com/swift/blog/?id=27
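
If I read that post right, you don't even have to write final everywhere; a rough sketch with hypothetical names: when a member is private, no override can exist outside the file, so the compiler can infer final and use direct dispatch.

    class Badge {
        private var count = 0

        private func refresh() { /* update the view */ }

        func increment() {
            count += 1
            refresh()   // can be devirtualized once inferred final
        }
    }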


What will it break? Name one single thing.

Remember, the only change here is the _default_ for classes that don't specify it. As I stated on the list earlier, I guarantee you 100% that when Apple starts vending framework classes written in Swift they will make an intentional decision on every single class as to whether it is final or not. And the language default won't impact that at all.


If Swift changes the default, do you think they'll audit all of UIKit and AppKit to fix it? They're still transitioning things over to Swift piece by piece. I imagine they'll let the default work the way it is unless there is a good reason not to.


Being able to run-time patch an API installed on a device is an entirely different thing than being able to modify and distribute an open source framework.

Both are useful but they aren't the same thing. In one case, you want to be able to get your code running on devices that are in the wild now. In the other, you want your fixes to go upstream so you can remove any hacks needed to do the former.


> since otherwise you are effectively exposing and having to maintain an extra stable API towards subclasses

How so? I override what I want at my own peril. I'm not going to complain to the author that his change broke my code.

> Apple's framework code being closed source and unmodifiable by users and developers, and also buggy according to the author.

Apple is constantly breaking things. If we can't extend classes, then we'll use composition, and at the end of the day, what's the difference? I need code that sits in front of theirs so I can make it work correctly.
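
For what it's worth, a rough Swift sketch of that composition approach (all names hypothetical): wrap the framework class, forward to it, and put the fix in front.

    // Stand-in for a closed-source framework class we can't subclass.
    final class FrameworkFormatter {
        func format(_ value: Int) -> String { return String(value) }
    }

    // Composition: hold the framework object, forward to it, and apply
    // the workaround before delegating.
    final class FixedFormatter {
        private let wrapped = FrameworkFormatter()

        func format(_ value: Int) -> String {
            let sanitized = max(value, 0)   // hypothetical bug workaround
            return wrapped.format(sanitized)
        }
    }

The practical difference from subclassing is that existing code holding a FrameworkFormatter never sees the fix; you have to route everything through the wrapper yourself.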


I find this slow march in "modern" language design toward completely static compilation models troublesome to the extreme. It feels like a significant historical regression; they speak as if Smalltalk and the Metaobject Protocol are things to revile and shun, not elegant programming models that we as programmers should aspire to understand and use in our own programs.

To elide these features as a matter of principle implies that you believe your compilation model is perfect, and is able to deduce all information necessary for optimal compilation of your program statically, perhaps augmented with profiling information you have obtained from earlier runs. It also makes upgrading programs for users more difficult since patches must be applied in a non-portable manner across programs. I shan't mention the fact that they make iterative, interactive development an ordeal. The Swift REPL is progress (although REPLs for static languages are nothing new), but it still pales in comparison to the development and debugging experience in any Smalltalk or Lisp system.

There is no reason why the typing disciplines Swift is designed to support should demand the eradication of all dynamism in the runtime and object model.

If you have never heard of the Metaobject Protocol or similar concepts before, here is the standard reference: https://mitpress.mit.edu/books/art-metaobject-protocol

This discussion also reminds me of this essay by Richard P. Gabriel: https://www.dreamsongs.com/SeatBelts.html


OK, but Swift is an ahead of time compiled language, unlike Lisp or Smalltalk. That makes the tradeoffs completely different.


Absolutely agreed.

I honestly don't care about nicer switch statements when there are big classes of problems that cannot be solved because of the lack of reflection.


Smalltalk and the Metaobject Protocol are things to revile and shun.

They aren't elegant programming models and they aren't even internally consistent programming models.

If you read Smalltalk papers, they aren't really compsci papers at all. They're musings about some ideas they tried and how well they thought they worked afterwards.

The language world is moving to a more coherent, formal, mathematical understanding of type systems, programming languages, and automatable proof systems.


Methods in C# are non-virtual by default and almost every class in the core libraries is sealed and the world hasn't ended in .NET land.

I have definitely done some hacks to work around bugs in frameworks I've used. But I've also had to deal with users who broke my libraries or inadvertently wandered into the weeds because it wasn't clear what things were and weren't allowed.

This is one of those features where the appeal depends entirely on which role you imagine yourself in in scenarios where the feature comes into play.


And I have heard people say that the "sealed" class in the framework is often a mistake, as they have experienced pretty much the limitations described here. In other words, Curt Clifton's theory has a lot of merit in practice.


But .NET definitely struggled with cultural issues around making APIs virtual. Because of Microsoft's strong 'no breaking changes' rule, they were extremely cautious about adding virtuals; in my experience it was not unusual to see it costed at 1 week of dev/test time for a single virtual on an existing method (in WPF).

C++ is also non-virtual by default and I think it's worked out OK.


Some of the cultural issues around avoiding virtual came from hard lessons with v1.0. After shipping v1 they realized that there was a large set of security and compatibility issues with not having framework classes sealed. No one I worked with really liked the idea of sealing all our classes, but the alternative was an insane amount of work. It is just too hard to hide implementation details from subclasses. If you don't hide the details then you can never change the implementation.

I can't speak for Swift, but there were also real security challenges. It is hard enough to build a secure library, but to also have to guard against malicious subclasses is enough to make even the most customer-friendly dev run screaming. My team hit this and it cost us a huge amount of extra work that meant fewer features. In vNext we shipped sealed classes and more features, and the customers were happier.


Sealing classes by default is troubling. I'm having a bad feeling about the future of Swift. I also think it's growing too big already.


The fragile base class problem shows that not sealing is also troubling, much more than sealing them.


It can be, but the FBC problem is an architectural issue, whereas open/closed violation is a language issue. I feel that using the latter to deal with the former is potentially using a hammer to swat a fly on a glass door.


Good point! However, in the case of Swift, it's just so closely related to UIKit and/or Cocoa.


Hmm...that problem that both Objective-C and Smalltalk don't have...


Objective-C lets you add ivars without breaking subclasses (by computing and caching ivar offsets at runtime rather than compile time), but you still can't safely ever add methods to an existing class without running the risk of a subclass unintentionally overriding that method.


All OO languages suffer from it.


How big of a problem is it really?


Quite big for library writers.

You cannot ever be sure how changes to the public / protected API of a class affect classes further down the inheritance tree.

Especially bad is when methods that were never supposed to be overridden change their semantics, leading to erratic behaviour in classes that have overridden them.


Yes, because moving from vtables to message passing eliminates a whole class of semantic errors whose genesis is independent of the implementation details of the language. Thanks for making this clear.


The whole argument boils down to how developers have been treated with Apple's libraries so far. The submission/review process is quite prohibitive, and the core libraries (like almost every piece of software) have flaws. Together with the opaque Radar bug reporting / bug resolution system, you had to resort to method swizzling to keep your sanity (I guess the PSPDFKit guys can speak volumes on that).

Going forward, I hope Apple sticks to the open-source approach they took with Swift, and that more of the libraries will follow, with Apple encouraging more community participation.


This is a correct decision. APIs have to be designed to support subclassing properly. It's also a performance win.


Interesting, it's like how Swift forces you to think about nulls. Now you have to think about people subclassing your classes.


As an iOS developer for years, I have never resorted to a runtime hack, subclass and re-implement, or similar trick to work around a framework bug.

Our team rarely shipped a new version of the app concurrent with a .0 release of iOS, so that might be related, but we always found ways to work around issues while respecting the APIs provided.

I understand other products and other developers have had a different experience, but I'm not overly concerned about this particular change.


OK, but: As a (Mac) OS X developer, in 2003 I implemented a custom NSTextView subclass to fix two specific bugs that were impossible to work around otherwise. That subclass was used in everything we shipped for years and years... on OS X 10.3, 10.4, 10.5, 10.6, and 10.7.

(After that I lost track, but I think one or both bugs were finally fixed.)

I feel like maybe this change will make Apple frameworks more stable in the long run, but that will take a tech eternity (10+ years).

In the meantime, the overall user experience will be degraded by system framework bugs that can no longer be worked around. It just sounds more aspirational than realistic.

"Let's make it impossible for developers to work around our bugs -- that will force us to write perfect software!"


Apple has been pushing composition over inheritance for years. No surprises.


I'm not sure, so I'm asking: is your comment ironic? I see inheritance everywhere in UIKit.



