Generally agree with the article, but of course I have my disagreements. Specifically:
> I consider Auto Layout one of the top 5, maybe top 3 technologies to ever come out of Apple. It’s beautiful, easy to work with and there’s nothing I can’t do with it.
It's not beautiful, nor easy to work with, nor does it work for every scenario.
I am going to say something heretical: I have given up on auto layout and I find "manual" layout much easier. But that's because I rely on "one cool trick"[1]: Do not lay out views using an x-y cursor. Instead, slice up the bounds of the view being laid out using `CGRect.divided(atDistance:from:)`... So simple. If you're using manual layout but not using it, I urge you to give it a shot; it will change your life.
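For anyone who hasn't seen it, a minimal sketch of the slicing approach (the bar sizes here are made up; each call splits a rect into a slice and a remainder):

```swift
import Foundation

// Slice a view's bounds into a top bar, a bottom bar, and the content
// in between, instead of tracking an x-y cursor by hand.
let bounds = CGRect(x: 0, y: 0, width: 320, height: 480)

// divided(atDistance:from:) returns (slice, remainder).
let (topBar, belowTop) = bounds.divided(atDistance: 44, from: .minYEdge)
let (bottomBar, content) = belowTop.divided(atDistance: 49, from: .maxYEdge)

// In layoutSubviews you would then just assign:
//   topBarView.frame = topBar
//   bottomBarView.frame = bottomBar
//   contentView.frame = content
```

Each slice is consumed from the remainder, so the code reads top-to-bottom like the layout itself.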
Of course maybe everyone does it that way and I'm the only person who thinks it's a life-changer. :)
[1] Side note: I learned this trick from a technical interview at EA games years ago... The best interviews are those where you leave knowing more than when you went in, even if you don't get the job.
There’s one thing you cannot do with manual layout, and that is composability. If you write a custom container but want the contents of the container to provide sizing information upwards, you can do that pretty easily with AutoLayout. Constraints propagate information automatically in both directions. You can simulate something similar with sizeToFit/sizeThatFits, but you’ll find out pretty soon that your components are not as generic as you thought they were. E.g., how do you trigger a grandparent relayout from a deeply nested child view when it no longer fits its current size? AutoLayout does that automatically.
I use the intrinsicContentSize property to send requested size info upwards (subview -> superview). Using that, you can size your views correctly during layout.
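A sketch of that approach (the `BadgeView` class and its padding values are invented for illustration):

```swift
import UIKit

// A view that reports its preferred size upward via intrinsicContentSize.
// Auto Layout treats this as a pair of optional-priority constraints,
// so superviews pick the size up automatically.
final class BadgeView: UIView {
    var text: String = "" {
        // Tell the layout system the old answer is stale;
        // ancestors re-layout on the next pass.
        didSet { invalidateIntrinsicContentSize() }
    }

    override var intrinsicContentSize: CGSize {
        let textSize = (text as NSString).size(withAttributes:
            [.font: UIFont.systemFont(ofSize: 13)])
        // Pad the text's natural size a little.
        return CGSize(width: ceil(textSize.width) + 16,
                      height: ceil(textSize.height) + 8)
    }
}
```

Calling `invalidateIntrinsicContentSize()` whenever the content changes is the part people forget; without it the superview keeps the stale size.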
Dealing with auto layout is probably my least favorite aspect of iOS dev, but I wouldn't recommend doing layout by hand. There are just too many variables to consider to write really robust manual layouts.
That said I'm really disappointed with UIStackView. When it was first announced it seemed like a great solution to a lot of common layout problems but in practice I've found it tends to cause as many problems as it solves and now I mostly just use it when I need a clean way to hide & reveal various UI elements dynamically. I'd take some variant of flexbox over all this any day.
I just use pure NSLayoutConstraints and don’t have to deal with Interface Builder. I find it pretty easy, especially if you write yourself some helper functions for it.
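For example, one such helper might look like this (the `pinEdges` name is my own invention):

```swift
import UIKit

// A small helper so constraint code reads as one line per relationship.
extension UIView {
    /// Pin all four edges to another view, with an optional uniform inset.
    func pinEdges(to other: UIView, inset: CGFloat = 0) {
        translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            topAnchor.constraint(equalTo: other.topAnchor, constant: inset),
            leadingAnchor.constraint(equalTo: other.leadingAnchor, constant: inset),
            trailingAnchor.constraint(equalTo: other.trailingAnchor, constant: -inset),
            bottomAnchor.constraint(equalTo: other.bottomAnchor, constant: -inset),
        ])
    }
}

// Usage: label.pinEdges(to: container, inset: 8)
```

A handful of helpers like this covers most everyday layouts without any third-party DSL.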
Adding or hiding elements often causes layout conflicts, because UIStackView adds a new constraint at 1000 priority. You have to work around this by bumping one of the other constraints down to 999.
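Concretely, the workaround looks something like this (view names invented):

```swift
import UIKit

let stack = UIStackView()
let banner = UIView()
stack.addArrangedSubview(banner)

let height = banner.heightAnchor.constraint(equalToConstant: 60)
// When an arranged subview is hidden, UIStackView adds its own
// height == 0 constraint at required (1000) priority. Dropping our
// constraint to 999 lets the stack view's win without triggering an
// unsatisfiable-constraints warning.
height.priority = UILayoutPriority(rawValue: 999)
height.isActive = true

banner.isHidden = true   // no conflict logged
```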
Also, recent versions of Interface Builder tend to completely fail at rendering nested stack views, even though the layouts work fine when you actually run the app.
The one place where I've found autolayout makes things much easier is dealing with variable height table cells. Manually measuring the heights of cells is a lot more work compared to using self-sizing cells and having autolayout do it for you. Especially if you want to support Dynamic Type which means that every table cell containing text potentially has a variable height.
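Opting into self-sizing cells is just two lines (in older Swift the constant was spelled `UITableViewAutomaticDimension`):

```swift
import UIKit

// Give the table an estimate, then let Auto Layout measure each cell
// from the constraints inside its contentView.
let tableView = UITableView()
tableView.estimatedRowHeight = 60
tableView.rowHeight = UITableView.automaticDimension
```

The catch is that each cell's contentView needs a complete top-to-bottom chain of constraints so the height is derivable; otherwise you get 44pt rows and console warnings.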
On the other hand, there are situations where if your views are highly dynamic then the complexity of autolayout starts to negate its benefits. Any time you need to save references to a bunch of constraints so you can adjust their constants or toggle them on/off later. Or if you have one set of constraints to start but need to replace them with other constraints at runtime as your data changes. Or if you have a lot of subviews to lay out and you need to start coordinating content hugging/compression resistance/layout priorities between them. (This is the worst because it starts to feel like you are adjusting arbitrary magic numbers to make your layout behave the way you want.) At some point you cross a threshold where your autolayout code gets absurdly complicated and you might as well have done it all manually.
Ultimately I've found it best to treat autolayout like any other tool in the box: use it when it makes things easier, but don't hesitate to rip it out and do things manually when the situation arises.
Another consideration: how your layout responds to changed text (localization). I've found that fixed layout generally works (I remember Delphi, where designing UI was a breeze), but when your labels become long, everything ends up out of place.
I guess autolayout and CSS are the pointers of UI development. For some it clicks, for others it does not.
I've enjoyed making layouts in CSS even before flexbox and grid, and I have no problems doing it using autolayout. But there are problems if you are not using AL.
Having layout guides and anchors makes it much easier compared to pre-iOS 9 days. Just don't do it in IB; it may look easier that way, but it is a pain to debug. I find myself leaning more and more on creating UI in code only, which is so much more flexible and gives more control.
Manipulating frames may seem to give more control, but I think it takes even more away, introduces unneeded complexity (missing out on layout guides, size classes, intrinsic size, easy control for compressibility/content hugging, etc.).
I wonder how many devs who used frames for layout are regretting this decision while fixing their apps for iPhone X.
I have little experience with iOS auto-layout (and iOS development in general), but auto-layout engines on every OS or framework are always hard to get into, exactly because automatic layout of an interface is a hard problem, and solving it properly requires quite a high level of abstraction. Which always seems too hard to digest for developers who are just getting into the framework and are only trying to position elements at fixed points on the screen, without giving any serious thought to automatic layout.
As a result, most of the projects that I inherited as a maintainer almost never used advanced auto-layout features, and every change in the interface required re-positioning all other elements by hand. So, auto-layout ends up right there with static typing and unit tests: technologies that are of little use to the original developers who are only trying to "get shit done", but are so important for the maintainers who end up with the mess later.
The divide trick is interesting; I've used it here and there over the years but never came to rely on it. Looks like the pattern became much more concise with Swift's tuple unpacking.
For what it's worth, I settled on "two uncool tricks": the venerable autoresize mask, which I have aliased to shorter words, and extension properties for top, left, right, bottom, and center. These are just convenience accessors over a view's (or layer's) frame, but there is one crucial design choice: when right or bottom mutates, it moves the origin (as opposed to resizing - this way left/right/top/bottom all have similar meaning).
Together these properties let me write code that states intent more directly and succinctly, e.g. for a content view and a bottom bar, `bar.bottom = view.height; content.height = bar.top;`. It's still imperative code, and order matters, but it reads decently well. Conditional insertions can be just that, etc. Lastly, it helps immensely to factor out constants into named properties, e.g. `let y_margin = 4; let text_pad = 2`, especially for maintenance.
It is essentially the x-y cursor approach without a cursor - just use the relative properties of the layout built thus far to position the next item.
As for autoresize, the dumb trick worth mentioning is to initialize autoresized root views with a weird size, e.g. 99x99. Then if something is not configured right you will see the telltale square.
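To make the design choice concrete, here's a minimal version of those accessors, shown on CGRect so it runs anywhere; in practice they'd wrap a view's or layer's frame:

```swift
import Foundation

// Convenience accessors over a rect. Key design choice: setting
// `right` or `bottom` moves the origin rather than resizing, so all
// four edge properties have the same "position this edge" meaning.
extension CGRect {
    var left: CGFloat {
        get { return minX }
        set { origin.x = newValue }
    }
    var top: CGFloat {
        get { return minY }
        set { origin.y = newValue }
    }
    var right: CGFloat {
        get { return maxX }
        set { origin.x = newValue - width }   // move, don't resize
    }
    var bottom: CGFloat {
        get { return maxY }
        set { origin.y = newValue - height }  // move, don't resize
    }
}
```

With these, `bar.bottom = view.height` reads as stated intent: "put the bar's bottom edge at the bottom of the view."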
I have given auto layout several serious tries, most recently in 2014 or so. I tried both Interface Builder (which I deeply dislike due to nonsense XML git conflicts) and code. I found that with a few extension properties I could write auto layout code by hand effectively, but what ruined it for me was that AppKit throws layout exceptions. In general I appreciate Apple's approach to exception handling, but to have my view crash before I could even see the layout was very unproductive. In contrast, when I make an autoresize mistake I can usually see what the problem is and correct it, rather than guessing at which combination of constraints caused an exception.
I would be curious to hear from folks if the auto layout situation has improved, and if there are tools that have helped with debugging (I used to rely on Reveal a lot).
Maybe someone with more experience on the platform can confirm or deny this. There are a bunch of things that have been standard issue in certain frameworks for decades. Then Apple "invents" the same thing, and the Apple-focused people who have not looked elsewhere start to drool over it and call it very innovative and new.
I think Auto Layout smells like one of these. UI toolkits in other niches have had dynamically sized layout engines for a long time. But some platforms, like Win32, or Mac, or NeXT, were more into absolute positioning for longer. Now Apple is picking up other ideas, and it's oh my god so amazing (assuming you've been in the Apple world the whole time).
I was one of the maintainers (but not inventors) of Apple's autolayout. It's not at all true that "Mac or NeXT" (?) views were absolutely positioned: they used dynamic positioning via autoresizing, which started with a reference layout and then declaratively described how to modify it. This already is well beyond simple absolute layouts and is sufficient for many cases.
Autolayout takes this a step further by adding constraints and priorities for breaking them. This allows you to declaratively and naturally express tricky layouts like aspect ratios or preferred minimum sizes ("I'll have to truncate at...") which are typically special-cased in other layout engines.
Tooling around this stuff is challenging, but the architecture is leagues beyond the simple dynamic layout engines in other UI toolkits. iOS can't fully realize it (no resizable windows) but on the Mac it shines.
I guess I must have imagined a decade or more of seeing absolute positioning in objc code.
I will re-state it. What would cause people to say that Auto Layout is one of the top achievements Apple has ever produced? Unfamiliarity with how other people have approached the problem is high on my list.
As usual? Apple has done a lot of weird stuff when it comes to claiming to have invented things. Need I remind you that the CUPS source code and documentation say (c) 1997 Apple? True, they bought the copyright, but they bought it 10 years later and erased all evidence that it ever existed outside of Apple...
Second, I wasn't claiming Apple made any claims about inventing layout; it was more that the third-party Apple community acts like it did. Kind of like how they act like Apple invented threads and concurrent queues with GCD. I heard a prominent Apple fan claim that Apple invented localization to RTL languages, when the reality is this was a topic they neglected for a very long time while others were doing it. I hear and read this kind of crap all the time. My point is that Apple fans live in a bubble and tend to credit Apple for things that others have done for decades.
While the article has some good points about not over-engineering unless you need it, I’m really put off by the pissed-off and dismissive attitude, e.g. “here’s another wannabe who never bothered to learn”, “This pisses me off“, “I’ve spent 15 years”, etc.
This shows the author doesn’t really understand why you might decide to use an architecture that departs from the Apple MVC orthodoxy, and despite “years of experience” has never worked on a large, multi-platform project that tried to avoid first-party tool lock-in, or actually implemented in-depth testing beyond the anemic Xcode support.
The truth is that Apple mostly cares about providing a good UI SDK with general platform services, but is agnostic about how any one app is architected. It makes sense they will preach the simplest and most general pattern: MVC.
Or how about not pissing on other people’s valid ideas because you are offended and want to show off your own contrarian cleverness?
> actually implemented in-depth testing beyond the anemic xcode support
I think this is the key motivator for people looking at alternative architectures. My guess is most people wouldn't stray very far from basic UIKIt/MVC if the established practices weren't so hard to unit test.
I see the experimentation with alternative approaches as the iOS community going through a phase of maturity, and that is a positive signal. Many people in the community care deeply about code quality and testability while striving for simplicity. The increase in iOS architecture-related articles is an indication of unrest and an active search for better practices. I suspect that it is only through these experiments that we will discover the balance we are looking for.
Similarly, I think articles like the OP are a healthy resistance against the pendulum swinging too far the other way. Although I agree with several points made I'm disappointed that testability isn't even mentioned and thus many points ring hollow.
I work on a 90k-line application with many views written in VIPER. It’s awful. It’s an arbitrary collection of classes that reduces maintainability, readability and ease of debugging.
This is really sound advice that took me years to appreciate. Just like the "Roll Safe" meme, it always comes back to me in a similar expression, like this line from a now 8-year-old blog post on MSDN: "Source code is a liability, not an asset!" (https://blogs.msdn.microsoft.com/elee/2009/03/11/source-code...)
VIPER is terrible except for maybe 1% of projects - and those projects I envision having thousands of developers, dealing with legacy code, and fragile infrastructure.
I've seen companies actively avoid auto-layout and storyboards because "it messes up during merges", so then they turn to ridiculous code-based layout which makes it impossible for designers to work with. However all those storyboard format issues were resolved back in Xcode 6 (or maybe even Xcode 5). I think Xcode 7 introduced storyboard references which allowed devs to work on decoupled storyboards without any possibility of conflicts.
Anyway what I've landed on for production apps is something closer to MVVM: Using Swift's protocol extensions for dependency injection without any frameworks, and having a view interface over the controller for testing business logic. There are other small tweaks, but for the most part, no crazy frameworks, no fighting the system.
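For illustration, a bare-bones version of that protocol-extension DI style (all names here are invented):

```swift
import Foundation

// The dependency, behind a protocol so tests can substitute a fake.
protocol UserService {
    func fetchUserName(id: Int) -> String
}

struct LiveUserService: UserService {
    func fetchUserName(id: Int) -> String {
        return "user-\(id)" // imagine a network call here
    }
}

// "Injection" via a protocol with a defaulted requirement: conformers
// get the live service for free; tests override the property instead.
protocol UserServiceClient {
    var userService: UserService { get }
}

extension UserServiceClient {
    var userService: UserService { return LiveUserService() }
}

// A view model consuming the dependency without knowing which
// implementation it gets.
final class ProfileViewModel: UserServiceClient {
    func title(for id: Int) -> String {
        return "Profile of \(userService.fetchUserName(id: id))"
    }
}
```

In a test target you'd declare a conformer with `let userService: UserService = StubService()` and exercise the same logic against the stub; no container framework needed.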
I don't agree with the idea that you HAVE to use MVC. You should try out some of these ideas if you have the time/resources and see what works, what doesn't. Make the call based on team/time to market/skills/performance requirements/reliability requirements/etc.
If I'm writing a quick prototype app i'm going M(assive)VC all the way.
MVP sans the third-party frameworks works pretty well and is easy to read, maintain and test. I prefer it to MVC, for sure, but if you really wanted to... you could still write lightweight view controllers using a lot of abstraction (and don’t forget to inject your dependencies). I have nothing against VIPER, et al, but I think the author makes some really good points on complexity and the inability to understand your system when you have to rely on some of these architectural patterns.
Very interesting to read this from the other side of the pond (Android dev).
Android has a bit of a different architecture history: at first, the framework team built it as modular components (IIRC you were not even supposed to be able to subclass Application) with no recommendation on how to architect your app code outside of that.
It led to MVC with all the business logic in massive activities, at least for most companies.
We now have the same jungle of really abstracted patterns, with some of them actively fighting against the framework by rewriting it. To be fair, parts of the framework like fragments have been a disappointment (and, again to be fair, Google is working hard at addressing this).
I just want the minimum amount of abstraction that gives me the advantages I want (separation of concerns, testability) and not rewrite everything from scratch.
Yes, I've noticed this a lot with Android projects too. Recently I looked at a project that openly bragged about using: RxJava 2/AutoDispose, Conductor, AutoValue, Glide, ThreetenABP, Inspector, PSync, Chuck, Scalpel, Stetho, Room, Firebase, Butter Knife
Look, not all of those things are bad but jesus fucking christ, what happened to exercising restraint and good judgement when introducing dependencies into your projects? Dependencies are a LIABILITY, and for something to be introduced it has to be worth it, not just provide some minor syntactical convenience (like Butter Knife, etc.)
+1. Butter Knife doesn't reduce lines of code, introduces new syntax to learn, and carries additional risk (e.g. it has introduced unexpected issues). I believe some of the framework+library bloat is intellectual signaling.
I have just arrived in a new company where they use BK.
I have always been doubtful of its benefits but after giving it a fair chance, that's still the case :/
It removes a tiny bit of code, which does not seem that helpful to me. I still have to add the id in the Java code, and I would have to count to know how many characters I save that way. Not many. And now I need to worry about how the Butter Knife magic happens, to be sure I don't leak memory that way.
I am used to data binding, where I initially had the same reaction, but after some use it lets you save a lot of time with two-way binding and binding adapters while staying at a sane level of abstraction IMO (it just writes the boilerplate code for you).
Exactly what I was thinking when I read that. Original Android didn't go far enough, then RxJava and clean architecture went way too far. Fortunately the new architecture components like LiveData and ViewModel coupled with kotlin (especially coroutines, wow where have those been all my life) are simple to use and understand while providing the right level of abstraction. Android development has become really nice lately.
FWIW, my experience is almost exactly the opposite. There’s not enough architecture in the iOS world. The norm is to stitch together the app without a clear notion of the state or its transitions; everything is hidden in a giant tangle of view controllers, a lot of the state is crammed into the app delegate singleton, everything calls everything else via singletons, and good luck finding the UI flow in a web of storyboards where arguments are passed in prepareForSegue. It’s a huge mess.
Talking about target-action or delegation in this context is IMHO a misunderstanding. These are very nice and valid solutions, but in my experience they are simply not enough to build a structured non-trivial app with. They are the basic building blocks, but there has to be something above them.
Of course, it’s also a mistake to go overboard with structure, drowning the important bits in architecture boilerplate. Architecture cannot and must not be followed blindly, as a cooking book. You have to aim for a reasonable compromise, a point of diminishing returns.
It was my experience as well, but over time things "click" and your VCs are suddenly small, there is almost nothing in AppDelegate and the app architecture becomes obvious. I can't explain how to reach that state of mind, though.
iOS is relatively young. It attracts a lot of new programmers. For years iOS suffered from lack of ideas around architecture and solid programming practices.
In the last few years I've seen this start to change and now it seems a bit like "ARCHITECTURE ALL THE THINGS!!!". We're in a period of trying out new ideas and figuring out what works and what doesn't.
Give it a few more years and these ideas will start to mature.
I see this quite a bit in projects where devs haven't taken the time to understand the SDK and the tools/patterns provided by Apple. The biggest blunder I see quite often is abandoning the delegate pattern in favor of an event/subscription system.
I see this everywhere. I think this captures the difference seniority makes in a coder. New coders always want to write everything themselves. Senior coders want to write nothing unless they have to, and to remain dependency-free: learn the standard vim/Xcode/OS keybindings, etc., because at the end of the day, the more unique something is, the less chance it can be immediately useful. This works the other way too, as a defence against homogeneity with the large pool of other businesses/coders (but that one is about marketing, not writing code). Great artists steal, et al.
Article is right on! Read my mind. I've been a developer for 20+ years and an iOS developer since developers could write iOS apps. Mac apps before that. I can't tell you how many hours I have had to spend fixing "clever code" and clever architecture.
It is so easy to jump into an app that follow the usual iOS design patterns. As soon as I see NSNotificationCenter or Event bus garbage going on, I know the app is a design/maintenance nightmare.
Keep it simple, following patterns like MVC, delegates and protocols. Keep your view code, business code and controller code separate. That is 90% of the issues.
OP brings some great points (I've had a lot of trouble with an inherited code-based UI in Swift), but I was dying to see code examples of the 'correct way' to use UIKit and its patterns, particularly Dependency Injection, as I've yet to find a solution that I'm comfortable with. Can anyone link a good example?
I don't have a link to a good example, but I have used https://github.com/Swinject/Swinject in the past and it really helps get the job done. You can overdo it a bit though, as with anything.
One big problem with using interface builder to build UI is code review & working on UI in parallel.
This isn’t an issue in smaller apps but if you work on a larger team (5+) it becomes a real pain.
I really wish that Apple had taken the approach that Google did with using xml (or something humanly readable) to describe the UIs generated by visual tools.
Take iterating and filtering as a prominent example. For some, the functional approach highlights the operations themselves and avoids boilerplate. For others, the indentation of traditional loops and conditions alone is worth that.
Usually there's a certain language culture attached to that. A Perl programmer sees "map sort map" and knows what's going on, everyone else is just going "Schwartzian what?". A big problem with JavaScript is that it doesn't have a culture that points in a specific direction, so you're bound to encounter all kinds of approaches, and none is really "native".
Swift probably isn't as bad, but it'll hit the same cultural barriers from time to time, depending on the previous experience and general environmental conditioning of the programmers.
I wholeheartedly agree. I’ll just mention that this is especially acute in Haskell. Haskell is widely considered to be a language with a steep learning curve, but for experts and experienced ones it is easy to lose track of how advanced the code really is and how obtuse it is to a beginner. The lens library is the best example. I often write extremely concise data structure manipulation code that composes lenses, prisms, getters and traversals, sometimes using (->) other times using Indexed as the profunctor. It’s obvious to me but I’m sometimes told at code review time that it’s too hard to understand.
It's about maintainability more than anything. I want a different developer to be able to jump in an understand a code base as quickly as possible. In my experience, unless there is a lot of duplicated code, it's usually much more cost effective in the long term to repeat yourself a few times (heresy I know).
I agree with most of this article, but an issue that he does not address is the fact that Apple constantly "refreshes" the UIKit APIs - every WWDC shows the new cool way to do something or other, and their own Apps usually exploit these new things. Look at the Podcast App in iOS 11 for example - what used to be a very simple table based app now uses collection views and fancy presentation views that have multiple states. Unfortunately, App developers face pressure for their Apps to look relevant, so it is not easy to ignore these "improvements" to UIKit in ones own apps. Adopting the new UIKit stuff inevitably leads to bloat.
As a Mac and iOS dev since 2005: For long-term survival and maintenance of your software, XIB and autolayout are slow toxins, and Swift is a swift-acting toxin. You won't be able to recompile your Swift code in a year. Or worse, it'll compile but be semantically different.
You won't be able to edit a XIB without fixing AL issues in 1-2 years, and the XIB will be unreadable in 5-10.
The part of Xcode that used to be Interface Builder is still useful for prototyping, but don't ship that!
Write complex UI in code. Autolayout strings in code aren't as bad as the obscure UI, but one can just as easily compute and set frames.
Swift is slowly moving towards binary and source stability. Storyboards are more capable than they used to be, and they're literally XML. How can you make that unreadable?
Swift devs keep saying it'll get less fragile, then they rewrite "exceptions", or how strings work, or anything else and all your code has to be fixed line-by-line. I do not believe it'll be stable this decade.
Xcode stops supporting old features in XIB, or a class has a deprecated and then removed property, and the deserialization fails. XIB is not XML with a nice semantic DTD, it's an object serialization graph that happens to use XML.
Nah, I switched to Swift at v1.0, the transitions have been relatively easy. Right now our app has 90k lines of code so Swift 3 was a day of work, but mostly automated.
Swift 4 is automatic. The trick to making the transitions easy is to not abuse the language with too-clever code. Write clear, straightforward and easily maintainable code and it will rarely be broken.
Ask Apple, who hide it from you in Xcode and riddle it with human-unreadable generated IDs and noise elements, in addition to needless reshuffling of elements after editing which overcomplicate your diffs and obscure the meaning of changes. Interface Builder in all its forms is indefensibly bad in execution.
C++ and Objective C are like walking around with loaded shotguns aimed at your feet. It’s not a question of if you’ll blow your toes off but when.
I’ve written my highest quality apps in Swift. They virtually never crash. And updating my code once a year has been manageable, if not trivial most times.
I spent 20 years in the Objective C world, and wrote my iOS apps in Objective C for nearly 5 years. If your apps virtually never crash you are an outlier, and you are working far too hard at it.
Swift's strong typing and optionals make it easy, fast and natural to write safe code, stuff I used to use masses of libraries to help me do in Objective C.
Tell me more about being an iOS developer since 2005.
Swift and autolayout are a godsend. Couldn't care less about storyboards.
No, you cannot easily compute and set frames for all the cases covered by autolayout. You can compute, but not easily.
There's a computer called a Mac, and it's run OS X since 1999-2000, which was NeXTstep back to the 1980s, which is where Objective-C & Cocoa (NS = Next Step) come from.
I'm certainly better than average at geometry & visualizing a grid, but that's a skill most people can learn.
Those who do not learn history are doomed to be the butt of Santayana jokes.
Objective-C code from 30 years ago still compiles and works; many of the NS* APIs still used were stable by then. The tech whims of Kids Today™ don't tell you anything about long-term survival, because they haven't been alive as long as some of those codebases.
In 5 years, everyone doing Swift is either going to be rewriting their app for the 5th time, or switching to something less like quicksand.
>Objective-C code from 30 years ago still compiles and works; many of the NS* APIs still used were stable by then.
With the caveat of a huge pile of AppKit warnings and deprecations that will need to be fixed, and probably a number of Obj-C warnings stemming from llvm/clang being smarter than the gcc of 30 years ago.
This was my experience reviving an OS X 10.1-era project. It compiled and worked, but its warning+deprecation count was so ridiculously high for the project's size that the effort involved in fixing it all was scarcely (if at all) better than migrating to a major new version of Swift, and honestly it probably would've been better for a lot of those warnings to have been errors, given how long the involved APIs had been deprecated.
I've also migrated a moderately complex app from Swift 3.2 to Swift 4. The transition was painless and amounted to little more than a couple dozen lines changed across the project. Most of it didn't change at all. There's no way that Swift users are going to be rewriting their apps once per year, especially if the UI and logic in these projects are reasonably well separated.
It's like, the opposite of what you wrote. Swift apps are much higher quality than Objective C apps today. I know because I write large commercial apps for a living and wrote in Objective C and object oriented C for 20 years.
There are too many ways to blow up your app from C. A disciplined developer who uses safe libraries/classes to shield themselves from dealing with naked pointers can succeed at making an Objective C app reasonably stable, but it's really hard.
Swift was designed to protect you from wild pointers. All you have to do is NEVER FORCE UNWRAP and 90% of the typical crashers can't happen. Our current app is 90,000 lines of Swift. It's been relatively easy to maintain and update with Swift versions, and it almost never crashes. Our crash rate is in the top 10% of Social Media apps.
You don't have to write clever code in Swift. Just use the strong typing and optionals, the language is succinct and powerful and you can be very productive (outside of compile times). Swift is the professional tool for building iOS apps, Objective C is fading away.
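The "never force unwrap" discipline in miniature (made-up example):

```swift
import Foundation

let ids = ["42", "oops", "7"]

// Crash-prone: Int($0)! traps the moment parsing fails.
// let numbers = ids.map { Int($0)! }

// Safe: compactMap drops the failures instead of crashing.
let numbers = ids.compactMap { Int($0) }

// Safe: guard-let makes the failure path explicit.
func describe(_ raw: String) -> String {
    guard let n = Int(raw) else { return "not a number" }
    return "value \(n)"
}
```

The compiler forces you to decide what happens when the value is missing, which is exactly the class of crash Objective-C leaves to runtime.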
Stability is also helped by stagnant functionality. Swift is only 4 years old and aims at "total domination", i.e. server, client and system dev alike. It's perfectly fine for it to keep evolving, as long as it hasn't planted a solid foot in each domain it's aiming at.
Objective-C, on the other hand, isn't good and will never be good for anything other than Apple OS development. I don't think betting on that tech today makes any sense. It enjoyed a revival thanks to iOS, and it'll probably die when that platform has completed moving to Swift.
OS X, iOS, and all new APIs consumed by Swift on those platforms are written in Obj-C or C++, never in Swift. So what's "stagnant"?
It's true that few others have adopted Objective-C, GNU & MS occasional efforts aside. Which is why I keep a number of languages in practice.
But fragile, power-hungry, slow-compiling Swift isn't going to beat out stable Java, or easy JS or PHP on servers, and outside iOS/OS X, building desktop software needs UI toolkit support and stability.
Which is my point: Building in Swift is total technical debt.
I guess it all comes down to whether you believe Swift is going to get better with time and solve its current issues (the ones you mention), or whether it's inherently a bad design. I'm betting on compilation speed and fragility improving a lot in the next two years. What matters to me is that the language's type system is among the best (enums and optionals), and the general feeling when coding in the language is a real joy.
For Objective-C, however, I've never been able to say anything better than "I got used to it."
It focuses specifically on the model layer (which I think most iOS devs don't spend enough time on). But, like OP, it also advocates staying extremely close to the original MVC design. Free to use and contribute to.
I could understand the frustration working on a smaller project solo or with a few other developers. MVC is probably the best approach to get an MVP. But when you work on an app with 100+ other developers and need to support millions of customers, it's nice to have some more structured and verbose architecture.
Target-Action: A button needs to send a click action to a target handler.
Dependency Injection: A controller requires a service (e.g. access to some API) to work, so it's injected into the instance typically by the constructor. Think hierarchy or dependency graph.
Delegate: A table view draws cells in a table but it doesn't know how to draw them, so it delegates this to something else, typically the containing view controller. Data Sources are a similar concept but instead of asking the receiver to do something it asks the receiver a question about the data it's supposed to model. For example the table view would ask how many sections and rows do I have.
MVC: Model (data), View (visual representation of data), Controller (controlling flow of data to and from its visual representation).
These are pretty standard design patterns across most GUI frameworks. For instance Java Swing apps and Android apps also are commonly built with an MVC architecture. All four of these design patterns are not iOS specific, but they are design patterns the Cocoa Touch framework was made to be programmed against.
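Boiled down to plain Swift, with a toy `List` standing in for UITableView, the delegate and data-source flavors look like this:

```swift
import Foundation

// Data source: the list asks its source questions about the data.
protocol ListDataSource: AnyObject {
    func numberOfRows() -> Int
    func title(forRow row: Int) -> String
}

// Delegate: the list hands events back for someone else to handle.
protocol ListDelegate: AnyObject {
    func didSelectRow(_ row: Int)
}

final class List {
    weak var dataSource: ListDataSource?  // weak to avoid retain cycles
    weak var delegate: ListDelegate?

    func render() -> [String] {
        guard let source = dataSource else { return [] }
        return (0..<source.numberOfRows()).map { source.title(forRow: $0) }
    }

    func select(_ row: Int) {
        delegate?.didSelectRow(row)
    }
}

// Typically the containing view controller plays both roles.
final class Controller: ListDataSource, ListDelegate {
    let model = ["a", "b", "c"]
    var selected: Int?

    func numberOfRows() -> Int { return model.count }
    func title(forRow row: Int) -> String { return model[row] }
    func didSelectRow(_ row: Int) { selected = row }
}
```

Note the list knows nothing about the controller's type; the two protocols are the entire contract, which is what keeps the pattern composable.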
How many times have we seen this sort of nonsense? Facebook tried to push their observer-pattern Redux on the basis that MVC is bad, and the presenter displayed something that was not MVC. The article in this topic mentions exactly this issue; I guess people this day and age do not know what MVC is.
Except you can’t pull a well-typed object out of the notification. And have to use a string constant to subscribe. And if you happen to want to filter the notification stream or even gasp map over it, you’re on your own. The amount of boilerplate and subtle bugs around NSNotificationCenter is really not nice.