Personally I think it's an awful idea to use meta-programming without an exceptionally good reason in production code. Like excessive use of function pointers in C. Just because you can doesn't mean you should. Programming in this style causes more code complexity and will accelerate the rate at which a codebase becomes a mess, not to mention it's really slow.
I think the real problem is dynamic metaprogramming. Things like method_missing and the like can be very hard to reason about. However, static metaprogramming (i.e. code generation) can dramatically reduce maintenance costs and yield a fantastic regression-testing mechanism for free: just check in the generated code!
Whichever kind you do, it just has to be optimized for reading the code, not be a leaky abstraction, and stay out of the way of debugging. Static meta programming can be bad in all these ways as well.
I've found that static metaprogramming produces an odd middle-ground. The intermediate code can be wonderfully legible and this lets you quickly find bugs... but when it comes time to fix the bug you have to go into the original generating code and then you find an absolute horror.
Visual Studio doesn't exactly treat T4 as a first-class language, which is a shame. Debugging it is much harder than it should be. I've found that it's frankly easier just to write your code generation in C# itself as an extra build phase.
New developers (less than 10 years professional experience) seem to become fascinated by "cool" language features that let them do unintuitive things and then approach problem solving with a mindset of "What cool tricks can I do to solve this?"
Instead, the experienced developer will always approach a problem with "What is the simplest way to solve this problem?"
This is how they become expert programmers in 10 years. Of course they will produce a couple of brainfarts and quite a few over-engineered or just-too-clever solutions along the way, but this is how you learn.
The alternative is to just write the same conservative solution for ten years. Then you are just as mediocre a programmer after ten years as when you started.
Sometimes the simplest way is to use a "cool" language feature.
For instance: say you have an app with 30 different ViewControllers, and you need them all to respond to the same notification. Sure, you could go back and refactor every one to subclass a common class that inherits from UIViewController - but then you need to make one for UITableViewController and UICollectionViewController too, and potentially for any other view controller class that comes along if your codebase ends up lasting for years.
Or, you could make one class category that method swizzles the viewDidAppear method to add the notification handling in immediately. Every class that inherits from UIViewController will now respond to that notification.
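In code, that's roughly the following (the category name, notification name, and handler are made up; a quick sketch, not battle-tested - a real version should also unregister the observer, e.g. in a swizzled viewDidDisappear:, so it isn't added repeatedly):

    #import <UIKit/UIKit.h>
    #import <objc/runtime.h>

    @implementation UIViewController (BGNotificationSwizzle)

    + (void)load {
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            Method original = class_getInstanceMethod(self, @selector(viewDidAppear:));
            Method swizzled = class_getInstanceMethod(self, @selector(bg_viewDidAppear:));
            method_exchangeImplementations(original, swizzled);
        });
    }

    - (void)bg_viewDidAppear:(BOOL)animated {
        [self bg_viewDidAppear:animated]; // implementations are swapped, so this calls the original
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(bg_handleNotification:)
                                                     name:@"BGInterestingNotification"
                                                   object:nil];
    }

    - (void)bg_handleNotification:(NSNotification *)note {
        // Every UIViewController subclass (table, collection, whatever) lands here.
    }

    @end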
I think a lot could be said here for maintainability. As others have mentioned, it really goes a long way to take the time and refactor the class to inherit the extra/common functionality if you are working on a team or working on a project that you know will live a very long time. Swizzling will certainly work, but at what cost to readability, debugging and reuse? That's normally the question I ask before swizzling or doing fancy, dynamic things.
Sometimes the quickest or even the most elegant solution isn't necessarily the "best" one. Best being a subjective term, I would say it depends on what you need from your code over time and with whom.
> New developers (less than 10 years professional experience) seem to become fascinated by "cool" language features
The classic analogy is with young musicians. Just because you can put a fancy ornament in or play real fast doesn't necessarily mean you should. Sometimes it's best to play something simple, so long as it's just the right something simple.
Yeah, but what sucks is when you object for the reasons you describe, but your objections are interpreted as you being out-of-touch, afraid of new things, etc. When really it's just a distaste for complexity and magic.
Well, Objective-C isn't that simple to begin with, but once you are used to it, it really provides elegant and very simple solutions to moderately complex stuff like key-value observing and coding. Combined, they are called bindings, where your UI is key-value coded and your model is key-value observed. Yes, Objective-C (and Cocoa) are very strong on design patterns, which gives you a whole lot of leverage. It works and is debuggable too! :) I love it, and find it to be the most hot-dirty-sexy language I have ever used.
First you have to master the tool, or even abuse it, then you get experienced. Or, you refuse to try those advanced language features and never get experienced.
> Instead, the experienced developer will always approach a problem with "What is the simplest way to solve this problem?"
And what would you call my mindset, which can be summarized as "what cool features can I use to express the problem and its solution in the simplest way"?
Remember, copy-pasting is also a very simple solution to many common problems.
A sensible compromise to me is that runtime meta-programming should be used to accomplish something sufficiently generic that it can be confined to a library.
Said library can then document (and test) the interface and implementation sufficiently to add a minimal amount of complexity to the application code.
The big red flag is using this stuff in application code to hack around bad design decisions or save on a small amount of typing (there's also no barrier to its expanding and engulfing the entire codebase).
> Personally I think it's an awful idea to use meta-programming without an exceptionally good reason in production code
One of the less well known rules of thumb from extreme programming was to have just 5 to 7 "things you have to know" to write good code for a system. Only up to two of those things should be notably abstruse or automagically implicit. Preferably, it should be zero, and any fancy tricks should be transparent for most coding.
Really, this just follows from optimizing code for reading.
Wow, major hate on the metaprogramming in this post today. Note to self: I'm not working for any of you. K? k.
Anyway. The need for metaprogramming is like a lesser version of the need for object-oriented programming. You never strictly need OOP. And you can totally go overboard with the Abstract Factory Factory, and make your code insanely obnoxious to follow.
But it can help, and it can specifically help in the situations when it simplifies more than it complicates -- in the situations where it's so simple you barely notice it. (Describing object attributes for your favorite ORM is a case that comes to mind.) If you're set in your ways and you've already made up your mind to eschew it always, then sooner or later you're going to end up with something more complicated than it should be instead of simpler, and it's just an empty piety. :P
One thing that confuses me is the need for some people to read emotion into nearly everything. This is especially prevalent within the group of people who tend to prefer Ruby, JavaScript, and languages like those.
When I read soup10's comment, or some of the other comments here expressing a similar take on the matter, I don't see "hatred" involved.
In fact, I see a clear lack of emotion. In place of emotion is a pragmatic and analytical point of view, where the benefits are weighed against the drawbacks, and a conclusion is drawn.
Emotion doesn't really play a role at all in such analysis. It strikes me as odd to see it suggested that emotion is involved, when it pretty obviously isn't.
I've never heard anyone describe function pointers as meta-programming before.
Functions are data, just like other kinds of data. I use them in C whenever what I want to parameterize is behavior, and not, say, an integer or a string.
I believe that if you explicitly avoid function pointers and instead use a less appropriate construct, the result becomes a mess instead. If you replace the function pointer with an enum, you've tightly coupled the type definition of the parameter with all its users, and you've added a switch statement on the enum rather than a function pointer call. It would be both messier and probably slower than the function pointer.
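A toy illustration of the two shapes (all names made up):

    /* Parameterizing behavior with a function pointer keeps the caller
       decoupled from the set of strategies: */
    typedef int (*better_fn)(int candidate, int best);

    static int pick(const int *xs, int n, better_fn better) {
        int best = xs[0];
        for (int i = 1; i < n; i++)
            if (better(xs[i], best)) best = xs[i];
        return best;
    }

    /* The enum-plus-switch version couples every caller to one shared type,
       and every new strategy means editing this switch: */
    typedef enum { PICK_MAX, PICK_MIN } pick_kind;

    static int pick_enum(const int *xs, int n, pick_kind kind) {
        int best = xs[0];
        for (int i = 1; i < n; i++) {
            switch (kind) {
            case PICK_MAX: if (xs[i] > best) best = xs[i]; break;
            case PICK_MIN: if (xs[i] < best) best = xs[i]; break;
            }
        }
        return best;
    }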
With all the many languages I've used in my life, ObjC is actually my favorite to work with, especially with the modern runtime and features. I first used it in the 90's with NeXT WebObjects and was pleased to see it become popular again.
Be careful with this kind of thing. You're inadvertently making the FP community look bad. A more constructive and less pushy approach would have been to recommend an enjoyable FP language and explain why you think it suits the parent.
Lots of functional languages are just as warty as Objective-C, said the Smalltalker of over 10 years who also does Objective-C and is currently using Clojure.
I personally prefer Obj-C over almost all the FP languages I've tried: Clojure and Scala being the two big ones, but I've also spent a little time with Haskell and Erlang. I don't hate them, but I feel like I'm being handicapped instead of helped. Scala was my favorite since I could do either OO or FP. They are all just tools, and I shouldn't be limited.
They're similar enough that MacRuby http://macruby.org/ is actually built on top of the Objective-C runtime (I'm surprised that the article doesn't mention that).
Yes, Ruby has a strong Smalltalk foundation (messages & runtime). Thus it makes sense to compare the roots of both languages in the way they interact with messages & the runtime.
I know that may not seem like a big deal but many major applications people care about seem to have first appeared on Mac OS / Mac OS X (yes, these are two very different things).
BTW: namespaces?! Not exactly my go to checklist feature for languages.
Fine, they're different things. My point is that seemingly no one uses Objective C except to make proprietary Apple things. Which is fine, but I always find it annoying when people sing the praises of Objective C but avoid using it for web servers, cli applications... or anywhere else Objective C doesn't have an institutional advantage.
I think what people really like about Objective C is not Objective C but Cocoa.
It is called a vendor providing an API for a platform with useful features implemented, so that one can focus on building solutions to one's problems and not reinvent the wheel. Watch out, the way you said it makes you sound like a free software fanatic/proprietary system hater.
I would seriously consider Obj-C on non-OSX systems if it had a better library available (GNUstep doesn't cut it), but that's a chicken-and-egg problem.
It has nothing to do with lock-in. There's nothing stopping people from using Objective C to serve web apps, it's just that no one does it because Objective C is not a pleasant language to work in. People do choose to use Microsoft languages for things other than Windows apps.
Objective-C really is pleasant to use for GUIs. As I noted elsewhere in this thread, I offload everything I can into C++, but between Objective-C and IB, I really enjoy building GUIs in OS X.
And Mac OS applications. You know, the second-biggest desktop platform.
But it's not like "where it's used" matters. That's mostly just a historical accident. If Microsoft had adopted it instead of C++ for example (which is not that outlandish), it would have been used a lot more. There was also NeXT that adopted ObjectiveC (it was developed before it), and OpenStep which also involed Sun and other players etc.
And compared to "being used for iOS apps" (e.g 1 million apps in the most lucrative mobile market", Lisp and Smalltalk are not used even 1/100 that. And Haskell even less (some "success story" here and there, eg an obscure bank, and that's mostly it). Does that make them bad languages?
And the fact that it doesn't have namespaces. Yeah, so like C. So that's important, because?
>Nope, that's exactly what I think it is.
Well, doesn't seem like you do. Or have any extended experience with the language. It's just empty snark to convey "oh, so obsolete".
If a cool library you want also has a class named NSArray, then you just refactor it by renaming on import like you just mentioned.
It's literally the exact same thing as what you're saying. Say I have one library in my project already - we'll call it BGAddressBook. And then I find another library that I really want to add that also happens to be named the exact same - BGAddressBook. So, I would refactor one of them (probably the one already in there to be BG1AddressBook or whatever). This is no different than having a library named BGAddressBook in ruby and wanting to use another library with the same namespace. You're going to have to change one or the other's namespace to work with both.
Properties don't matter and can collide with no problem. I can have a BGAddressBook with a name property and a BGSomethingElse with a name property. Doesn't matter. It'd be the same as BGAddressBook::Name and BGSomethingElse::Name.
By importing names from that namespace and aliasing them. Or by creating another namespace and importing them there.
You can't seriously argue that "NSArray" as a single string is the same thing as (hypothetical) "NS.Array", where both "NS" and "Array" and the thing at "NS.Array" are semantically different things. Don't get me wrong, I love Smalltalk and Erlang - both languages suffering from the same problem - but I can recognize a shortcoming when it bites my arm off.
I was genuinely asking in my question above. I'm all for it as long as it makes sense. I've never messed with aliasing or anything like that - I'm decently fluent in Objective-C but have only really dabbled in ruby/js.
After looking around halfway through writing this, this looks pretty nice:
I see what you're saying. Maybe it's Stockholm Syndrome, but I do like knowing associated files in 3rd party libraries easily too. I don't know, I'll experiment with this more in ruby and see how it feels.
The whole problem is that namespaces are open, and you can add to them from anywhere. This means a namespace alias is useless in this situation. There's no distinction between things put in namespace std by library A and things put there by library B - they are all equally valid members of namespace std.
All that has been accomplished by the namespace alias is that before, both libraries were accessible through "std", and now, both libraries are accessible through both "std" and "betterns".
Namespaces give you the tools to make name clashes less likely, but they don't make them impossible. Generally, to avoid clashes totally you need a central arbiter of name ownership like, say, the DNS registry, which is why naming your packages using inverted domain names was advocated by some people in the Java community.
I never asked for anything; perhaps you are confusing me with luikore.
Namespaces and modules do not prevent name clashes, at least in any language that I'm aware of. Even with a more sophisticated system like Haskell's, you're still out of luck if you have a package name clash. (And getting to the point where that's the limiting factor requires GHC extensions.)
What namespaces/modules buy you with respect to name clashes is just that they make it possible to stomach longer names that make those clashes less likely.
Are you aware of a programming language that does truly prevent name clashes through a namespace or module system? I'd be interested to hear of it.
Except that with a proper namespacing solution, you just declare that you want Foundation's Array in one spot, and in the rest of the file you just write Array. With NSArray, it's NSThis and NSThat and NSBlah and NSFoo and NSBar and NSNSNSNSNSNSNS everywhere.
How do namespaces come to be the feature to specifically call out? Is there something deeper to namespaces than a lexical (i.e. surface) convention around identifiers?
Yep. And the fact that Apple did absolutely nothing to evolve the language into something better shows that they just don't give a fuck about tools and, most importantly, about their developers! Maybe it's better than C++ or other alternatives, maybe Xcode is quite good, but this just shows that since the days of NeXT (the '80s!), they just don't give a fuck!
...hate to say it, but nowadays the only big company that seems to care about developers and actually gives them cool tools and languages is Microsoft! C# with all its extensions, F# and the cool research they do in the languages area shows one thing: they care about us!
This is crazy talk. While I'm certainly not saying that Objective-C is perfect, or that Apple shouldn't do more, to say they've done absolutely nothing to evolve the language since the 80s is flat out incorrect. Look at blocks, ARC, object-literals, etc.
Please use this stuff sparingly, though, especially things like resolveClassMethod:, as it makes for some incredibly difficult-to-follow code that can segfault with some very strange error messages. Your future team members will thank you.
I enjoyed this because I recently made the jump from Ruby to Obj-C and, though it was scary at first, I now feel incredibly comfortable working in Obj-C and think I even prefer it. So, that's cool.
I do see at the bottom that this post is one of those "inbound marketing for job candidates" things. That's cool, too... but, I gotta say, as soon as I read the sign-off, "If you want to work somewhere where...", I thought to myself, "not in a million f'ing years - that company is run by jerks!"... Now, maybe the founders are actually good people, but their public persona is just so off-putting that it totally undermines an otherwise well-done marketing blog post. I'm curious if that's just me or if they generally have trouble recruiting.
When I first started toying around with Objective-C and got past the syntactic differences with Ruby, I was struck by how similar Ruby and Objective-C are, and concluded that they were clearly inspired by a common source (Smalltalk) or by one another or both. Although this kind of thing is no doubt mentioned in the Wikipedia articles, it was interesting to observe the commonalities firsthand and in context.
On "concise syntax" - this works well when your vocabulary is large. But when you have to search for an English word that's sorta-like what you want to do, and then contort the definition to fit the algorithm, readability suffers. For example, JavaScript's Array.join() and SQL's JOIN are radically different concepts and the definitions are not interchangeable, only tangentially related in the respect that all the information being 'join'ed ends up in one [maybe larger] pile.
With respect to the article, concision looks like an artifact of standard practices among the language's users rather than any part of the language specification. One can be just as concise in ObjC (or C or Java), but the tradeoff is still readability.
I've never really thought about it before but PHP has some very similar metaprogramming features.
For starters, the incredibly dangerous runkit extension (like importing Objective-C's runtime).
There's also the Reflection API, which provides introspection and is quite widely used.
PHP also has magic methods, such as __call(), which give the ability to handle calling a method that doesn't exist.
As others have mentioned about Objective-C, these are all nice tools to know about and understand, however using them is often a code smell and can lead to unmaintainable code.
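For comparison, the closest Objective-C counterpart to __call() is the message-forwarding machinery; a minimal sketch (class name and behavior are made up):

    #import <Foundation/Foundation.h>

    @interface BGCatchAll : NSObject
    @end

    @implementation BGCatchAll

    // Supply a signature so the runtime builds an NSInvocation instead of
    // throwing "unrecognized selector". "v@:" = void return, self, _cmd.
    - (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
        NSMethodSignature *sig = [super methodSignatureForSelector:sel];
        return sig ?: [NSMethodSignature signatureWithObjCTypes:"v@:"];
    }

    // Invoked for any message the class doesn't otherwise implement,
    // much like PHP's __call().
    - (void)forwardInvocation:(NSInvocation *)invocation {
        NSLog(@"no method named %@; handling it dynamically",
              NSStringFromSelector(invocation.selector));
    }

    @end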
PHP doesn't have anything like categories, which is a shame because they look really useful in some cases (as long as you don't mind violating the SRP a little). PHP has traits, which are more similar to mixins in Ruby than to categories in Objective-C.
These features are extremely difficult to use when C types are involved. @encode() mitigates this to some extent, but there are some C types that you can't fully express using @encode(), and it can't properly handle all types (namely out-pointers).
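A quick look at what @encode() does give you (exact strings vary by platform; the NSRange one below is from a 64-bit build):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // @encode() turns a C type into the runtime's type-encoding string.
            NSLog(@"%s", @encode(int));        // "i"
            NSLog(@"%s", @encode(NSRange));    // "{_NSRange=QQ}" on 64-bit
            NSLog(@"%s", @encode(NSError **)); // "^@" - nothing here says it's an
                                               // out-parameter, which is the gap
                                               // mentioned above
        }
        return 0;
    }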
Yes. Using objc_msgSend and NSSelectorFromString, you can call methods dynamically by name in Objective-C.
Yes, it's dangerous. Clever, and not quite as easy as your favorite dynamic scripting language, but Obj-C supports it.
My favorite personal use case so far is an Objective-C state machine, where state names are strings, and state callback functions can be added to the code & called without declaration. It's not really saving much technically, but it eases just a tiny bit of mental & manual friction to be able to modify the code without having to update header files.
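A minimal sketch of that idea (the class name, the "didEnter<State>" naming convention, and the state names are all made up):

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    @interface BGMachine : NSObject
    - (void)enterState:(NSString *)name;
    @end

    @implementation BGMachine

    - (void)enterState:(NSString *)name {
        // Build a selector like "didEnterIdle" from the state name.
        SEL sel = NSSelectorFromString([@"didEnter" stringByAppendingString:name]);
        if ([self respondsToSelector:sel]) {
            // Cast objc_msgSend so the call is properly typed for the runtime.
            ((void (*)(id, SEL))objc_msgSend)(self, sel);
        }
    }

    // Adding a state is just adding a callback; no declaration in a header needed.
    - (void)didEnterIdle {
        NSLog(@"now idle");
    }

    @end

    // Usage: [[BGMachine new] enterState:@"Idle"];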
I like ObjC. It's true that what people generally use it for are the Cocoa features, but it could be a really good server language too (particularly with additions of blocks, GCD, and ARC). There are a lot of nice functional features that would make server development fun. It's pretty clear why Apple doesn't push it this way, but I think it could be a good server platform.