Fun with Swift (joearms.github.io)
220 points by Turing_Machine on Jan 4, 2016 | 109 comments



> Swift is widely marketed as a functional language

If that's true, those sources reporting that are wrong. Apple itself refers to it instead as a "protocol-oriented language" when it's not just saying "object-oriented".

Swift is definitely not a functional language in the way that term is usually meant, and Apple knows this, and any legit survey of languages wouldn't lump it in with much more functional languages. For example, this guy[0] knows what he is talking about.

> Where Swift shines is in the integration with the underlying Objective C frameworks on the Mac.

>Swift is a good replacement for contexts in which Objective-C would be used.

Except when you try to do anything with OpenGL or a C-based API, like Core Audio. Swift is really lagging for "power programming" with these low-level APIs. It's great for beginners who use all the high-level frameworks (though "great" is a subjective term I don't really agree with for Swift), but a lot of people who really need to get stuff done on Apple's platform are not going whole-hog Swift.

>If you’re coming from Erlang/Haskell world you’ll think Swift is verbose and a bit of a mess but if you’re coming from Objective-C you’ll think “Swift is concise and elegant”

I've got to be honest, I still find I prefer Objective-C, because it's essentially C. I can use my C structs and interfaces with zero problem. But in Swift, all C primitives are translated into Swift-only stuff, even integers!

And don't get me started on trying to use a C++ library with Swift, something that's smooth with Objective-C, which is ironic since Swift itself is written in C++.

[0] http://robnapier.net/swift-is-not-functional


What is a "Swift-only" integer? Swift integers are just integers. They interoperate with C just fine.

Interacting with C APIs in Swift is pretty much as convenient as using them from C or Objective-C. Which is to say, it's kind of inconvenient compared to using a good Swift API, but you don't gain a benefit from using C or Objective-C instead.

The major exceptions to this are the inability to take a pointer to something before it's initialized (for example to use a local variable for an out-parameter) and the inability to declare arrays on the stack. The former just requires some "= 0" or similar, and the latter can be worked around without much difficulty using a Swift Array or just using malloc and free, at the expense of a bit of performance.
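
For example, a quick sketch of the out-parameter dance as it looks in Swift 2, using a real C function (gettimeofday from Darwin):

    import Darwin

    // The local must exist (and be initialized) before we can pass a pointer to it.
    var now = timeval()          // the imported empty initializer zero-fills the struct
    gettimeofday(&now, nil)      // &now is bridged to an UnsafeMutablePointer<timeval>
    print(now.tv_sec)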

There is also a problem where Swift translates fixed-length arrays to tuples (because Swift has no concept of fixed-length arrays) which can be inconvenient, but this rarely comes up.
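
For example (a sketch with a made-up C struct), a uint8_t tag[4] field comes through as a four-element tuple:

    // C:             struct Header { uint8_t tag[4]; };
    // Swift import:  struct Header { var tag: (UInt8, UInt8, UInt8, UInt8) }

    var header = Header()     // hypothetical imported struct, zero-filled
    header.tag.0 = 0x7F       // elements are addressed as .0, .1, .2, .3
    let first = header.tag.0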

Maybe you're just not familiar with the facilities Swift provides for interacting with C APIs and are doing things in a more difficult way than is necessary.

It's absolutely true for C++ interfaces, of course, but Swift doesn't even try and never advertises C++ interop. For that, you have no choice but to use C++ or Objective-C++. But for plain C APIs, Swift does a great job.


I've read a number of articles like this one regarding the Integer protocol in Swift and the many different int types:

http://blog.krzyzanowskim.com/2015/03/01/swift_madness_of_ge...

I guess everyone's mileage varies, but I find it a bit strange.


The only thing strange is that the author of the post seems to be trying to make Swift's integer handling sound unnecessarily complex.

> There are 11 Integer types that can be considered as Integers

Although he's including Bit which doesn't really count (it conforms to the appropriate protocols but from the doc comment you can see it's actually just intended as the index type for CollectionOfOne).

But having a bunch of different integer types shouldn't be surprising. Every integer type Swift has corresponds to an integer type in C, e.g. Int16 is int16_t, UInt64 is uint64_t, Int is long, etc.

As for the protocols, so what? Is it a bad thing that most of the functionality of integers can be represented using a bunch of protocols? Clearly not. All the author is trying to do is make things sound complex.

> My goal here is to create generic function that will handle any integer value and make some operation on it. In this case I want to build function that get bytes and return integer value build with the given bytes array of bytes

So he wants to have a function that builds integer types of all different sizes given an array of bytes, and wants to do it generically for any integer? OK, take a pointer to your bytes, cast it to a pointer to integers of the right size, and you're done.

Oh wait.. he wants to do it without "raw memory", whatever that means? That's stupid. The rest of the article is nothing but a complicated exercise in trying to do things as hard as possible. His entire function can be written in pure Swift (without NSData) as

  func integerWithBytes<T: IntegerType>(bytes: [UInt8]) -> T? {
      guard bytes.count >= sizeof(T) else { return nil }
      return bytes.withUnsafeBufferPointer({ UnsafePointer<T>($0.baseAddress).memory })
  }
That's it. 100% pure Swift, and probably about as efficient as you can get.
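For what it's worth, a call site would look something like this (the result depends on byte order; shown for a little-endian machine):

    let bytes: [UInt8] = [0x01, 0x00, 0x00, 0x00]
    let value: UInt32? = integerWithBytes(bytes)   // Optional(1) on little-endian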


What about stuff like this, these weird casts that pop up in Swift that you don't worry about in Objective-C when dealing with C APIs:

    glBindBuffer(GLenum(GL_ARRAY_BUFFER), bufferIds[1])

    glBufferData(GLenum(GL_ARRAY_BUFFER), sizeof(Float) * tcSize, tc, GLenum(GL_STATIC_DRAW))

That just looks silly to me.


This appears to be a peculiarity of the OpenGL headers - GL_ARRAY_BUFFER is just a #define:

    #define GL_ARRAY_BUFFER 0x8892
Instead of a proper constant. Therefore, Swift interprets it as an integer, which is fairly reasonable. GLenum is:

    typedef uint32_t GLenum;
Which is not compatible with Int in Swift, thus the "cast".

If GL_ARRAY_BUFFER was:

    extern GLenum GL_ARRAY_BUFFER;
It would work as expected.


The lack of implicit integer conversions does dirty up the code a bit when using C APIs built around them, but I don't find it to be a particularly big deal.

Regarding the post in your other comment, none of that really has anything to do with C interop. C doesn't even have generics, so the whole thrust of the article (that it's hard to build a generic function that works on all integer types) doesn't apply.


Swift enums are considerably different from C/Obj-C enums, so the cast is unavoidable. Specifically, unlike C enums, Swift enums can be backed by non-integer types like Strings.
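
For example, a minimal sketch of a String-backed enum (made-up names), which has no C counterpart:

    enum Compass: String {
        case North = "N"
        case South = "S"
    }
    let heading = Compass(rawValue: "N")   // Optional(Compass.North)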

For numeric types that have a clear equivalence in C-land, Swift is seamless.


Actually, GLenum is not a C enum in the first place; it's just an integer typedef.


Interfaces which use the pattern:

  struct foo bar;
  foo_init(&bar);
Can be very annoying in Swift. It does play nicely with CoreFoundation, though, which does not use this pattern.


As of Swift 1.something, C structs get imported with an empty initializer that sets everything to zero. So that pattern can be written in Swift as:

    var bar = foo()
    foo_init(&bar)
No worse than the C version, as long as the unnecessary initialization is not a performance problem (which it usually isn't).


In your provided reference, the first sentence Rob writes:

"Ever since Swift came out, I keep seeing weird comments that Swift is a functional programming language"

Swift is often promoted as being "functional":

http://www.raywenderlich.com/114456/introduction-functional-...

In fact, I've tagged many articles as functional in my Swift resources:

http://www.h4labs.com/dev/ios/swift.html?q=functional&age=10...

Want to buy a book on functional programming in Swift?

https://www.objc.io/books/functional-swift/


Note that the objc.io book is more about building functional idioms from scratch so you can use swift in a functional way, rather than actually leveraging a strong functional core of the language. The fact remains that Swift is just not a functional language and attempts to categorize it as such are unrealistic.


Swift is a multi-paradigm language. In that Ray Wenderlich article you linked, he writes "This tutorial focuses on just one of these methodologies, FP (Functional Programming)"


In a WWDC video named something along the lines of "Value types in Swift", they explained that Apple is trying to find a middle ground between protocol-oriented and functional. Same thing with mutability and immutability. They aren't purely anything, because both ways have their purposes.


People say a lot of things. Doesn't make them right.


I actually find working with C in Swift to be far more pleasant than using C directly. The blog post by our friends at Apple on the subject - https://developer.apple.com/swift/blog/?id=6 - gives a pretty good background.


For people who don't recognize the name, the author is one of the inventors of the Erlang programming language. Erlang is famous for its approach to mutability, concurrency, and error handling.


FWIW - This is Joe Armstrong, one of the creators of Erlang. Definitely an old-school REPL/functional guy, and I love seeing him playing around with swift.

Regarding GUIs and IDEs: I think interface builder is the best ever way to build UIs, far more fun and comprehensible than web development. I totally understand why Joe wants to control every line of code and do nothing in Xcode-- when things go wrong in the IDE they can be a pain to deal with.

For me, I think there are several common stages of Mac/iOS developer:

1. Newbie - needs Interface Builder because GUIs are all so confusing and he doesn't yet know the API.

2. Control Freak - knows enough (and spent enough time struggling with Interface Builder, because IB isn't a magic box and you do have to know how things are working under the hood, and the Newbie often struggles because of this), and so he wants to avoid IB and do everything in code so he has absolute control.

3. Starting to see the light - tired of tiresome pointless repetitive GUI code, back to Interface Builder, but this time understanding the underlying parts of it, and having a lot more fun with IB.

4. Enlightened - uses Interface Builder for most stuff, but knows when and where to drop down to GUI code and generally only does that in a highly leveraged way.

One of the things I find frustrating about web development is there is no interface builder (though some people have attempted to create them in the past, they always seem to abandon them, unfortunately). So you're always stuck with the frustration of constantly defining everything by hand - when the reality is there's nothing that should stop web browsers from being able to handle auto-layout the way that iOS does.

Anyway, if you've rejected IB and are finding GUI code a PITA, try using IB in balance.


"One of the things I find frustrating about web development is there is no interface builder"

There was, and Apple killed it. It was WebObjects Builder and it was a GUI for interacting with the components you wrote in code in XCode when doing WebObjects development. It was the web version of IB, and, in its day, it was awesome.


> I think interface builder is the best ever way to build UIs

Do you have experience with other tools? Because a lot of people only know Xcode/Visual Studio (maybe Java) and think they have good GUI builders.

The best ever (and I have used A LOT of them) is the Delphi one. Very close? Visual FoxPro/Access.

The rest are sad attempts at it ;)


Your 4 stages are right on. I've been through all 4 and can say you hit the nail on the head.

And thanks for calling me "enlightened." :)


>Which to my mind is horrendous. The closure f does not capture the value of i at the time when f is defined, changing i after f was defined make nonsense of the idea of a closure.

Of course it should reflect changes to i after the closure was defined, since the closure doesn't copy i but refers to it.

Looks like a fully proper closure.

Which language does this differently?


The problem is that closures don't usually refer to mutable entities, and most functional programmers are used to an immutable language. The idea that a closure itself is mutated (its return value can change) just because something outside and separate of it mutated goes against the very heart of functional programming, and is not how functional languages behave at all.


This gets back to the point that Swift is not a functional language. By default, the closure is capturing an immutable reference - the fact that the value might change is implicit to the way you must think about programming in a non-functional language. The mechanism to capture a constant value is to declare it in a capture list:

  var f = {[i] in print("hello I'm a callback and i =", i)}
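
To make the difference concrete, a small sketch of both behaviors:

    var i = 1
    let byReference = { print("i =", i) }          // captures the variable itself
    let byValue     = { [i] in print("i =", i) }   // capture list copies the current value
    i = 2
    byReference()   // prints "i = 2"
    byValue()       // prints "i = 1"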


> By default, the closure is capturing an immutable reference

I think some imperative languages still refer to those things as closures, which is unfortunate. Capturing a closure and then having everything in there mutable kind of defeats the purpose, in my opinion. But maybe I've been damaged/spoiled by functional programming and/or have spent too much time debugging issues related to concurrency and shared mutable state.


Wait, What? Immutable closures are way less useful than mutable closures. Immutable closures can pretty much only be used to create thunks. Mutable closures are basically objects.

On the other hand, if your closure depends on global mutable state, then you have a piece of GLOBAL state that is VARIABLE. Sometimes known as the root of all evil.

But maybe I've been damaged/spoiled by scheme and/or not done enough concurrency programming to get your point.


Anytime I see someone claim that immutability is "less useful" than mutability, I know that the individual has not spent a long time working in an immutable language. Because things are tremendously easier when mutation rarely enters the picture. But, it can take some experience to grasp this.


Meh, depends on the situation. Some problems are easily solved in a functional way, some problems aren't, and then there are problems that are best solved using a mixed approach. Sometimes a good old-fashioned for loop (or the equivalent) and stateful I/O is "tremendously easier" to work with than layers of tail recursion and I/O monads. Professional developers are expected to know which tool to use when, rather than religiously sticking to one approach in all circumstances.


I have yet to encounter any problem that would be better served with an imperative for loop or nested for loop than an immutable list comprehension in Clojure, for instance.


Do you have actual science (comparative studies --plural, not some single paper--, etc) behind the assertion of "better" for the functional approach or is it just blind faith?

An empirical observation says that the most important software in the world, including most of the internet infrastructure, OSes, databases, filesystems, office suites, embedded systems and such, powering 99% of the modern era is written in an imperative or OO language.

That's a fact. Whereas "there would be less issues if we had made them in a functional language" is mere conjecture, unless proven otherwise.

Note that common errors such as null pointer exceptions are avoided not only in functional languages, but in any imperative language with optionals, bounds checking, etc. So if one is going to bring those up as something in favor of functional programming, they're not giving the full picture.


I've never met a problem where tailcalls were more awkward than loops. But then, scheme kind of pushes you toward tail-calls, so hey.


I think Swift really found the sweet spot in this regard by thinking not about mutability but about value/reference semantics.

I've found that in Swift the things that should be values are, and the things that should be references are, rather than taking an all-or-nothing approach.


Except in swift, a truly value-oriented thought process is hampered by its copy-on-write semantics, which are a far cry from immutable structures found in functional languages.


Value semantics are certainly not ideal, but they are very practical.


hellofunk: Reasoning is easier in functional languages, so fair enough.


Cocoa and Cocoa Touch were designed with imperative programming in mind though, so this is what the creators of Swift had to work with. I think the main use case for mutable captured values is mutable self, in particular the following, extremely common pattern:

    dispatch_async(my_background_queue) {
        do_some_io()
        let result = do_expensive_computation()
        dispatch_async(dispatch_get_main_queue()) {
            self.value = result
            update_ui()
        }
    }


Objective-C does it differently by default:

    int x = 0;
    void (^block)(void) = ^{ NSLog(@"%d", x); };
    x++;
    block();
This will log 0, not 1. If you want to capture by reference rather than by value, you add __block to the declaration of x.

For a lot of people coming to Swift from Objective-C, this is a surprising change, especially if Objective-C was their first language with closures.
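
For comparison, the straight Swift translation (a sketch) shows the flipped default:

    var x = 0
    let closure = { print(x) }
    x += 1
    closure()   // prints 1: the variable is captured, not a copy of its value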


That's a block, not a closure. It's still an anonymous function, but the semantics are slightly odd, and derived from Smalltalk, probably altered some.


"Blocks" is what Objective-C calls closures.

A closure is just a function that captures ("closes over") the surrounding scope.

Edit: strictly speaking, the construct here is an anonymous function. Objective-C calls them "blocks" and Swift calls them "unnamed closures" or "closure expressions." Both are, or at least can be, closures. (I'm not sure if the term "closure" can be used to refer to a function that could refer to variables in the enclosing scope but doesn't. Probably not.)


What you seem to be calling a closure refers to a lambda in a lexically scoped language. Particularly when they refer to free variables.

Blocks are in fact lambdas, or at least closures. I was wrong in the above. In some early versions of smalltalk, blocks had some odd semantics that made them different, but this is no longer true. However, judging by the Objective-C example above, Objective-C blocks don't create true closures, as a true closure would have the semantics of the Swift code from the article.


This is only surprising because of the ambiguity of value types in certain cases in Swift. If you passed it to a function and mutated it, then it would work as in your example. I'm still not sure why they made it work this way for closures. Using reference types, closures and blocks are more or less the same.


What ambiguity are you referring to? It has nothing to do with value versus reference types. Reference types have the exact same difference between the two languages. In Swift, mutable variables are captured by reference by default, and in Objective-C they're copied by default. Value versus reference doesn't make a difference. For example:

    NSView *view = [NSButton new];
    void (^block)(void) = ^{ NSLog(@"%@", view); };
    view = [NSTextField new];
    block();
In Objective-C, this will log an NSButton. Translate it to Swift and it will log an NSTextField.
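
Translated to Swift (a sketch; assumes AppKit), the same code looks like this:

    import AppKit

    var view: NSView = NSButton()
    let block = { print(view) }
    view = NSTextField()
    block()   // logs the NSTextField, because the variable itself was captured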

I don't think either behavior is more correct, it's just a fairly arbitrary choice. IMO it's surprising only if you've learned to expect one way because you worked in a language that does it that way, then encounter a language that does it the other way.


> Which language does this differently?

Languages with immutable bindings (the author is one of the original designers and implementers of Erlang)


In fact, he expands on closures in an earlier post (about Elixir):

Proper closures should only contain pointers into immutable data (which is the case in Erlang) - no pointers into mutable data. If a closure contains a pointer into mutable data and you change the data later you break the closure. This means you can’t parallelize your program and even sequential code can contain weird errors.

http://joearms.github.io/2013/05/31/a-week-with-elixir.html


That's just wrong, in most cases. Mutable closures are incredibly useful. Erlang, however doesn't actually need them, because where I (a scheme programmer) would use a closure with mutable state, Erlang programmers would use a process running a function that takes that state as an argument and runs forever, tail-calling itself to change the state. In most languages this is impractical, because most languages don't have erlang's threading semantics.


They are, however, incredibly useful in Erlang. Most languages don't ~need~ closures. The whole point is that they're useful, though.

Haskell doesn't have mutable state; it still has closures.

Mutable or immutable is somewhat orthogonal; I admit, coming from Erlang to Javascript I was -shocked- and very facepalmy when I learned that closures close over variables that can still vary. That is, I couldn't just pass back a function and expect it to behave the same way regardless of when I called it. I had to add an additional scope in place and bind the values anew (i.e., a new function, passing the values being closed over into it as params). Immutable is much easier to reason about, much easier to write with, and despite being more limited in what you can do with them, I'd argue just as useful. For everything it prevents you from doing, there is some other, safe way of achieving it; the same as comparing mutable vs immutable in other contexts.


They don't do it differently, they simply avoid the question by not letting you change any binding ever. In languages that embrace immutable bindings but don't enforce them, like scheme, closures work exactly as the parent describes.


> Of course it should change i after the closure was defined, since the closure doesn't copy but refer to that i.

Well, it should not allow it, and should make i a clone / copy-on-write thing / etc. Calling that a closure sounds broken, just like Joe points out.

> Which language does this differently?

Haskell, Elixir, OCaml, Erlang


No, those languages just don't allow you to modify state. In non-functional languages like swift, you can modify state, and closures respond to that.

What I think a lot of people fail to understand is that a closure is not a special object, with magical powers, but a direct consequence of lexical scoping: closures just follow the lexical scope rules, even if their lexical scope isn't within the current dynamic scope. If a variable within the closure's lexical scope changes, the closure won't keep the value the same. Here's an example of these semantics in C:

  #include <stdio.h>
  int global = 4;
  void oh_noes(){
    printf("%d", global);
  }
  int main(){
    global++;
    oh_noes(); //prints 5
    return 0;
  }
If somebody complained that the function oh_noes should print 4 in this example, you'd think they were insane. Closures actually work in exactly the same way.


> What I think a lot of people fail to understand is that a closure is not a special object, with magical powers, but a direct consequence of lexical scoping:

That is how it is implemented. It doesn't have to be, I have a list of example languages where it is not the case.

> What I think a lot of people fail to understand is that a closure is not a special object, with magical powers,

Maybe that's why I like functional languages, it does seem like they have magical powers ;-) there.


> That is how it is implemented. It doesn't have to be, I have a list of example languages where it is not the case.

No, you don't. Closures in those are also a direct consequence of lexical scoping. But those languages don't have mutable variables, regardless of closures.


> No, you don't.

Functions defined in modules in Erlang are different from closures. Here is an example:

   -module(e).
   -compile(export_all).
   f()->  1.
   g()-> fun ()-> 1 end.

   $ erl
   > c(e).

   > erlang:fun_info(fun e:f/0).
   [{module,e},{name,f},{arity,0},{env,[]},{type,external}]

   > erlang:fun_info(e:g()).  
   [{pid,<0.42.0>},
   {module,e},
   {new_index,0},
   {new_uniq,<<136,230,191,77,132,145,66,52,216,215,111,24,
             18,188,4,169>>},
    {index,0},
    {uniq,71775738},
    {name,'-g/0-fun-0-'},
    {arity,0},
    {env,[]},
    {type,local}]
To get info on e:f it has to become a closure-like object, but internally it is still represented differently from a closure.

> Closures in those are also a direct consequence of lexical scoping.

Not sure what you mean by a direct consequence. Are you saying that closures follow scoping rules? The point was that it doesn't necessarily follow that they have to be implemented as object instances in object-oriented languages, or function pointers, or functions (say, like in Erlang).


Well, yes, closures can be implemented however you want. In a truly lexically scoped language, every function is a closure, not just lambdas; it's just that most don't really close over anything and can be optimized away. The case of C is, again, instructive.

    //foo.c
    int global = 0;
    int quuxify(){
        return global++;
    }

    //bar.c
    #include <stdio.h>
    extern int quuxify();
    int main(){
        printf("%d", quuxify()); //prints "0"
        printf("%d", quuxify()); //prints "1"
        return 0;
    }
Even though quuxify() depends on global, which is out of scope in bar.c, the call still compiles. Why? Because C is a lexically scoped language, and so variable references in functions refer to the scope the function was declared in, NOT the one it was called in.

And you thought C didn't have closures ;-D


It's a bit weird. I don't think Joe would have an issue with mutating the closed over variable, but having the closure just be the binding rather than the value, and then being able to rebind it to something else seems brittle.

> Which language does this differently?

Rust does it this way but in a less error prone way due to its sophisticated ownership tracking. If you close over a variable in the environment, it gives the ownership to the closure and you can't then rebind it outside.


He's not saying that closures over mutable variables is a bad thing in Swift. He is saying it's a bad idea in any language, even though most languages with closures allow it.


I see how programmers can mistake mutable closure references for immutable ones, but if we imagine that this confusion is solved somehow (for example, closures copy values by default and capture references only with a special keyword), how is it bad?


This behavior is one of the reasons not to call JS a 'functional' language, IMHO.


Well then Scheme isn't one either. But then, Scheme isn't one. And JavaScript is just a language in which the functional paradigm is common. It's a multi-paradigm language. Anybody who says otherwise doesn't know what they're talking about.


I'm currently working on a project which makes me jump between Swift and Erlang/Elixir on a regular basis. I actually enjoy Swift's OCD typing, particularly in exchange for not having to maintain those mind-numbingly redundant header files any more.

All in all I feel as if Steve Jobs would have been proud of what the Swift team has accomplished - lots of "great artists steal" in the functional programming department while still being disciplined enough to not go overboard with features and staying true to Apple's design culture in general.

That said the lack of pattern matching really is something I've come to miss too and Elixir application code in general feels much more elegant and almost effortless to read and write in comparison to Swift.

Playgrounds are fun and all but IMHO Joe is very much right - all the cruft, the arcane NeXT "magic spells" as well as the mouse-heavy Xcode dependence should make way for new ways to do things. The kids "ain't stupid" - I mean, young programmers probably all grow up with some command-line fu, essentially making Unix the new lingua franca (besides Javascript;) - so why not embrace OSX/Darwin and go for something more along the lines of how RubyMotion does things?

Whipping up a simple GUI with a couple of action functions really should come with the least amount of needlessly distracting development abstractions.

It probably would also mean less Apple developer time wasted on the next potentially buggy Xcode feature..

Less is more, but again Apple is getting there, especially now that we might be on a path to more complementary open-source development tooling - so to me, doing Swift with Xcode is super productive and fun already!


I think the author should do some more reading about Swift. He complains about a lack of inferred typing and pattern matching, two things which Swift does support. It's more limited than Haskell, but it's there.

I also disagree with his statement that Objective-C is better for writing an operating system.

He seems to conflate using Xcode with the built-in support it has for xibs to do UI layout. You don't have to forego all of the features of the IDE just because you prefer to do UI layout in code.


The author is one of the creators of Erlang which is a language that takes pattern matching above and beyond what Swift does (as well as Standard ML/OCaml/etc).


Replying to myself, because I thought about this a little more. I'm someone who's very excited about Swift. But I'm also interested in learning a variety of languages (and seeing their influences on Swift feeds my excitement).

It's easy to pick on specific mistakes in the article. However, what I think really bothered me is the criticisms of Swift that didn't show an understanding of why Swift is the way it is. And perhaps that's unfair to the author - who's just writing up his experiences.

I like seeing well reasoned criticisms. I dislike value judgements that don't seem to acknowledge the trade offs involved in all software engineering.


> He complains about a lack of inferred typing and pattern matching, two things which Swift does support.

1. Swift only implements local type inference (which the user makes use of); languages like Haskell or the MLs support global type inference. That is, Swift does not infer function types.

2. Swift's pattern-matching support is more limited than Haskell's or Erlang's[0] and function toplevels aren't pattern-matched whereas Haskell's or Erlang's are.

[0] notably, you can't use refutable patterns in Swift, while they're very common in Erlang, and by default only a limited number of type classes — tuples, enums, ranges and atoms IIRC — can be matched; you can't destructure arbitrary structs by pattern matching. On the other hand, Swift's pattern-matching can be extended (by overriding ~=)
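
To illustrate that last point, a sketch (with made-up semantics) of extending pattern matching by defining ~=:

    import Foundation

    // Let a String prefix act as a switch pattern for Int values.
    func ~=(pattern: String, value: Int) -> Bool {
        return String(value).hasPrefix(pattern)
    }

    switch 2016 {
    case "20": print("starts with 20")
    default:   print("something else")
    }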


Isn't switch an example of refutable pattern matching in Swift? You can distinguish enums, and use where clauses:

    switch read(/* stuff */) {
    case 0:
        print("EOF")
    case let amt where amt > 0:
        print("Read \(amt)")
    default:
        print("Errored")
    }
The 'let amt where...' is a refutable pattern, isn't it?


Ah yes I forgot to write the "in var/let" part which was supposed to be there.

That is, you can only use refutable patterns in `switch` (and `if let`/`while let`, though I believe those are still restricted to Option, not general-purpose as in Rust - is that correct?). In Erlang you can use a refutable match in an "assignment"; that's basically a pattern assertion, a more general-purpose version of Swift's forced unwrapping.


> two things which Swift does support

For some value of "support". If you mean compared to C, yes it has pattern matching. If you mean compared to what Haskell, Erlang or Elixir has, not as much.

The author is one of the creators of Erlang so from his point of view pattern matching is severely lacking.

I have used Erlang, Python, C#, C++, and Java, and yes, I thought Python had pattern matching compared to Java. But then I used Erlang and realized that Python doesn't have pattern matching ;-)

It is one of those things that once you use it, it is very easy to notice when it is lacking.


> yes, I thought Python had pattern matching compared to Java.

Python doesn't have pattern matching at all though, and doesn't advertise such a feature. It has iterable unpacking.


You are right. I was saying that coming from C, it seemed like pattern matching and a really cool feature, until you find out how pattern matching works in Erlang or Haskell.

Even in Python, to its credit, iterable unpacking is pretty nifty. It can now even do stuff like:

https://www.python.org/dev/peps/pep-3132/

    >>> a, *b, c = range(5)
    >>> a
    0
    >>> c
    4
    >>> b
    [1, 2, 3]


> You don't have to forego all of the features of the IDE just because you prefer to do UI layout in code.

I don't understand why you would do your UI layout in code if you can avoid it. If someone ever joins my team or inherits my code base, they're now partly responsible for maintaining the UI. Xibs and storyboards help with maintainability and legibility, and drastically lower the number of lines of code. The author even says:

> Find an example program that works then reduce lines until I can reduce no more making a minimal example that works - then understand every line

If you care about the line count, then why are you doing all your UI in code?


Layout done in code:

- Is type-safe, which is verified at compile-time.

- Will break at compile-time if a breaking change is made.

- Can be null-safe, without having to use Optional for everything.

- Can do things iteratively.

- Can use constants, so that you can easily modify standard paddings and sizes globally with one change.

So those are some of the advantages of doing layout in code.
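
For illustration, a sketch of what that looks like with Swift 2-era UIKit (layout anchors and a hypothetical shared padding constant):

    import UIKit

    let standardPadding: CGFloat = 12   // change once, every screen follows

    func addTitleLabel(container: UIView) -> UILabel {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        container.addSubview(label)
        NSLayoutConstraint.activateConstraints([
            label.leadingAnchor.constraintEqualToAnchor(container.leadingAnchor, constant: standardPadding),
            label.topAnchor.constraintEqualToAnchor(container.topAnchor, constant: standardPadding)
        ])
        return label
    }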


Right but you'd have to document what each constant represents. Interface builder gives us a universal language. You know what the UI will look like or how it will resize etc. because it's right there. There's no confusing constant names with magic numbers. You know the why behind the constraint values because you can see it.


Constants are not magic numbers, and with proper naming they document themselves. For more custom UIs you won't know how the UI will look. It got better with @IBDesignable, but it's still not enough.


Well, until Xcode 6 (I think), neither xibs nor storyboards were indexable, meaning if you wanted to find something related to your layout, you had to search through XML. Also, it's much easier to refactor typed code than XML ;)


Wait, why would you fix your layout bugs by searching through XML? Why not just use interface builder?


Because you'd have to manually go through maybe >39 scenes spread across 10 storyboards, keep track of what's changed, and find the next outlet you want to modify.


xibs are not mergeable. Have 2-3 people working on the same xib and it's a mess to merge. They need to become human-editable. That is the major reason most larger projects avoid them.

Also xib tooling can be a pain for complicated layouts.


Yeah, I disagree. I work for an agency that's almost all giant big-brand apps. You just task developers with separate sections of the app. You can use multiple storyboards in your app (which comes as a surprise to some). So you break down the user flows so it's more manageable. The merge conflicts should be few and far between.

How is setting up constraints a pain in xibs? It's hardly any more difficult than typing it in code.


So basically, you have to lock xib files, coordinate that you're locking xib files, and are unable to work simultaneously on the same xib file? With Android you have human-editable xib equivalents, so you don't have to do any of that. Go to any large iOS project and you'll find them not using xibs eventually.

Also I'm guessing your agency doesn't have 10-100 iOS engineers working on the same app, for years, with different engineers in different buildings? You're probably small teams working on contract for a starbucks app for example? That is usually teams of 5 iOS engineers max, where xibs work.

xibs can be pretty annoying to work with if you have multiple overlapping views. It's simpler for me to deal with them in code. Masonry also makes specifying constraints pretty easy.


With all due respect... what in the name of holy hell mobile app can possibly require 100 engineers working together over several years all on the parts that touch the UI?

I can barely think of even the most complicated thick-client enterprise apps (where the UX is just awful and customizable to death with window upon window upon pane upon tab upon tab upon pane clusterf*ck layouts) I've ever seen that required anywhere near that many core UI engineers.

That just seems like a number of engineers toiling away in parallel that raises a "maybe you're doing it wrong" flag for me. :-/

This is genuine curiosity because I find that just epically flabbergasting.


Facebook is one example.

At my last company, there was 1 full-time iOS engineer (me) and we later grew to about 18 full-time iOS engineers. The "xibs are painful" point started happening around 8 full- and part-time iOS engineers.

You have many part-time iOS engineers who may need to touch UI once in a while, and that can cause issues. After a dozen rounds of "I have to throw away all of my xib work and recreate it because they've committed a new version of the xib", you become fine with just not using xibs any more.


Another annoying point is that Xcode tends to modify a xib any time you open it. You don't need to do anything, just open it and boom, it gets that "M" in source control.


Does he actually say anything wrong about Swift's type inference? I only found a spot where he mentions you need to declare types, which I thought was true at least in some spots.


For some reason reading the post gives me a deja vu of a win32 GUI programming tutorial :)


Hehe, must be all the HWNDs in the code :P


> The closure f does not capture the value of i at the time when f is defined, changing i after f was defined make nonsense of the idea of a closure.

I have an incomplete understanding of functional programming. Are there other uses for closures (besides readability and practicality) that still work if they don't capture the value of i? If not, is this something someone should suggest as a change in swift-evolution?


He's complaining about mutable state, essentially. The function `f` captures a reference to `i`, but does so transparently. One might expect that `f` would be free of side effects, but since `i` can be re-bound at any time, this is not the case.

Essentially it comes down to the semantics of `=`: whether it's declaration or assignment. Swift appears to choose assignment. If I remember right, you can declare things constant, which would make it declaration only (sort of), but clearly there's an expectation that reassignment happens all the time.


> He's complaining about mutable state

Bindings rather than state, the author has no issue with captured objects being mutable, only with the captured environment being mutable.


> The closure f does not capture the value of i at the time when f is defined, changing i after f was defined make nonsense of the idea of a closure.

Not sure why this "makes nonsense of the idea of a closure." You're closing over variables, not values. If you marked i using let instead of var, then you'd get the "preferred" behavior.
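
A quick sketch of that:

    let i = 5
    let f = { print("hello I'm a callback and i =", i) }
    f()   // always prints 5; a let binding cannot be reassigned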


This underscores the notion that functional programming is ultimately about immutability, and all the idioms expected by a truly functional language are really a result of how it handles state more than anything. A closure is usually expected to close over values, and the idea of a "variable" doesn't really mean the same in a truly functional language, which is where all the confusion and the pain points stem from.


Yes, if you come from functional languages then you might expect to close over values, but I don't think that in general "a closure is expected to close over values". In particular, both Scheme and Common Lisp close over variables, not values.

JavaScript also closes over variables, so it's not just "obscure" languages that behave this way.

Java only allows you to close over variables that are effectively final, to remove confusion on both sides (e.g. why doesn't the value in my closure change when I manipulate the original variable? vs. why does the value in my closure change when I manipulate the original variable?)


Swift feels very immature in many areas. For example, Swift's generics fall apart outside of the most ordinary cases. Today, you still can't restrict protocol conformance (the inheritance clause) to a type-constrained extension. That's a big issue with arrays. A struct wrapper is the go-to solution for many things where Swift generics break down, and it's not the best or most elegant way to resolve the limitations. Many hopes for Swift 3.0, but I doubt they'll fix those.
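
A sketch (made-up names) of the wrapper-struct workaround being described, since Swift 2 can't express the conditional conformance directly:

    protocol Summable {
        func total() -> Int
    }

    // Not expressible today:
    // extension Array: Summable where Element == Int { ... }

    // The usual workaround: wrap the constrained array in a struct.
    struct IntArray: Summable {
        let elements: [Int]
        func total() -> Int {
            return elements.reduce(0, combine: +)
        }
    }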


I agree, once you step outside the bounds of "text only" you bring on a whole mess of complexity. I've always disliked programming in Eclipse or VS, it just feels wrong compared to a nice simple .py file or whatever you're working with that can be used with any editor you want.

It seems like language complexity brought on IDEs, rather than anything else. They tried to fix the verbosity of C++ and in turn, Java, by creating new IDEs rather than a newer higher level language.


The lack of proper syntax highlighting and long lines is making this really difficult to follow on mobile.


It is actually generalized. Not only on mobile. Hard to read indeed but very interesting nonetheless.


> Above all it had a REPL and I could program outside Xcode (which I hate).

If that's the "above all", not sure what his beef is with Objective-C. You could always program outside of Xcode and various sorts of REPLs were/are available.


The author completely missed the point of named parameters. How am I supposed to know that the string argument of "make_window" is the title?


> The author completely missed the point of named parameters.

The author didn't miss the point of named parameters. At no point does the author even cover the point of named parameters. The author complains that:

1. they're inconsistent, all parameters but the first are named by default and named parameters are still positional (you can't reorder "named" parameters in the callsite)

2. making parameters purely positional is non-obvious and ugly

And he missed the part which takes the cake: Swift doesn't actually have named parameters. Parameters 1+ are labelled (#1), and labels and names are independent; you can define this:

    func foo(a b: Int, c d: Int) { … }
which is called with a:c:, but the bindings are on b and d. You can also define this:

    func foo(a b: Int, _ d: Int) { … }
which is called as `foo(a:5, 7)` which while somewhat consistent is really garbage.


In Swift terminology, a and c are "external names" while b and d are "local names." I don't see much of a difference between a "name" and a "label" here in any case.

I don't understand why the ability to set different external and local names is so bad. External names are part of the API, local names are part of the implementation. It often makes sense to make them the same, but there are cases where you want them to be different, and what's wrong with that? It's ultimately no different from loading a parameter into a local variable of a different name and then operating on that local variable. Yes, you can use this facility to make ugly APIs, but you don't have to.


> I don't understand why the ability to set different external and local names is so bad. External names are part of the API, local names are part of the implementation. It often makes sense to make them the same, but there are cases where you want them to be different, and what's wrong with that? It's ultimately no different from loading a parameter into a local variable of a different name and then operating on that local variable.

Then why have it in the first place? That seems like a very odd feature to put front and center into the language (considering a very similar effect can be achieved with a few assignments as the function's prelude) and require using when just wanting positional-only parameters, or consistently labelled parameters.


Can you explain what "and require using when just wanting positional-only parameters, or consistently labelled parameters" means? I don't understand that.

As for why you have it in the first place, it's just because one is API and one is implementation, and there are sometimes good reasons to have them be different.


> Can you explain what "and require using when just wanting positional-only parameters, or consistently labelled parameters" means? I don't understand that.

As far as I can see (and the article notes), by default the first parameter of a Swift function is unlabelled and the others are labelled. That is, at the callsite the first parameter cannot be labelled and the others must be. Having either all-positional or all-labelled (but still positional) parameters requires specifying external names.

> As for why you have it in the first place, it's just because one is API and one is implementation

But again and as you yourself noted that could trivially be done with local bindings inside the function in cases where it's desirable.


Yes, that's true, but note that you only need to specify the external names on the parameters that depart from the default. A function with external names on every parameter looks like:

    func whatever(a a: Int, b: Int, c: Int)
A function with no external names on any parameter looks like:

    func whatever(a: Int, _ b: Int, _ c: Int)
I'm not sure if you understood that or thought they all had to be specified if any were, but in any case that's how it looks.

As for "that could trivially be done," that applies to a lot of language features in a lot of languages. Virtually all language features are redundant in that fashion, yet they can still be useful by adding brevity and clarity.


The question is why is the first argument special? Also, it's weird that you have to explicitly mark the label as "_" to get positional parameters.

I find Scala's handling of named arguments much saner.


I think it's because the first argument tends to be named by the function itself. For example:

    button.setTitle("Ham and Cheese", color: red)
Note that named parameters are still positional, they just have names too. Using _ doesn't get you positional parameters, you already had that, it just removes the name. (Exception: named parameters with default values can be reordered with other adjacent named parameters with default values. Why? I don't know.)

It is mildly annoying that you have to add an extra symbol to remove the name for parameters after the first one, but that's the language pushing its preferred style.

How does Scala handle all this?


> Exception: named parameters with default values can be reordered with other adjacent named parameters with default values. Why? I don't know.

That's actually nice to know.

> How does Scala handle all this?

No idea about Scala. In Python 3, there are 4 "classes" of parameters:

* "positional-and-named", the basic parameter can be passed either by name or by position, though passing parameter 1 by name parameter 2 by position won't work: when actually calling the function, the interpreter first fills positional parameters left-to-right then applies named parameters, so it'll raise an error noting that one parameter got two values[0]. Can be either required or with a default value. Parameters passed by names can be passed in any order

    def foo(a, b, c=5): pass
    foo(1, c=42, b=3)
* positional varargs ("args"), can only be passed positionally, can follow any number of positional parameters, will be collected as an array

    def foo(*args): pass
    foo(1, 2, 3, 4)
* named-only ("keyword parameters"), follows either positional varargs or a special placeholder. These were not possible in Python 2 in pure Python, they're similar to 1 but can only be passed by name, they can be either required or with a default value. Like 1 but more so, named-only parameters can be passed in any relative order

    def foo(*, a, b, c=5): pass
    foo(b=6, a=2)
* keyword varargs ("kwargs"), may follow named-only parameters and will collect any parameter passed by name which didn't match any formal parameter, will be collected as a key:value map

    def foo(**kwargs): pass
    foo(bar=5, baz=42, qux=1)
The C API also allows creating true positional parameters (completely unnamed), as far as I know that's not possible in pure Python.

[0] the reverse won't work either, at callsite Python doesn't allow named parameters to be provided before positional ones


For Swift, a lot of the designs were due to the ability to automatically transform Objective-C APIs into Swift APIs. The big issue, from what I can tell, is in extracting the first parameter's name from the method name automatically.


That doesn't really explain the defaults, though. Translation could override the defaults if that fit better. This already happens when translating C functions, as none of the parameters get external names.


  > Then why have it in the first place? That seems like a very
  > odd feature to put front and center into the language
Cocoa is the answer.



