Swift 4.0 Released (swift.org)
386 points by runesoerensen on Sept 20, 2017 | 140 comments



"Swift 4 includes a faster, easier to use String implementation that retains Unicode correctness and adds support for creating, using and managing substrings."

This alone is grounds for opening and drinking very expensive champagne and/or wine.


As a new Swifter working on an app with my 12-year-old son, I resonate with your comment. We were near the end of our first app when we hit a need, in Swift 3, to figure out locale/language settings, which meant manipulating strings. I was shocked at how unobvious it seemed. Fortunately, I found a Swift playground cheat sheet and left it to my son to explore. A couple of hours later he had it worked out ;)


>> support for creating, using and managing substrings

Can I finally extract a substring from the middle of a string using integer offsets, in one line? Something like string.utf8.substr(3, 5)?

Or does it still take an absolutely insane and unacceptable 4 lines of code with throwaway temp variables to track the start and end indexes? The inability to do basic string manipulations is a major turnoff, to the point it makes the entire language look like a bad joke.


A String in Swift is a sequence of grapheme clusters, unlike most languages, which model strings as sequences of code points or just raw UTF-8 bytes, so there's no way to do efficient O(1) indexing. This is why the API is a bit more complex than you'd like.
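To make the difference concrete, each view counts different units (Swift 4 syntax):

    let cafe = "cafe\u{301}"             // "café" with a combining accent
    print(cafe.count)                    // 4 - Characters (grapheme clusters)
    print(cafe.unicodeScalars.count)     // 5 - code points
    print(cafe.utf8.count)               // 6 - bytes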


I keep getting this inaccurate answer of "they're just grapheme code points - you can't do it". That is why my example specifically used the .utf8 view. It's already possible to do exactly what I'm talking about, but it takes 4 complex lines of code, needing two calls to index() with startIndex/endIndex references and offsetBy arguments.
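For concreteness, the dance being described looks something like this (shown on the Character view; the utf8 view version is analogous):

    let s = "Hello, Swift!"
    let start = s.index(s.startIndex, offsetBy: 3)
    let end = s.index(start, offsetBy: 5)
    let sub = String(s[start..<end])     // "lo, S"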

The existing solution to this problem already uses integers (the offsetBy arguments)! I'm not talking about refactoring the String type, or breaking backwards compatibility. There would be no change in how the underlying String storage or access works. It's about adding a level of (zero overhead!) abstraction to simplify a common, basic operation.

Every other language I've ever touched has easy to use substrings. It makes absolutely no sense that Swift requires assigning throwaway constants to represent the start and end indexes for a simple substring extraction. There's this level of superiority regarding the purity and correctness of Swift's String type that clouds developers' ability to see that there is no limitation to doing this. It blows my mind that the most modern major language is this clunky.


You could do something like

  String(s.dropFirst(3).prefix(5))


    var wine = Set<String>()
    wine.insert("Champagne")


Oof, magic strings. Always use enums!

    enum Booze: String {
        case wine,
        case champagne,
        case beer,
        case whiskey
    }

    let wine = [Booze.champagne]
</ocd>


wine's type should conform to a Stack protocol so you can pop champagne.


I'm not sure if it was intentional but "case beer" made me smile.


Nooooooo... you're right about enums but this codifies the same error I was trying to elucidate with code. Here, let's try this:

   enum Booze {
     case Wine(region: String)
     case Beer
     case Whiskey
   }

   let drink = Booze.Wine(region: "Champagne")

Now... Anyone want to demonstrate how to use enums for the wine region or, say, whiskey subtypes? Should we use classes?
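For what it's worth, nested enums can model the subtypes; a sketch, with the region and whiskey cases invented for illustration:

    enum WineRegion {
        case champagne, bordeaux, rioja
    }

    enum WhiskeySubtype {
        case scotch, bourbon, rye
    }

    enum Booze {
        case wine(region: WineRegion)
        case beer
        case whiskey(subtype: WhiskeySubtype)
    }

    let drink = Booze.wine(region: .champagne)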


I feel like adding a pull request to add whisky to the list. Some of us prefer scotch.


Definitely have to +1 this. I'd be happy to write some tests to get scotch on board.


You don't need commas :)


Why are these String-backed, not just plain enums?


Because it's replacing a Magic String?


So strings are mutable? I'm not sure that's a reason to celebrate.


The example does not demonstrate mutability of strings; it is constructing a set of strings and inserting a string element. Only the set is being mutated.

But yes, in Swift, arrays, dictionaries, sets and strings are all mutable. They are also value types, so mutation is a purely local effect.
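For example:

    var a = [1, 2, 3]
    var b = a            // b is an independent value, not a reference to a
    b.append(4)
    print(a)             // [1, 2, 3] - mutating b did not affect a
    print(b)             // [1, 2, 3, 4]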


I think what the example is supposed to demonstrate is that champagne is a type of wine.


Would that not make passing around data expensive? Or are you expected to pass some kind of pointer instead?


Copy-on-write alleviates some of the overhead. Copies are only performed where required to preserve value semantics.


All of these can be either mutable or immutable, and converted as needed:

    let array = [1, 2, 3]
    var mutableArray = [4, 5, 6]
    var anotherMutable = array

Not sure how much optimisation is done under the hood, but the Swift compiler loves to complain about var declarations that are never actually mutated.


These are all copy-on-write boxed types with value semantics: the value you pass around is a pointer (with some additional data) and Swift has mechanisms/API which let the implementor copy the backing buffer on mutation if necessary: https://developer.apple.com/documentation/swift/2429905-iskn...
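A minimal sketch of that pattern (illustrative only, not the stdlib's actual implementation):

    final class Storage {
        var values: [Int]
        init(_ values: [Int]) { self.values = values }
    }

    struct CowArray {
        private var storage = Storage([])

        mutating func append(_ value: Int) {
            // Copy the backing buffer only if another value shares it.
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(storage.values)
            }
            storage.values.append(value)
        }
    }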


Strings (and arrays and sets and dictionaries) are neither mutable nor immutable in Swift, they're just values. Mutability is a property of where they're stored, not the types themselves. They work just like numbers do in most programming languages: you wouldn't say that Java's `int` is mutable or immutable, it just is. It may be stored in a mutable or immutable location, but the type itself doesn't have that property.


They implemented what looks like the Rust ownership model: SE-0176 Enforce Exclusive Access to Memory (https://github.com/apple/swift-evolution/blob/master/proposa...), but I'm having a hard time understanding the proposal, can anyone shed some light on this?


You can think of inout parameters in Swift as something analogous to a mutable borrow in Rust. Until Swift 4 we allowed overlapping inout access, for example:

    var counter = 0
    func foo(x: inout Int) {
      x += 1
      print(counter)
    }
    foo(x: &counter)

Note how 'counter' is read by 'foo(x:)' during an inout access of the same value. This is now prohibited in Swift 4, using a combination of static and dynamic checks.

This fixes some undefined behavior and will also enable more aggressive compiler optimizations to be added in the future.


Isn't the mutable borrow ended after the end of the x += 1 line, leaving you free to read the contents of the pointer?

I don't know Swift at all, but from their document on swapAt() it looks like they are trying to prevent calls like fn(&p, &p) where func fn(a: inout Type, b: inout Type).


> Isn't the mutable borrow ended after the end of the x += 1 line?

No, see Mike's comment here.


> Note how 'counter' is read by 'foo(x:)' during an inout access of the same value

It's not clear to me in your example why reading the value of counter after mutating it is bad; why is this now prohibited?


Swift specifies inout parameters as copying the value that's passed in, giving the copy to the callee as a mutable value, and then writing back the value to the original storage after the function returns. & is not an "address of" operator.

Of course, it would be inefficient to do this all the time, so Swift will optimize copy-then-writeback to just passing a pointer to the original storage whenever it can. But this is an optimization operating under the "as if" rule: as long as it works as if it does a copy-then-writeback, the compiler can make it actually do whatever it wants.

If the example code were legal, then it would have to print `0`, because the writeback to `counter` doesn't happen until the end of the function. That means the compiler couldn't just pass in a pointer to `counter`, but would have to actually go through the copy-then-writeback procedure it's supposed to do, so you'd lose out on optimizations.

Instead, Swift makes it illegal. You can't access a value while this call is happening. That allows the language semantics to coexist with optimizations.


What does & do in Swift then? And what's the purpose of even having inout and & if you can't count on & actually taking an address?


& is just a sigil saying "I acknowledge that I am passing this as an inout parameter, and therefore the value may be modified by the function I am calling." Note that & is only legal on a function parameter. You cannot write, for example, `let b = &a`.

The purpose is to allow for out-parameters. A classic example would be the `+=` operator. (Swift operators are just normal functions with special call syntax.) It takes its first parameter as `inout` so that it can mutate the value.
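To illustrate, here's a custom exponentiation-assignment operator declared as a plain function (the **= operator is invented for this example):

    infix operator **=: AssignmentPrecedence

    // An operator is just a function; the inout first parameter lets it
    // mutate its left operand, exactly like the standard library's +=.
    func **= (base: inout Int, exponent: Int) {
        var result = 1
        for _ in 0..<exponent { result *= base }
        base = result
    }

    var n = 3
    n **= 4
    print(n)     // 81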

Note that inout parameters work with expressions where it would be impossible to take the address. For example, you can use & on a computed property that has a setter. In that case it has to read the initial value, pass that to the function, then write back the new value, because it has no idea where the computed property actually stores the value, if anywhere.
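A sketch of the observable behavior (the Wrapper type is invented for illustration):

    struct Wrapper {
        private var stored = 0
        var value: Int {                 // computed property with a setter
            get { print("getter"); return stored }
            set { print("setter"); stored = newValue }
        }
    }

    func addTen(_ x: inout Int) {
        x += 10
        print("function body")
    }

    var w = Wrapper()
    addTen(&w.value)
    // Prints "getter", "function body", "setter": the value is read,
    // the copy is mutated, and the result is written back on return.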

Edit: because I'm obsessive and weird, I made a quick example of this computed property stuff:

http://swift.sandbox.bluemix.net/#/repl/59c284376cbea87f72c4...

Click the play triangle at the bottom to see the output.


> (Swift operators are just normal functions with special call syntax.)

Coming from Haskell and Rust it's nice to see this trend catching on.

Is Swift planning to introduce a distinction between borrows and mutable borrows to the user? From what you describe it seems like right now syntax-wise a borrow and a mutable borrow look the same, and the runtime makes some decision about it.

edit: Or I guess it could be the opposite. Since Swift passes by value always unless the runtime can optimize (right?), you could just not write & and inout and cross your fingers it gets optimized to a borrow rather than a copy?

Stuff like this makes me prefer the explicitness of Rust. It seems like here on the surface it's abstracted from you, but really you need to know the rules anyway or you could get into trouble.


> Coming from Haskell and Rust it's nice to see this trend catching on.

It was already like that in older languages like Lisp, Smalltalk and CLU; it's just that C++ did it differently.


What's the use of a non-mutable borrow? Is it just for speed, to avoid copying a large structure? Large structures are rare in Swift and probably not important to optimize. (Value types like String and Array are actually just one reference under the hood, and that's all that gets "copied" when you pass those by value.)


It also lets one avoid reference-counting traffic, which can be significant for values that contain multiple ARC/COW things (such as strings and arrays), and is semantically critical with move-only (or "unique ownership", per the ownership manifesto[1]) types.

[1]: https://github.com/apple/swift/blob/master/docs/OwnershipMan...


Unique ownership would be really nifty, and it makes sense that you'd definitely need pass by reference for it. Thanks for explaining!


I don't know Swift, but non-mutable borrows are useful in a large number of languages. Are you sure a String is represented as a single reference? In Rust a String has a reference to the actual contents on the heap, a length, and a capacity. So that's a bit heavier than just a single reference; same thing with Vec. Rust's semantics are similar in that if you pass a Vec it will 'move' those 3 things; still, passing a string or vector slice (an immutable borrow) is faster and moves less data, because it only copies a pointer and a length.

I find it hard to believe that 'large structures are rare in Swift'. People don't make structs with multiple fields? You never want other structs to hold one or many of those? These are cases where non-mutable borrows are useful. I understand that in Swift this is probably abstracted away from you and done by the runtime (or compiler) if it can, but that doesn't mean non-mutable borrows aren't useful; you just probably don't see them.

Of course, that's just a guess, I don't know Swift.


You're right, String has three fields. It's Array that's just a single reference. Is the overhead of passing three fields as a parameter so high that passing a reference is faster? Pointer chasing isn't free either, after all. I can certainly imagine scenarios where that sort of microoptimization pays off, but it seems like it would be rare.

As far as large structures go, I'm thinking "large" like hundreds of fields. Any time I've seen people concerned about large structures, they misunderstand the value-typedness of things like String and Array and are worried about the contents of those things, which isn't really part of the size of the struct itself. But that's just what I've seen.


From the documentation, Swift's Array has the same behaviour as Rust's Vec (it's growable, and it allocates double the capacity when it fills up). I'd find it pretty odd if it didn't share the same three-word layout as String. Also, how would it quickly know its length if it didn't store the length in the same structure as the pointer to its heap storage? You'd have to chase two pointers just to get the length.

I'd guess that Swift also has something analogous to an array slice, which would be a (possibly) immutable borrow to a chunk of array data on the heap. This also happens to be a good use case for borrows.

From the other comment here it seems like Swift is pursuing an ownership model similar to Rust's, in which case immutable borrows will become more important when you think about struct contents. You can only have a single owner, but you can specify many borrowers. This kind of thing is important when you have an array or vector of values: often you don't want those values to have a single owner, but you do want them to be populated or referenced from somewhere else.

Anyway, don't dismiss the concept out of hand. Immutable borrows definitely have their uses, whether it's made explicit to you or not in Swift is another thing entirely.


The array capacity and length are stored inline before the contents. So you chase one pointer to get the capacity, length, or something stored in the array.

Swift does have ArraySlice, but I don't get how borrows factor into that. Seems vaguely similar in concept, except ArraySlice exists to represent a subset of the original array, not just so you can pass arrays by reference. Since arrays are already passed by reference under the hood, that wouldn't really be useful.

I'm not dismissing the concept out of hand, so I'm not sure why you're warning me about that....


Still new to Swift, but I believe & is explicit syntax necessary to make clear in the code that the function being called mutates the argument, and thus the variable passed into that function could be mutated. It must be used wherever an argument is inout. It makes mutation explicit both in the function declaration and in the function call. Which is nice!

It's good for code readability but also prevents accidentally passing a variable to a function that could mutate it when you weren't expecting that, and vice-versa.


Values are not guaranteed to even have an address, IIRC.


Thanks Mike, very clear.


Note: I like to read about Rust but don't work with it seriously, and don't follow Swift at all. Corrections welcome. That said, these seemed like the key passages:

"Swift has always considered read/write and write/write races on the same variable to be undefined behavior. It is the programmer's responsibility to avoid such races in their code by using appropriate thread-safe programming techniques."

"The assumptions we want to make about value types depend on having unique access to the variable holding the value; there's no way to make a similar assumption about reference types without knowing that we have a unique reference to the object, which would radically change the programming model of classes and make them unacceptable for the concurrent patterns described above."

Sounds like a system in the vein of Rust but more limited, with more runtime checks and no lifetime parameters, falling back to "programmer's responsibility" when things get hard. The last paragraph makes it sound like one of the motivations is enabling specific categories of optimizations, as opposed to eliminating races at the language level.

One of my biggest questions as a reader is how a language like C handles these cases that Swift can't handle without these guarantees. Is this a move to get faster-than-C performance? Does C do these optimizations unsafely? Is there some other characteristic of Swift that makes this harder than C? Closures get a lot of focus in the article...


The other responses to your comment are correct: C generally can't do those optimizations, unless you manually write `restrict`, and this can hinder optimization. But to complete the picture -

> Is there some other characteristic of Swift that makes this harder than C?

Yes:

1. Swift doesn't have pointers.

Instead, you have a lot of copying of value types, and the compiler has to do its best to elide those copies where it can. For instance, at one point the document mentions:

> For example, the Array type has an optimization in its subscript operator which allows callers to directly access the storage of array elements.

In C, C++, or Rust, you can "directly access the storage" without relying on any optimizations: just write &array[i] and you get a pointer to it. The downsides are (a) more complicated semantics and (b) the problem of what happens if array is deallocated/resized while you have a pointer to it. In C and C++, this results in memory unsafety; in Rust, the borrow checker statically rules it out at the cost of somewhat cumbersome restrictions on code.

2. Swift guarantees memory safety; C and C++ don't.

This goes beyond pointers. For instance, some of the examples in the document talk about potentially unsafe behavior if a collection is mutated while it's being iterated over. In Swift, the implementation has to watch out for this case and behave correctly in spite of it. In C++, if you, say, append to a std::vector while holding an iterator to it, further use of the iterator is specified as undefined behavior; the implementation can just assume you won't do that, and woe to you if you do. (In Rust, see above about the borrow checker. Iterator invalidation is in fact one of the most common examples Rust evangelists use to demonstrate that C++ is unsafe, even when using 'modern C++' style.)


> 1. Swift doesn't have pointers.

Sure it does:

    func modify(_ x: UnsafeMutablePointer<Int>) {
        x.pointee = 12
    }

    func main() {
        var x = 23
        print("Before \(x)")
        modify(&x)
        print("After \(x)")
    }

    main()



> Swift guarantees memory safety

But memory leaks are quite easy to create with reference counting, which is why Swift has extra syntax (weak and unowned references) to avoid strong reference cycles. It can take skill to understand when to use those techniques, and the compiler doesn't always find these problems, so there really isn't the guarantee you mentioned.
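The classic case is a reference cycle between two objects, which ARC can't reclaim unless one side of the cycle is weak (or unowned):

    class Parent {
        var child: Child?
    }

    class Child {
        // A strong reference back to Parent would form a cycle and leak
        // both objects; declaring it weak breaks the cycle.
        weak var parent: Parent?
    }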


Memory leaks are not the same as memory safety problems.


Those Rust evangelists are only partially correct. It's the STL that's unsafe, not the language itself.

Iterators could be implemented in C++ in a safer way with some performance loss, but it doesn't seem to be a priority for anyone except the safercpp guy that posts here every now and then. STLs can enable iterator validation in a special debug mode.


That is incorrect.

The language includes memory unsafe constructs without marking them in any way, since it must be compatible with C.


We're discussing iterators here, and the STL iterators implemented as class templates can't even be compatible with C.

To clarify: safe containers, iterators and algorithms can be designed, but they don't seem to be a priority of the C++ community. Personally I'm quite scared of accidentally passing the wrong iterator to some function, but OTOH I can't recall it ever happening. I don't use the debug STL either, haven't needed it.

The examples that pcwalton keeps bringing up seem artificial to me. It's true that you can't have perfect safety in C++, but with some effort and custom libraries, many errors can be caught at compile or run-time. The advantage of Rust is that it's safe by default, not necessarily that there's a major safety difference between quality C++ and quality Rust.


You’ve probably heard this before, but the security angle is important. “I haven’t had these problems in my code” really means “I haven’t triggered these problems in my code”… that is, unless you’ve had a security code audit done. Testing isn’t enough: even well-tested codebases can and do have vulnerabilities. In practice, they’re usually triggered by input that’s so nonsensical or insane from a semantic perspective that not only would it never happen in ‘legitimate’ use, the code author doesn’t even think to test it. For a simple example, if some binary data has a count field that’s usually 1 or 2 or 10, what happens if someone passes 0x40000000 or -1?

As a security researcher myself, I think it’s actually easier to audit code with less knowledge of how the design is supposed to work, up to a point, because it leaves my mind more open. Rather than making assumptions about how different pieces are supposed to fit together, I have to look it up, and as part of looking it up I might find that the author’s assumptions were subtly wrong… For this reason, it’s really hard to audit your own code, at least in my experience. You can definitely keep reviewing it, building more and more assurance that it’s correct, but if your codebase is large enough, there may well be ‘that one thing’ you just never thought of.

I’m not actually sure how frequent iterator invalidation is as a source of vulnerabilities; I don’t think I’ve ever found one of that type myself. However, use-after-frees in general (of which iterator invalidation is a special case) are very common, usually with raw pointers. In theory you can prevent many use-after-frees by eschewing raw pointers altogether in favor of shared_ptr, but nobody actually does that – that’s important, because there’s a big difference between something being theoretically possible in a language and it being done in practice. (After all, modern C++ recommendations generally prefer unique_ptr or nothing, not shared_ptr!). And even if you do that, you can’t make the `this` pointer anything but raw, and same for the implicit raw pointer behind accesses to captured-by-reference variables in lambdas.

You can definitely greatly reduce the prevalence of vulnerabilities with both best practices for memory handling and just general code quality (that helps a lot). But if you can actually do that well enough - at scale - to get to no “major safety difference”, well, I haven’t seen the evidence for it, in the form of large frequently-targeted codebases with ‘zero memory safety bugs’ records. Maybe it’s just that C++’s backwards compatibility encourages people to build on old codebases rather than start new ones. Maybe. It’s certainly part of the story. But for now, I’m pretty sure it’s not the whole story.


C and C++ don't have a culture of safety, they have one of performance.

C++ code could be written significantly more safely, with a performance loss, e.g. index checking at run-time, iterator validity checking, exclusive smart-pointer usage with null checking, etc. That, together with code reviews and static & dynamic analysis, should IMO lead to comparable safety. That's what I'd do.

However, there doesn't seem to be a rush in that direction. My guess is that there won't be a rush to switch to Rust either.

Is the security angle so important that it should be handled through education and better tooling? Or only important enough to do some code audits and pen testing?


> STLs can enable iterator validation in a special debug mode.

And they do actually, in the MSVC debug mode and with libstdc++'s -D_GLIBCXX_DEBUG. But nobody ever wants to use them.


C compilers must assume that a function pointer (that is, a function passed as an argument, or that is a property of an object) may write to any global variable.

C compilers must also assume that any two pointers to the same type may alias (refer to the same object). The programmer can assert to the compiler that a pointer does not alias any others used in the same scope by declaring it with the `restrict` keyword.

For most functions this won't have much effect on the generated code. Writing equivalent functions to the ones in the swift-evolution doc in C, both with and without `restrict` everywhere possible, it looks like `restrict` only has an effect on the generated code for `increaseByGlobal`: https://godbolt.org/g/W8s3BA


> One of my biggest questions as a reader is how a language like C handles these cases that Swift can't handle without these guarantees.

C doesn't address these issues at all as far as I know.


This is the first step toward adopting more Rust-like memory safety, but not at the expense of productivity.

Basically Swift will keep using reference counting as its GC algorithm, but for high-performance situations it will be possible to have a bit more fine-grained control over ownership.

However, they want to avoid any design that might result in a "fighting with the borrow checker" feeling.

Some info from WWDC 2017,

https://developer.apple.com/videos/play/wwdc2017/402/

There is also a transcript.



Here's the 'Ownership Manifesto' that tries to clarify some of the differences between the ownership models of Rust and Swift [1]. The main point raised in that document is how 'shared values' are being implemented in Swift in a less strict way compared to Rust.

[1] https://github.com/apple/swift/blob/master/docs/OwnershipMan...


The new JSON parser is really slick. https://developer.apple.com/documentation/swift/codable
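For example, decoding JSON is now just a conformance declaration plus JSONDecoder (a minimal sketch, also using Swift 4's new multiline string literals):

    import Foundation

    struct User: Codable {
        let name: String
        let age: Int
    }

    let json = """
        {"name": "Ada", "age": 36}
        """.data(using: .utf8)!

    // Force-try for brevity; real code would use do/catch.
    let user = try! JSONDecoder().decode(User.self, from: json)
    print(user.name)     // Ada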



Author of the article here! Thank you for the kind words and thanks for sharing!

I was wondering why that entry had a huge spike in reads yesterday :D (1500 reads against a daily average of 200).


Glad I could help. Thank you for writing it!


Thanks for this article. I have it on my list of things to read on my way to work!!!


Am I the only one who thinks it's weird that Codable is a magic interface? Are there any other "magic" interfaces?


It's not that weird - it's simply that Swift currently has no meta-programming features, in the interest of focusing on many other aspects of the language. As a result, Codable had to be baked directly into the language, or use private reflection APIs not available to the public.

Objective-C does have meta-programming features [0] and that's how all the really nifty features were made possible (KVO, CoreData, etc)

What Swift is aiming for is to be a performant, general-purpose programming language. This restrains the amount of clever stuff you can do. That, plus the inter-op requirement, has been an incredible hurdle when it comes to focusing on features that people actually want out of Swift.

A lot of effort has gone into being able to use Swift with existing Obj-C libraries - I sincerely hope they'll at some point say 'no more interop, only Swift, and by the way we've rewritten the UI libraries to use generics, so that you can do more than cute generic demos in your code.'

[0] https://genius.com/Soroush-khanlou-metaprogramming-isnt-a-sc...


   currently has no meta-programming features, in the interest of performance
This is a false dichotomy.


You're right, I'll edit that. Some meta-programming features do require sacrifices in performance, but not in the case of Codable.


And some can increase performance. Thanks for thinking about this though, it's one of the things I appreciate about this place.


IIRC they were already doing a sweep over the UI libraries to make them more Swift-idiomatic, but probably (like you say) while retaining the obj-c interface. They probably could make an extension or aliases to the UI libraries only visible to Swift though, translating down to regular obj-c compatible calls.


They've done an excellent job of going as far as you can, without re-writing parts of the code to not be backwards compatible.

My personal desire is to see a backwards incompatible UI layer, that makes generics usable.

As a simple example - you currently can't have a UITableViewCell<YourModel>, which is the most obvious and proper use of generics for UI logic.

So as it stands, all this work has gone into mixing polymorphism and subclassing, and yet it doesn't actually get used a whole lot, because the existing libraries were written when generics were not around.


Equatable and Hashable also have some "magic", currently only for enums without associated values. Recently this was generalized to enums where all associated values are themselves Equatable and Hashable, as well as structs where all fields are Equatable and Hashable. This is not implemented in Swift 4, but an implementation was recently added to the master branch. You can read the details here: https://github.com/apple/swift-evolution/blob/master/proposa...
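Concretely, once that lands, a conformance like the following compiles with no hand-written == or hashValue (this is the proposed behavior, not Swift 4.0's):

    struct Point: Hashable {     // Hashable implies Equatable
        let x: Int
        let y: Int
    }

    // In Swift 4.0 itself you still supply the boilerplate yourself:
    // static func == (lhs: Point, rhs: Point) -> Bool {
    //     return lhs.x == rhs.x && lhs.y == rhs.y
    // }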


Finally, gson / json.net for Swift. Before this, every statically typed JSON schema had to be produced with a code generator or written by hand.


> Some source migration will be needed for many projects, although the number of source changes are quite modest compared to many previous major changes between Swift releases.

It seems like the language still needs some time to be completely mature, or am I wrong? I haven't used it since 1.0


I got burned by the migration from 1.0 to 1.something and haven't used it since. Looks like it was the right choice, breaking changes are a major failure in language design IMO.

I understand why they do it, but I don't like that they are getting away with it. Apple seem to be in this endless update cycle where they change stuff across hardware, operating systems and even programming languages. The customer doesn't have much choice: they either keep up or get left behind.


I disagree. Compare macOS today with the initial OS X release and it is fundamentally the same system, both UI-wise and code-wise. Windows and even Linux have seen far more dramatic UI shifts in the same period.

Same goes for the code. Windows went from the Win32 C API to a C++ API with MFC and ATL. Then they promoted COM and Visual Basic development, only to change gears and make C# with a completely different API. Then WinForms gets deprecated, we get Silverlight, WPF, and then there has been even more stuff in later years and I lose track.

Compare this to macOS, which ever since OS X has used almost exclusively Objective-C and Cocoa. Even now with Swift you can use Cocoa and pretty much all the Apple technologies you already knew. C# made all previous Win32 and VB skills redundant. The introduction of Java, Python, Ruby and Go did not let you reuse any previous skills or libraries. In this regard Swift is an amazing accomplishment, as you can reuse so much existing skill and code.


Can you take code written for the initial Mac OS X release, compile it and run it on the last version? You can do that for Windows.


They deprecated a lot of UI methods in the 10.4 timeframe so I would bet not. I’m sure someone has some code, but it is probably very limited.


> Compare macOS today with the initial OS X release and it is fundamentally the same system both UI wise and code wise

Try to make use of the JavaBridge, MacRuby, Quicktime, Carbon, Objective-C GC.

I can get a few more examples if you wish.


So, same story as Windows?


I wanted to create an iOS app and due to the broken Swift backwards compatibility I abandoned it. I don't care about MS, VB and the other stuff and don't see the connection.


You could have written it in Objective-C.


The Swift team != Apple. The breaking changes they've made provide substantial improvements to the language. And they've always been up front about this, if you were developing with 1.0 you knew they were warning that it was incomplete and subject to much change. Fortunately their migrator has made most upgrades nearly automatic.

I'd much rather have breaking changes than having to constantly deal with dangling pointers and excessively verbose syntax.


They shouldn't have called it 1.0. It confuses people.


I suspect this was not entirely their choice, but Apple marketing being averse to releasing non 1.0 products to the public.


> breaking changes are a major failure in language design IMO.

So I presume you:

a. get every major design decision on a project right on the first try

OR

b. don't care and let people live with bad design for the lifetime of the product


c. you develop the language openly for some time before calling it production-ready (as happened with Kotlin)


If you work somewhere like Apple, you might not have that choice.


The pace of breaking changes is slowing down. Swift 3.1 was intended to be source compatible with 3.0, and Swift 4 has a "Swift 3.2" mode which tries to be source compatible with 3.0/3.1.
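If I remember right, the compatibility mode is selected with the compiler's -swift-version flag (Xcode exposes the same thing as a per-target build setting):

    swiftc -swift-version 3 main.swift   # Swift 4 compiler, Swift 3.2 compatibility mode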


So, you agree with my comment? It still needs a bit more time to stabilize.


It seems it does, but they do put in effort to make the transitions as easy as possible.


Language stability may not be a goal.


That is a problem for medium-big projects.


Not really. Xcode will automatically translate swift code to update it to a new version, and it has worked pretty well in most cases. The exception being the v2 to v3 transition, which was pretty onerous.

My experience with some medium projects is that it costs me a morning every update. I'm okay with that because Swift is a much more productive and higher-quality language than Objective-C.


Swift is not meant to target only Xcode. I am a Linux user and I want to use Swift for server development.


Converting two existing projects was pretty painless, though the larger and older of the two had a significant number of automatically generated changes. Most of them were (Void) in blocks changed to (). Another nice extra is mixing 3.2 with 4.0 libraries.


The changes to tuples in closures were a little annoying too, with the jump to 4.0. But on a whole it was pretty painless for us too.


With this yearly release cycle and the amount of investment Apple is making, I think Swift 6 or 7 could become a general-purpose language for client/server code on non-Apple platforms too.


I see some mentions of server APIs on their website. With these libraries in place, how comparable is Swift to golang?


Don't do it; the compile & indexing times alone get pretty bad as your code scales up. The debugger also just crashes when your codebase gets big, and becomes uselessly slow when it doesn't. Backends have far more code than a typical client app, so if you're successful you'll run into this quickly enough. Swift needs more maturity as a language before using it in a backend is a good idea.


I think it's nicer to program in swift than go.

It's already possible to do so, by the way.

http://perfect.org/

What I think Go has over Swift is complete independence from platform. You can write Go code on OS X and compile it to a Linux binary and it Just Works!

Swift on Linux is not perfect yet, though it can be made to work with some effort.


Swift has no quality high-level concurrency primitives - until those arrive, Swift is a complete non-starter as a server language.

Concurrency is at the level of proposals currently, so it'll be another couple of years until they come up with something, and another couple until good server libraries/frameworks figure out how to use them properly.

It's a shame really - I imagine this could've been moving along at a faster rate had Chris Lattner stayed on with Apple and recruited some extra help.

He has written a really nice proposal for concurrency, that'll definitely influence Swift going forward. [0]

[0] https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...


Just to add to this, probably the biggest, most important reason to have language-level concurrency primitives is the 3rd party package ecosystem.

In Go, Erlang and Node.js, packages don't need to make any decisions about what kind of concurrency to use. The postgres driver in Go will expose a blocking API, and the postgres driver in Node will expose a continuation-passing-style API, or use promises. Everyone who uses those languages already knows how to consume those APIs. In these languages it's very hard for an experienced developer to make concurrency mistakes - you can instead spend all your time thinking about the actual problem you're trying to solve.

In C, Swift and Rust this isn't true (yet). Despite all three of these languages having lots of tools for concurrency, the lack of standardisation means you can't just grab any postgres library. You also have to look at how that library does threading and figure out how to correctly integrate it with the threading model you're using in your own code. (Does it depend on tokio? Does it use posix threads? GCD? Green threads? Async?). This turns the 3rd party package ecosystem in these languages into a fragmented minefield.

As far as I know there are 4 main Swift-based HTTP server libraries[1] at the moment. Each one works slightly differently. Because they all roll their own concurrency tools, they're all mutually incompatible. Each one has implemented its own static file server, its own HSTS header middleware, its own JSON parser, etc. I'm not sure about Erlang, but in Node and Go, because the concurrency primitives are standard, the frameworks are all similar enough that you can migrate code between them or wrap a tool that's designed for one to work with another.

(Of course rust has tokio, but tokio is being rewritten[2]. Rust also has mio and raw threads. In a few years we'll probably have strong standards for both swift and rust, but until then the ecosystems will be a mess compared to the bliss of node and go.)

[1] https://medium.com/@rymcol/current-features-benefits-of-the-... [2] https://tokio.rs/blog/tokio-reform/


I agree, for the most part, but then again, C also has no high-level concurrency primitives, and basically every server ever is written in C.

You can use Grand Central Dispatch, and it works. It's not as wonderful as goroutines and channels, but hey, it works.


I hear you. C has performance because it doesn't have to do any sanity checks and because 20 years of serious effort have gone towards optimizing everything possible.

The very motivation behind creating Swift has been to have performance without the undefined behaviour that causes innumerable security breaches and unintentional complexity in compiler design and everyday programming [0]

To me, Grand Central Dispatch feels one small step above semaphores/locks/monitors.

There has been a lot of research in distributed computing and concurrency - I'd like to see libraries, language constructs and runtimes based on research fresher than what was published before I was born :)

[0] http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


>Swift has no quality high-level concurrency primitives - until those arrive, Swift is a complete non-starter as a server language.

That's just false. Server-side projects that require concurrency are a tiny minority.


Not disagreeing with you; I just wanted to know why you think this way. Wouldn't you want easy concurrency when building a high-performance HTTP/TCP server?

Also, to the parent: why do you think "swift has no quality high-level concurrency primitives"? GCD and blocks, along with some promise library, should make it easier than goroutines and channels, no?


I think "server-side projects" is maybe too broad a term. Yes, there are plenty of server-side software where concurrency pervades the codebase. But if you're making a web app, for example, it's common for all of the code you actually write (the "business logic") to be single-threaded.

Things like the core request loop and the DB connection pool will need to handle concurrency, but a few people will have figured out how to get that stuff working and you can just use their libraries.


Thank you for that link; it was a very worthwhile read. Two things stand out:

1. Lattner's ability to present complex topics in an approachable way is very impressive.

2. I'm really excited for Swift 8.0 (my best guess at the first version that will include all this).


Isn’t one of the most popular server languages JavaScript?


No, JavaScript is not even close; it has only gotten popular in the past few years. Most server apps are written in Java, C#, Ruby and Python.


PHP?


Hush, we don't speak of that.


People have been using Python for server-side applications for years, and not only did it have no concurrency primitives (until very recently), but it has a GIL that makes multi-threaded programs impossible.

According to the documentation at perfect.org, it supports asynchronous request handling.

I tested it locally a while ago, and IIRC it could utilize multiple cores just fine.


Python and Ruby got adoption because they provided frameworks that made writing websites easy - Django and Rails respectively.

Node became popular because now front-end devs could write their own backends.

Go became popular because of Google and its concurrency primitives (as far as I know).

Swift is going to become popular because of Apple and... fill in the blank.

My take is that it'll take good performance and easy-to-understand concurrency primitives that make writing libraries and using them a joy - that's the only way.

That'll be the differentiating factor that'll entice people to pay attention - otherwise, as you have pointed out, there are plenty of alternatives.


>Python and Ruby got adoption because they provided frameworks that made writing websites easy - Django and Rails respectively. Node became popular because now front-end devs could write their own backends.

So you are saying concurrency primitives aren't required for a successful server language?


Swift is at least an order of magnitude faster than Python. It provides static typing. It compiles to native binaries. It's still fun to program in. There are already frameworks that allow programming web servers in Swift. It's more expressive than Go.

It's a matter of time for it to get traction.

The only thing python/ruby have going for them at the moment is momentum.

Remember, in 2005 Python was a fringe language on the web. Most web programming was done in PHP.


There was a post about energy use recently. C, C++, Rust and Swift were often sharing the top 4 slots for various efficiency metrics, and I would guess that energy efficiency is a good proxy for execution speed as well. Concurrency support is planned for Swift; maybe efficiency, friendly syntax and concurrency support will be a winning combo?


There's also https://vapor.codes/ and Kitura


Ubuntu 16.10 is already out of support, they should update that. Artful is out in a month or so.


Note that although only Ubuntu is officially supported, you can run the 4.0 release on Debian Stretch without any problems. Just download the package, unpack it and add the bin/ subdirectory to your $PATH.

Edit: the REPL still doesn't allow importing Glibc by default; you need to specify the include path manually (e.g. swift -I /opt/swift-4.0-RELEASE-ubuntu16.10/usr/lib/swift/clang/include/).


There's a good summary of this release here: https://www.infoq.com/news/2017/09/swift-4-official-release


Ignorant question: What other language is Swift like most? And what improvements does Swift make over that language?


Kotlin, Rust, Scala and vaguely Go. Swift's benefit over Kotlin is that it easily interfaces with existing Objective-C and C code. It also has deterministic garbage collection through automatic reference counting, which is better for interactive systems and maintaining a low memory footprint. Since it compiles to native code and doesn't have a complicated garbage collector, it can be used like C to create libraries for other programming languages; this is impossible to do with Go and Kotlin.

Compared to Rust it is considerably easier to learn.

Go is probably easier to learn than Swift, but Swift is a lot more expressive. You could do scientific computations in Swift, but that would be cumbersome in Go, since without operator overloading you can't easily define matrix, vector and point types yourself.


People are posting comparisons to Kotlin, but for the record here's Chris Lattner's take on the Swift/Kotlin comparison:

---

Swift and Kotlin evolved at about the same point in time with the same contemporary languages around them. And so the surface-level syntax does look very similar. … But if you go one level down below the syntax, the semantics are quite different. Kotlin is very reference semantics, it’s a thin layer on top of Java, and so it perpetuates through a lot of the Javaisms in its model.

If we had done an analog to that for Objective-C it would be like, everything is an NSObject and it’s objc_msgSend everywhere, just with parentheses instead of square brackets. And a lot of people would have been happy with that for sure, but that wouldn’t have gotten us the functional features, that wouldn’t have gotten us value semantics, that wouldn’t have gotten us a lot of the safety things that are happening [in Swift].

I think that Kotlin is a great language. I really mean that. Kotlin is a great language, and they’re doing great things. They’re just under a different set of constraints.

---


Kotlin is close


This is great news.

However, I wish the macOS release were available as standalone binaries, without having to come along with Xcode.


The Command Line Tools include the Swift compiler without needing Xcode installed. Open up a terminal and type “swift” to enter the REPL or “swiftc” to compile a Swift File. SPM is included too.

If they’re not already installed, running one of those commands will prompt you to install them.


If I already have swift-3 and Xcode 8, how do I install only swift-4?


You can download a toolchain snapshot from here. There's a swift-4.0-branch download if you scroll a bit.

https://swift.org/download/#snapshots


Keep in mind that binaries built with the toolchain snapshots cannot be submitted to the App Store. You need an Xcode release for that.


Do you have homebrew installed? If so...

    brew install swiftenv


I received a Command Line Tools update today; it brought Swift 4 with it.


Why couldn't they make a PDF version of The Swift Programming Language? I hate ePub.


I'm not sure how well the conversion will capture links and such, but you can use Calibre[0] to do that conversion on all the major platforms, iirc (at least as a workaround).

[0] https://calibre-ebook.com


Being voted down is nonsense. 1) I've tried ePub and it just doesn't work as well as PDF on my iPad mini, which is otherwise perfect for reading documents. 2) Virtually everyone sells their books in both PDF and ePub formats, so I don't see why they can't do the same here.


I've bought an iPad Mini specifically to read ePubs. What do you think is broken on iBooks?


Your original comment is being voted down for simply "hating" ePub without providing any reason.

Your second comment is being voted down because ePub works just fine in iBooks and on iPads for many of us, again, you need to provide details.



