It looks like a really well thought out language, and the willingness to break some things that looked like C holdovers in 3.0 for future clarity is also interesting.
I'm wondering if Swift will hit a sweeter spot than Go (GC, weird takes or delay on things like packaging, generics) and Rust (complexity) for userland systems software.
My intuition is the Swift ecosystem will struggle against conflation with particular platforms, i.e. iOS and OS X. This struggle is common among languages: C# with Windows and JavaScript with the browser come to mind. Historically, C faced a similar association with the Unix platform. Language-platform conflation makes it difficult to parse information from blogs, books, and Stack Overflow: the worst offender of course is JavaScript, where the presentation may be based on jQuery or npm dependencies even for things practical in vanilla JavaScript [or rather ECMAScript].
Outside of Google, using Go tends to be a decision made among a wider range of technical options, whereas Swift is very often "not-Objective-C." To put it another way, Google created Go as an in-house alternative for applications where Python, C++, or Java might previously have been the choice. That's a wide range of languages, and wider still considering the Java alternatives [Scala, Clojure] on the JVM.
A third factor I see with Go is the Clojure effect. The median Computer Science and Engineering knowledge in its community tends to be high relative to languages chosen based on platform limitations, institutional standards, or "this is the language I know".
I've left out Rust because I don't see it as likely to achieve the scale of adoption of Go or Swift... and saying that, I should clarify that I think platform binding, and a few billion platforms at that, means Swift is likely to instantiate more lines of code than Go. Clarifying further, I think Rust's adoption will be a matter of power-law distribution, not technical shortcomings.
Apart from Swift on iOS/mobile, which is gigantic, you are underestimating the push for Swift by enterprise players (IBM, SAP, ...). The next big frontier for enterprise applications is cloud, IoT and of course mobile (buzzwords or not). Java is increasingly being used by Oracle to extract revenue; there is no guarantee they will not change their mind and start charging fees for future versions as their other businesses (database, apps, etc.) shrink.
The major enterprise players are not going to let Java (under Oracle stewardship) continue to be the language for the next generation of enterprise platforms and be at Oracle's mercy. Swift is open source and Apple is enterprise-agnostic, so it's easy to see other enterprise players pile onto this language. IBM and SAP are already on board. If you remember, this is how Java came to dominate the enterprise space. Don't be surprised if Google releases some sort of compatibility layer for Android.
You are also overestimating Go's influence. It has been five years since the language's release, and it seems to have settled as a niche language like Clojure and topped out. Unless some major platform adopts it, it will remain in a niche. It certainly hasn't replaced (or even come close to replacing) Python, C++, or Java in any sense.
If you narrow that to just San Francisco, it's 19 out of 258, i.e. ~7.4%. I consider SF a leading indicator, i.e. other parts of the country and the world will follow that usage growth.
I grant you that it's still far behind Python/Ruby/JavaScript/Java, but those have had exponential growth for a long time, and six years is early. Those other languages took 10+ years to establish a mainstream presence.
Judging by many metrics (the number of significant pieces of software like Docker or Juju or CockroachDB, the number of jobs available, the number of Go-related conferences and meetups and their size), Go is easily past "niche language" status like Erlang/Haskell and even Clojure and Scala (both of which are riding the JVM wave) and well on its way to being a peer of Python/Ruby/Java in the next 5-10 years.
You are looking at a narrow slice of technology companies in SF. This is not representative of a general trend. I will add that you are right, it is not "as niche" as Clojure. If you look at broader indices:
1. At Tiobe, it is now at #42. It was language of the year in 2009 and has been slowly moving down
2. Look at jobs on dice.com: the same number of jobs for golang and clojure. I have been tracking them occasionally and they seem to have stayed around the same since last year.
Your use of Docker as a datapoint is interesting: while it is written in golang, far more people use it as a sysadmin tool, so why is that a factor for golang usage? That is a general issue, though: a lot of infrastructure might be written in golang (and should be), but that doesn't mean there will be golang jobs.
As for taking 10+ years to establish mainstream presence, things move faster nowadays, so unless there is an established or rapidly growing platform exclusively for golang, I wouldn't necessarily bank on it becoming hot in another 5-10 years just because it has been around.
Sorry if I was unclear. I intended to communicate that Go has been used in lieu of the three languages at Google. Unsurprisingly, since that was the intent behind developing it.
To the degree IBM and SAP drive enterprise languages, I suppose their consultants and sales staff could go whole hog behind Swift as a long-term strategy. I'm not sure I see the same strong parallels with Java: I remember Sun playing a role that uniquely leveraged its hardware and software synergy. Red Hat played a role there too, there having been at least as much Java adopted as an alternative to the Microsoft stack as by dictate of blue-suit consultants.
Which makes me wonder if the open-source rationale for Swift's adoption is significantly more compelling than predicting the adoption of C# on a similar basis, or a case that open-source .NET will replace the JVM. I suppose it's that I'm not sure I see IBM and SAP as having orders of magnitude more influence on enterprise tool selection than Microsoft, or top-down implementation of consultant recommendations as dominating bottom-up decisions by small teams.
Of course, predictions are hard particularly about the future and your mileage may vary.
There is no doubt IBM is making big Swift announcements, but IBM is also making huge contributions to Go. They are porting it to the PowerPC / s390 architectures. They are also contributing a Go-based blockchain implementation (~50k LOC) to the Hyperledger project.
IBM/SAP Swift support looks like riding along with the marketing of Apple/OSX/iOS. These tech consulting giants will definitely make a top-down push for Swift, but grabbing developer mindshare might not be so easy. At the very least, IBM's Swift-on-server support looks like a WebSphere-like play. At one point that was a big business, and maybe it still is, but given an alternative, a huge number of developers will prefer the alternative.
Unlike Java/Swift, which are often introduced or talked up by executives at Oracle/Apple, Go is not a corporate-mandated language, and Google execs do not talk about it the way they do Android. Google I/O does not have any Go-specific talks. The biggest Go conference is organized by a third party.
Go's influence need not be over- or under-emphasized. Hugely influential cloud projects such as Docker/rkt/etcd/Kubernetes/Juju and many more are written in Go.
Maybe they will differentiate via the flavor of their libraries? It seems Go is the go-to networking and low-level language, whereas Swift may be better for graphical applications and user-facing apps.
I've written a fun little OS X/iOS game in Swift 2.0 and found the language incredibly satisfying. Swift 3.0 seems to bring a ton of improvements, including fixes for some of the bugbears I encountered during that first project.
It's probably mostly personal preference, but I prefer Swift's approach over Go's or Rust's. Go in particular does a lot of things that I personally find obnoxious (capitalizing names to indicate export instead of using a proper keyword I find particularly egregious, and I find the := assignment operator annoying as well).
Swift is the first language I've learned in a while that had way more "OH yeah that's awesome!" than "What?! Why did they do that?"
I actually find Swift and Rust to be pretty similar in how they approach things from the end-user perspective (I think of Swift as application-level Rust); it's just that Rust targets a different market, one with quite a bit more complexity in its domain, and that's reflected in the language where needed.
> It looks like a really well thought out language
Well, I think it could have been more cleanly designed if it had been able to abandon the burden of Objective-C interoperability. (But Apple can't do this, obviously.) The standard libraries could also have been much cleaner if they didn't rely on NeXTSTEP. Also, I don't personally like the dichotomy of value types and reference types, but YMMV.
There are at least a few aspects of the language (eg, the special case for the first argument passed to a method) which would be completely bizarre decisions if not for the Objective-C compatibility requirements. Though I think they're tweaking that a bit.
> special case for the first argument passed to a method
Are you talking about the weirdness where the first argument doesn't require a label? If so, that decision has been made explicit in the last few dev snapshots of 3.0. Basically, if you don't want to require a label when calling a function/method, then that must be made explicit.
To be clear, that document contains brainstorming by some core Swift team developers. It's neither implemented in Swift nor officially accepted as a guideline for future development.
"The implementations of the thread-safety layer, the thread verifier, and programs that use the three concurrency libraries are available in the concurrency git branch."
Thanks, I'm going to have to take a long hard look at that and the repo. I've just spent a couple of weeks abstracting over async in Swift [0] and where I ended up looks a lot like that - to the point where some of the primitives have identical type sigs (mainly because the patterns used here are not novel - promise/future, async/await, etc). Be great to see this as a core language feature!
At least now we can see stuff coming, I had just finished writing unit tests for my OAuth library when Apple rolled an implementation into iOS.
[0] I'm aware of PromiseKit and co (and read a bunch of their sources), but I didn't feel like I had a good conceptual grasp on their implementation until I could create something similar and understand how it works and how to use it to solve my actual problems. Often I'll bin my own implementation afterward and use an extant library, so this is almost - but not entirely - quite unlike NIH.
Check out VeniceX[1]. It provides Go-style concurrency. The same group[2] that is working on this is also working on a GCD based concurrency library but the name escapes me at this time.
Yeah, it's very cool stuff! I'd just like to add that VeniceX is actually single-threaded, so it's safer than Go's model. It's similar to libuv in that way, but since it doesn't use callbacks it's much nicer.
CSP with channels in Go is just a fancy locking mechanism, something that Rob Pike and co. wanted to experiment with, and a marketing feature, of course. Nothing more. They don't really work that well. Their limitations, verbosity and quite complex implicit behavior make them hard to use for high-level things. I think mutexes are already more widespread and more encouraged in Go than channels. So, idiomatically and in the standard library, Go doesn't handle concurrency well at all; on the contrary, it has the worst kind of concurrency: shared-memory multithreading. And anything can be better than that. Even a wrapper around synchronous APIs ;)
It doesn't mean, of course, that Swift or any other language will be a Go killer. It means that most people never write complex concurrent programs to begin with and don't really care about good concurrency models. Sadly or luckily, this applies to the people who design languages as well, so concurrency tends to be a hype-driven mess. The only known exception is Erlang, where the whole language was designed to solve concurrency problems.
I agree. I've found Go channels to be disappointing in practice. There's lots of PR but little criticism of something so key.
The fact that they are implemented using mutexes is, from my admittedly limited, non-concurrency-expert perspective, inexplicable. The simplest use case, an unbuffered channel, should be trivial to implement using CAS. There's a bunch of research on implementing robust queues using CAS.
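For what it's worth, here is a toy sketch of that idea in Go itself: a single-use, CAS-only rendezvous built on `sync/atomic` (the `oneShot` type is made up for illustration; it busy-waits instead of parking goroutines, has no close or select support, and needs Go 1.19+ for `atomic.Pointer`, so it is nowhere near a real channel):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// oneShot is a toy rendezvous built purely on compare-and-swap.
// A nil slot means "empty"; a non-nil slot holds a pending value.
type oneShot struct {
	slot atomic.Pointer[int]
}

func (c *oneShot) send(v int) {
	p := &v
	// Spin until we can CAS the slot from nil (empty) to our value.
	for !c.slot.CompareAndSwap(nil, p) {
	}
}

func (c *oneShot) recv() int {
	for {
		// Spin until a value appears, then CAS it back to nil to claim it.
		if p := c.slot.Load(); p != nil && c.slot.CompareAndSwap(p, nil) {
			return *p
		}
	}
}

func main() {
	var c oneShot
	go c.send(42)
	fmt.Println(c.recv()) // prints 42
}
```

A production version would of course need to park waiting goroutines in the scheduler rather than spin, which is presumably where the runtime's mutex-based implementation earns its keep.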
Then there are the various ugly edge cases: closing a closed channel is an error (now you have to implement your own refcounting if you want to hand out a single channel to multiple consumers), a receive on a nil channel blocks (meaning you can't use nil as a signal for "I've closed the channel"), etc.
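Both edge cases are easy to reproduce in a few lines:

```go
package main

import "fmt"

func main() {
	// Edge case 1: closing an already-closed channel panics.
	ch := make(chan int)
	close(ch)
	func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("double close panics:", r)
			}
		}()
		close(ch) // panic: close of closed channel
	}()

	// Edge case 2: a receive from a nil channel blocks forever, so nil
	// can't mean "closed". Shown via select+default to avoid deadlock.
	var nilCh chan int
	select {
	case <-nilCh:
		fmt.Println("never reached")
	default:
		fmt.Println("receive on nil channel would block")
	}

	// By contrast, a receive from a *closed* channel returns immediately
	// with the zero value and ok == false.
	v, ok := <-ch
	fmt.Println(v, ok) // prints 0 false
}
```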
I really like GCD's style of enqueueing concurrent tasks, which I think is a concurrency model that often works more naturally than channels. A major difference between GCD and goroutines is that you can specify what queue to use. I'm looking forward to seeing what they come up with for Swift.
I think that Go's biggest mistake was making goroutines multithreaded. Libmill takes a much safer approach by making everything still run on one thread (and by taking advantage of libmill, Venice brings a similar scheme but with the added safety).
Built in, it's threading and Grand Central Dispatch. However, there are a shit-ton of good-quality libraries available for Swift, PromiseKit being the one I'm most familiar with. There are others that offer CSP and channels.
Also Swift 4 will contain it as a native part of the language.
Yes, currently. But in the link, it states that "For Linux, Swift 3 will also be the first release to contain the Swift Core Libraries." And GCD (or libdispatch) is part of the Swift Core Libraries :)
I've built a few simple apps running on Linux that have worked great.
Check out Zewo[1]. They've been doing a lot of great work to make Swift 3.0 on the server work really well. Also check out VeniceX[2] for concurrency and Open Swift[3] for cross-project standards.
I don't know - this is a language that can't easily handle concurrency. Instead you mess around with dispatchers. And then you're dealing with thread-unsafe structures, objects, arrays, etc.
This is such a critical thing with today's interconnected apps that I have no idea how they did not worry about it from the start. Having had the pleasure of dealing with threads, mutexes, dispatchers, execution contexts and the like, there is no way I'd pick up a language that hasn't evolved in that regard, especially on a server. Even JavaScript has this figured out with stage-3 async/await.
This is inaccurate. There is no term in either Swift, or the Objective-C runtime named 'dispatcher', which makes it seem like you are unfamiliar with the platform.
There are dispatch queues, and the whole purpose of them is to abstract away threads and locks. Using them does not require you to use threads, mutexes or execution contexts.
Yes, it would be better if Swift incorporated language level concurrency, but it isn't a priority because the platform provides a good solution.
As others have said, this is misleading and inaccurate. GCD, which will be cross-platform as of Swift 3, is very pleasant and robust to work with and abstracts the threads, mutexes, and dispatchers away from the developer. PromiseKit and Venice are third-party libs that also provide other, better concurrency patterns.
Also, GCD can be faster than locking with a mutex because it doesn't have to trap into the kernel. So now in my app I tend to protect access to variables with queues instead of @synchronized. The only downside is that it can't replace recursive locks.
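Interestingly, the rough Go analogue of "protect a variable with a serial queue" is a monitor goroutine that owns the state and serializes operations through a channel. A sketch under that analogy (the `counter` type here is invented for illustration, not a standard pattern from any library):

```go
package main

import "fmt"

// counter funnels all access to its state through one goroutine,
// roughly like guarding a variable with a private serial dispatch
// queue instead of a mutex.
type counter struct {
	ops chan func(*int)
}

func newCounter() *counter {
	c := &counter{ops: make(chan func(*int))}
	go func() {
		var n int // owned exclusively by this goroutine
		for op := range c.ops {
			op(&n) // ops run one at a time, in FIFO order
		}
	}()
	return c
}

// incr is synchronous, like dispatch_sync onto a serial queue.
func (c *counter) incr() {
	done := make(chan struct{})
	c.ops <- func(n *int) { *n++; close(done) }
	<-done
}

func (c *counter) value() int {
	res := make(chan int)
	c.ops <- func(n *int) { res <- *n }
	return <-res
}

func main() {
	c := newCounter()
	for i := 0; i < 100; i++ {
		c.incr()
	}
	fmt.Println(c.value()) // prints 100
}
```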
I did not realize that we were this close to Swift 3?!
This is really awesome news. I'm pretty excited to see what they do in 3.0, I'd love to have another language under my belt without fooling around with NS-nonsense. That they might actually take Linux desktop seriously is really exciting to me also.
I'd love to see a widely-used tool where games and popular apps that would traditionally be put out on Win/Mac are able to be used seamlessly on Linux also. On top of that, I definitely don't want to write Java, I definitely don't want to write ObjC, etc.
Interestingly enough, I think this might even compete with Dart, which I suspect will end up getting Wasm support and ultimately replacing Java for Android development... (just a wild shot in the dark). So I can't wait to see what the Swift team starts brewing up in the concurrency department.
> I did not realize that we were this close to Swift 3?!
WWDC is June 13-17, so they basically have to have something ready by then. Not so coincidentally, June 13 is "4-6 weeks later" from the May 12 date for making the release branch. Similarly, the late 2016 release for the final version of Swift 3.0 is probably actually mid September, since the version of Xcode with iOS 10 support has to ship before the next iPhone model.
It's good that they are making these breaking changes now. I like the language and the changes they are making.
The next focus should be on performance in my view. String processing in particular is too slow to be usable for server-side tasks.
Also, I wonder what to do with Foundation in the long run. It's huge but its design is extremely antiquated. At some point Apple will have to shed that past where URL manipulation and file system operations are methods on a string class.
Long story short, it looks like the reflection machinery in the standard library is improperly being used to construct String instances. Doing so, while probably not sufficient to account for the entirety of the awful performance, is probably quite expensive. This looks like a bug and I'll try to dig deeper into it this week.
Swift also badly needs a native version of NSCharacterSet, even if only for programmer ergonomics.
The developer in charge of the standard library has mentioned that that team intends on redesigning the String API in the near future; this should provide an opportunity to reexamine the performance implications of the current implementations.
After a bit more investigation, I found that if you replace the following code:
result.append(begin == eos ? "" : String(cs[begin..<end.successor()]))
with this:
if begin == eos {
    result.append("")
} else if let str = String(cs[begin..<end.successor()]) {
    result.append(str)
}
runtime goes down from ~3 seconds to ~2.2 seconds.
This is due to a rather insidious API design decision:
init?(_ view: String.UTF16View)
constructs a string out of a UTF16 view, but it can fail. If used in a context where its type is inferred to be non-nullable, the following generic reflection-related init is used instead:
init<T>(_ instance: T)
I'm going to bring this up on the list and see if there are better ways of doing things.
As far as I can tell most of the rest of the time is spent in the Swift native Unicode --> UTF16 decoding machinery, and NSCharacterSet.
OK I found the cause of that weirdness. I had slightly changed my test code since I posted it here weeks ago. After undoing that change I'm seeing the same thing you do.
But this raises more questions, because what I changed is the test code generation. I'm now generating a million different strings instead of adding the same string a million times (note the \(i) at the end of the string):
func generateTestData() -> [String] {
    var a = [String]()
    for i in 0..<N {
        a.append(",,abc, 123 ,x, , more more more,\u{A0}and yet more, \(i)")
    }
    return a
}
The running time of generateTestData() isn't what we measure but apparently the performance improvement you found only works if the same string is used every time. Otherwise performance drops.
One thing I've noticed is that performing the string scanning operation is relatively cheap. (If the splitAndTrim code is modified to not use Strings and to return a [String.UTF16View], the runtime is around 1.2 seconds.) It's the process of building Strings out of those UTF16 views that is destroying performance.
I still don't know why changing the way the input data are constructed would have that effect, except to guess that the underlying representation is different somehow. I'll file a ticket.
This looks to me like memory allocation / reference counting is at least part of the problem. Slicing a UTF16View to get another UTF16View most likely doesn't involve any dynamic memory allocation at all.
These are small adaptations to Swift's semantics and naming guidelines. There is nothing in these documents that hints at the sort of root and branch redesign of the APIs that I think is necessary.
Cocoa as a whole is obsolete and needs to be replaced by something better. It is the epitome of everything that is wrong with 1980s/1990s style object oriented design: Huge classes, massive coupling, very little cohesion.
What are some examples of "huge classes, massive coupling and very little cohesion"? Barring a few example like NSString UIKit Additions, I thought it's well thought out.
E.g. Swift standard library already replaces dictionaries and arrays without adding similar bloat. I think/hope they want feature parity first and then will start moving towards what you propose.
While it's true that "the implementation of [all but 3 of] these types will use the existing reference type API", this means (AIUI) that there will now be a swifty shim between swift and Foundation rather than the types being bridged directly. I would expect that would make the job of replacing Foundation with something appropriately modern easier in several dimensions.
You can already do URL manipulations with NSURL and NSURLComponents; it took time, but almost all file system methods can take an NSURL now instead of an NSString.
I've had a quick look at the 3.0 changes and didn't find major features related to protocols. I'm thinking of things like more complex generic type constraints, or a mechanism to code better against protocols (see all the videos that talk about Swift protocol-oriented programming "in real life" and how annoying those limitations are).
I would love it if Swift could become the next-gen C++ for cross-platform development. The Apple platforms/Linux advances are great but is it ever going to be adopted on other platforms? I don't think Apple would care, it's mostly Lattner & the team behind pushing the language. (bless 'em :))
Swift is more of a Java-ecosystem-style language; Rust is the better candidate for a next-gen C++. Otherwise Obj-C would have become popular back when it tried to compete with C/C++. There was an article mentioning this.
> compiles to native code, specific to a platform
> deterministic runtime performance (no GC, no JIT)
> lightweight "exceptions"
Seems way more similar to C++ than Java.
Also Obj-C's success or failure as a C replacement (I can't imagine using it as one... it's not going to run well on embedded devices) has little to do with swift's future path besides that both languages are heavily used at apple.
The key difference may be that Chris Lattner is basically a compiler superstar, and I think he could make it happen if he chose to.
I was hoping to see some changes to the error handling. The extensive use of implicitly unwrapped properties always makes me nervous because we basically still have the null pointer dereference problem. Also, optional chaining seems really well geared to the use case where silent failures are OK, but things get ugly quickly when trying to handle individual errors. Maybe I've just been spoilt by Rust, which I've been learning at the same time, but its approach to error handling seems much more holistic and less tacked-on than Swift's. I'd definitely like to see more done here, because that feels like the only complaint I have with the language.
Anyone know how Swift performance compares to Scala? They're pretty similar languages, I've heard. I'm pretty familiar with Scala, but not Swift. Is there any reason to switch to Swift?
The main thing to know about Swift performance is that it varies hugely depending on what you're trying to do. I think it will take some time for the implementation to mature to a point where it becomes less hit and miss performance-wise. Swift as a language has all it takes to create a consistently fast implementation.
The wildcard is probably the decision to go with reference counting over a tracing GC. We'll see how that works out. There are good arguments on either side.
They aren't so similar. Feature-wise, Swift is much closer to Kotlin, which is (on purpose) significantly simpler than Scala.
Also if you're using Scala, there's a huge ecosystem of libraries and frameworks that just doesn't exist in Swift. If you want native compilation, just wait for http://www.scala-native.org/
I haven't seen any data. Scala has been around for a while, which means more time to spend on optimization, so it's probably faster at this point. Because of garbage collection on JVM, Swift has the potential to be more performant though.
Unless you are trying to build some really high performance shit, what really matters is developer productivity, which is abysmal for server-side Swift right now. Getting better every day though.
I know you did not mean to say that Scala code is non-performant.
Well-written JVM code is highly performant, and with GC algorithms like concurrent mark-and-sweep or Garbage-First, plus a good allocation of eden, tenured (etc.) space, you should really never have a stop-the-world pause because of GC anymore.
While it's true you should never stop the world with CMS in a server-side program where you can control memory tuning, young-gen collection is still stop-the-world and can cause multi-millisecond pauses on ~8GB heaps.
I really don't like the way Objective-C expresses everything, so I was looking forward to Swift.
This tells me that Swift is not ready for prime time 3 versions later:
"Swift 3.0 is a major release that is not source-compatible with Swift 2.2. It contains fundamental changes to the language and Swift Standard Library. A comprehensive list of implemented changes for Swift 3.0 can be found on the Swift evolution site."
Fundamental changes??? Let me know when it is stable across versions.
The reason there are so many fundamental changes right now is because they want to stuff all breaking changes into Swift 3, and be stable after that (naturally with new features going onwards, though). Once Swift 3 comes out the language will be very stable.
Maybe I'm too old-school, but the removal of the C-style for loop is quite surprising.
for-in doesn't tell you the index of the current object so you'd have to manually keep track of the index.
Also, x++ is a common syntax in many languages and I think a new developer looking at Swift would try that first before using the other syntax (x += 1).
I think what a new developer would try first isn't a particularly good yardstick for language design. But I also think that the removal of ++ causes some problems.
IMHO ++ is trivially readable, but that's only because I happen to be familiar with C/C++'s pre/post increment idiom. An idiom which is well known for causing confusion and bugs, especially - as you say - for devs new to the language. For me the compelling argument for removing them is that they're a Cism and don't fit with the semantics of any of the other operators. Chris Lattner strongly agrees with you and explains his reasons here https://github.com/apple/swift-evolution/blob/master/proposa...
A beginner should be made to learn something only if they get an advantage out of it in readability or type safety or performance or something else. I don't see that being the case for pre- and post-increment. If anything, I find:
i += 1;
String name = names[i];
more readable than
String name = names[++i];
where I have to pause and remind myself of pre- and post- increment and which one is being used here and mentally translate the code into the above version.
Imperative languages already have a way of specifying execution order: it's the order of statements in your file. Let's reuse that rather than making things more complex.
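Go, for what it's worth, made a similar call: it kept `++`, but only as a statement, never an expression, so the sequencing has to be spelled out line by line anyway:

```go
package main

import "fmt"

func main() {
	names := []string{"ada", "grace", "barbara"}
	i := 0
	// In Go, i++ is a statement, not an expression: forms like
	// names[++i] or x := i++ simply don't compile, which forces the
	// increment and the use onto separate lines.
	i++
	name := names[i]
	fmt.Println(name) // prints "grace"
}
```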
I don't see a difference in functionality — both code snippets I gave do exactly the same thing.
As for brevity, I think clarity is more important. We should optimise for the time it takes to read and understand the code (clarity), not just read (brevity).
The best abstractions and programming language features are both brief and clear, like Python's list comprehensions. I find
[name.uppercase() for name in names if name.startswith("a")]
to be clearer than Java's
List<String> uppercaseNames = new ArrayList<>();
for (String name: names) {
if (name.startsWith("a")) uppercaseNames.add(name.toUpperCase());
}
So, the best programming language features enhance both brevity and clarity. When that's not possible, I'll take slightly longer but clearer code over short but confusing one.
I've seen ++i vs i++ cause so many bugs that whenever I see it, I stop to ponder if the author got it right. It's like in Javascript where I stop to check if the author intended for `if (x)` to take the false path if x is "" and 0.
It's death by a thousand papercuts for my mental load.
Nitpick: for...in is not providing access to the index, it's the enumerate() method returning a collection of tuples, the first element of which happens to be an index.
for-in is elegant, and you can use enumerate() to access the index. However, if we had to do reverse for-loops in Swift 3, we would have to use for-in with either from.stride(to:by:) or reverse(), and to me that is not as elegant as C-style for loops.
Interesting, it feels significantly more elegant to me as you're simply telling the language what you want (reverse iteration) and it handles the exact details, and works for any reversible iterable.
> for-in doesn't tell you the index of the current object so you'd have to manually keep track of the index.
Functional decoration trivially provides that. Just zip a counter with the backing iterable, and modern languages generally provide that OOTB as a variant of `enumerate()` e.g.
for i, item in enumerate(it):
    # stuff
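Go, for example, builds this into the range clause itself, so the index comes for free without any wrapper:

```go
package main

import "fmt"

func main() {
	items := []string{"a", "b", "c"}
	// Ranging over a slice yields (index, element) pairs directly,
	// the moral equivalent of enumerate() in other languages.
	for i, item := range items {
		fmt.Println(i, item)
	}
}
```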
> Also, x++ is a common syntax in many languages
Meh. It's generally present in B derivatives (via C) and that's pretty much it. Most languages do just fine without it.
I was hoping all Swift professionals would be motivated to contribute patches, features and performance tuning to Swift 3.0 at a swift pace, instead of waiting until 2017. The release process is such a lengthy discussion; it could have been shortened, with more time spent on improving the code.
What I really miss is a proper compiler for Swift. I've encountered two bugs in the last few days: one in the IRGen and the other one in the TypeChecker. That can be really annoying. And that error "expression is too complex to evaluate" is just ridiculous.
It'd be nice if they stopped breaking the language at some point. I don't want to rewrite my program every year. I would have called that future stable version 1.0 (like, say, Go and Rust did).
Version 3.0 is supposed to have a much stabler API going forward. They've been making a lot of changes even since 2.2 that really clean up the standard library.
Today I submitted a proposal for a "memoization"/"memo"/"memoize" keyword in Swift 3, for a function that runs only once throughout the lifecycle. Repeatedly executing the function would just return the same result without rerunning it every time. So far, I don't see my submission visible for discussion.