Go vs. Swift [pdf] (github.com/jakerockland)
166 points by jakerockland on Jan 18, 2017 | 126 comments



I don't see any "strength" in the classical object oriented programming model as found in C++ or Java. Actually, in modern programming composition is considered superior to inheritance.

The interface concept of Go makes programming with composition much more flexible and powerful than with the class model. The author skips over this Go-specific and original interface typing. It provides an equivalent of multiple inheritance without all the complications of C++, complications that lead most OO languages to forbid multiple inheritance altogether.

Go is a very original language in this aspect as well as with concurrency. Understanding and mastering these properties goes beyond simple syntax analysis.

To me the most remarkable property of Go is its simplicity. As I explained to a friend who is a strong advocate of D, the difference from other programming languages is like dealing with a spoken language of 1,000 words instead of 10,000 words. It's true that the language with 10,000 words is more expressive and richer. But the effort required to learn, read, and write a language of 1,000 words is much lower than with a language of 10,000 words. I'm beyond 50 years old, and to me this makes a huge difference. The best way to express it is that with Go programming is fun again. I hope that Go will preserve this simplicity. At the beginning Java was simple too. They later killed it, to the point that I don't want to deal with Java code anymore.


OOP is primarily about polymorphism (subtyping) and encapsulation; code reuse by means of inheritance is just a nice-to-have, so that comparison doesn't make sense.

You can compare OOP with parametric polymorphism, you can compare it with type-classes. Heck, OOP isn't necessarily about subtyping and we could be talking about row polymorphism (e.g. OCaml) which has some really nice properties.

> It provides an equivalent of multiple inheritance without all the complications of C++, complications that lead most OO languages to forbid multiple inheritance altogether.

Except that it doesn't solve the fundamental problems with OOP, because it's still essentially OOP with subtyping ... and lots of marketing. So coming from Go you can be excused for thinking that the last 30 years of research have been for nothing.


> Except that it doesn't solve the fundamental problems with OOP, because it's still essentially OOP with subtyping

I'm not sure what fundamental problems you're speaking of, but what's nice about Go vis-à-vis C++ is that classes are fundamentally open. Personally, I think it's a win. By the way, I'm a C++ dev and I love C++ too. I just think Go has really interesting ideas.


By "classes are open" you mean that you can add methods to classes at will by just implementing a func with the right receiver?

In that case they're not truly open: Go does not allow you to declare methods on receivers from other packages, which means you can't extend anything which wasn't written by you. Which makes open classes almost useless.


`Go programming is fun again` I found this expression really difficult to accept :) Ruby and Python are languages that are quite fun for me.

Also, regarding `Go is a very original language in this aspect as well as with concurrency`: CSP is quite old, too.

Dlang is quite a nice and easy language to start with. If I compare D vs. Go, I will say that you can see that Go has a lot of money and people behind it. And after I wrote more than a simple library, I got tired of some of the language decisions; it's not about interfaces, it's just the small stuff. But again, I tend to write code that is more OOP :)

Please excuse me if my reply offends you; this is just my personal opinion after working with Go a few years ago.


> Go is a very original language in this aspect as well as with concurrency

Actually it isn't.

Object composition/aggregation is very old, and composition as the recommended paradigm dates back at least to COM (that's just from memory; it's probably older). Granted, delegation in COM aggregates was based on conventional interfaces and not structural typing, and it was implemented using ugly C macros, but dynamic languages made it easier and far more flexible. Kotlin even made it part of the language: https://kotlinlang.org/docs/reference/delegation.html

What Go has, despite all the hype, is not real composition with delegation though, but a crippled form of inheritance. With proper composition, your child objects are named, and you can choose which interfaces you want to delegate to which object. This doesn't just give you better control over which functionality you wish to expose, but also avoids conflict when two member objects implement the same interface. In Go you essentially get the same nasty diamond problem of multiple inheritance, with none of the benefits. Sure, you can disambiguate your method calls, but you could do the same in C++ and Python.

As for Go's approach to concurrency, it obviously isn't new. The CSP model was implemented by Rob Pike in at least three different languages previous to Go (Newsqueak, Alef, and Limbo), and the theory behind it dates to research by Tony Hoare in the late 1970s.

I won't argue with you that Go is simple, but it's not revolutionary. As for the fun, I think that really depends on the individual. I know many people who think Go is fun, but for others, like me, it's about as fun as fingernails on chalkboard. There seems to be a strong correlation between people who dislike Go and people who like functional programming (especially the ML camp, but also the LISP camp).

For me coding in Go is plain drudgery: error handling boilerplate, noisy imperative looping (and in general very low SNR) and almost no ways to create abstractions. Yeah, you can make very useful and down to earth software in this language and it's currently satisfying much of my microservices needs at work. But it isn't what I would call fun.


> With proper composition, your child objects are named, and you can choose which interfaces you want to delegate to which object. This doesn't just give you better control over which functionality you wish to expose, but also avoids conflict when two member objects implement the same interface.

Oh but you can avoid that, it's part of the language, see this link:

https://golang.org/doc/effective_go.html#embedding

Typically, you can have something like this:

    type lockedReader struct {
        io.Reader
        sync.Mutex
    }

    lr := lockedReader{someReader, sync.Mutex{}}
    lr.Lock()
    lr.Read(...)
    lr.Unlock()

By default, methods will be delegated to the first field that has the method. If you want something else, you are free to override this default behavior.


No they're not. io.Reader and sync.Mutex do not embed the same type and do not have conflicting methods with the same name.

This is what a diamond looks like:

  package main

  import "fmt"

  type Base struct {
    Foo int32
  }

  type Child1 struct {
    Base
  }

  type Child2 struct {
    Base
  }

  type Diamond struct {
    Child1
    Child2
  }

  func (c *Child1) DoSomething() {
    fmt.Println(c.Foo)
  }

  func (c *Child2) DoSomething() {
    fmt.Println(c.Foo)
  }

  func main() {
    c := Diamond { }
    c.Foo = 42       // Doesn't Compile
    c.Child1.Foo = 10
    c.Child2.Foo = 20
    c.DoSomething()  // Doesn't Compile
  }


"[I]t's about as fun as fingernails on chalkboard."

Nailed it.


"The best way to express it is that with Go programming is fun again."

I beg to disagree in the strongest possible terms. I found programming with Go to be the opposite of fun. It's a highly opinionated language which gives people poor libraries, poor tooling, and the worst of many possible worlds. It's as if someone who couldn't see past C wanted to be slightly Erlangish.

No thank you, I will go back to Common Lisp and Haskell any day of the week from this abomination of a language. I've been exploring Clojure (on JVM, CLR and JS) as a practical alternative to CL lately.


> I don't see any "strength" in the classical object oriented programming model as found in C++ or Java. Actually, in modern programming composition is considered superior to inheritance.

I work in C++ and still use inheritance (although I generally prefer composition). One advantage over composition is that it's less typing. For example, if I'm using public inheritance to express an is-a relationship between a base and derived class, all of the public methods of the base are available without having to implement a method in the derived class that just forwards the call to the base class.


My understanding is that Go provides for this by a mechanism called embedding. You can place a field of type Inner in a struct Outer, and Outer then acquires all of Inner's public interface.

I think it's a nice idea. In general, Go seems to provide the language mechanism without the "moral" aspect. In other words, it saves you typing without forcing you to accept the OO paradigm (Liskov substitution etc.).


It's always felt like something you should be able to do in C++. That is, tell it to apply interface X to class Y. If the interface and class have matching function signatures, the mapping should be automatic.


I found the study very shallow and superficial. For example, who cares how many people have starred a project on github? And why should I care how many lines "hello world" is in a language [1]?

I would rather see a discussion about performance, platform support, maintainability, governance and so on.

---

[1] someone please create a new programming language where the empty file means "print hello world". Since you can't do any better than that it would once and for all put an end to this stupid benchmark.


The author notes here (in the HN thread) that it's a very very cursory overview and there are plans to add more later given time. This could have been a blog post (or series of blog posts), so maybe the academic-paper feel was misleading.

While I'm not sure this content should be on the front page, it never was represented as a very deep comparison. I do admit that when I read it through I expected much more.

Also, how do you determine what libraries to use and what projects to invest in these days? Do you always do a full audit of every project? I know I don't have the time, so I use stars on a Github project just like one might use word-of-mouth. If a ton of people think a thing is useful, it's probably at least a little useful. Of course, that kind of thinking can be dangerous (see: javascript ecosystem), but a lot of the time, it's "good enough".

BTW, the paper does cover a tiny bit of information regarding differences in platform support when it covers how Swift deals with concurrency.


> I know I don't have the time, so I use stars on a Github project just like one might use word-of-mouth.

Up to a certain point, stars can be useful, as they help you discover projects other people have found useful. But saying project X is better than project Y because it has more stars? What does it show, other than that more people using X are on GitHub, or that people using X are more prone to push the star button?

For reference, bootstrap has 100K stars, Linux has 40K...


Oh I definitely am not trying to imply you should use a project just because it has more stars -- maybe the right term was "hype" instead of word-of-mouth


Well... there is stuck....

https://stuck.readthedocs.io/en/latest/


> And why should I care how many lines "hello world" is in a language?

Because it's a good proxy for how good the language is for writing short, (often) throwaway programs.


I don't even think it's that. It's a good indicator of the language's ability to print a string constant, and nothing more.


The more I use Swift, the more I've grown to appreciate the concept of Optionals and how the compiler enforces their usage. I have found that my Swift code is more robust and explicit than my ObjC or Android Java code, and the same goes for the team members around me.


While I'm sure you're aware, Java 1.8 does have Optional support (https://docs.oracle.com/javase/8/docs/api/java/util/Optional...). Once Jack is completed (https://developer.android.com/guide/platform/j8-jack.html) I assume you'll be able to use it too. Of course, you can also just use a library (like Guava) that makes Optionals available (and somewhat close to the spec so you can phase it out in the future if need be).

Some quick googling found me: http://fernandocejas.com/2016/02/20/how-to-use-optional-on-a... ...

One thing I've found is that once you start using Optionals, they tend to infect (I don't mean this in a bad way) the rest of the code. You start to want Optionals everywhere, and start writing maps/orElses everywhere, which I personally love. The error-handling paradigm changes a little bit (to errors-as-values, which I personally appreciate as well), but some are turned off by having optionals everywhere.


Maybe I'm wrong about how it works, but I feel that Optionals in Java are a bolt-on solution that only solves half the problem that the same concept solves in Swift (or Haskell, or Rust, etc.)

An Optional in Java is just another value, and nothing in practice stops that from being null. I.e. a method that returns an Optional can still return an actual null if poorly implemented. So the major benefit Optionals give you in Java is a typed declaration that the method may return an "empty" value, which is certainly helpful for knowing what cases your client code needs to handle, but it does not guarantee that the method will never return null. It reduces the probability of unhandled null pointers, but it does not eliminate them.

On the other hand, Swift & co. guarantee at compile time that a function which says it returns an Optional will always return an Optional, never a null, and, better still, if a function says it returns some non-optional value, it is guaranteed to never return null.
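
For instance (a minimal Swift sketch; the function names are made up):

    // String? may be nil; String may not -- the compiler enforces it.
    func find(_ name: String) -> String? {
        return name.isEmpty ? nil : name
    }

    func shout(_ name: String) -> String {
        // return nil  // would not compile: nil requires an Optional type
        return name.uppercased()
    }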

It's great that Java is pushing for this style of code, but I'm afraid without compile-time guarantees of return values it will always feel less powerful there.


I completely agree with your point -- you can't compare the power of compile-time checking to Java's Optionals, for the most part.

However, I tend to think that what's important about Optionals is the spreading of the notion of thinking critically about failure modes around improper input/output. "Defensive" coding is often considered a mid-range skill (at least I think so), and IMO it's because a lot of junior programmers just assume that if some method gives them ObjectThing a, it's going to be there, and not null. I think merely seeing Optional<ObjectThing> a encourages movement in the right direction as far as program correctness, a road that will almost surely tour through concepts and languages that take that kind of correctness more seriously (ex. Swift, Haskell).

Also, Optionals are a light introduction to functors which is nice.

For more concrete actionable stuff please see other comment (https://news.ycombinator.com/item?id=13433335), there are tools that can do this at compile time for java


Checked exceptions also force thinking critically about failure modes, but they seem to be an unpopular language feature, and few new libraries seem to use them. No other popular language has adopted them.

Why are optionals good and checked exceptions bad? (I'm certainly not implying that one replaces the other; I'm just saying that they both have an origin of forcing programmers to think about errors.)


Because checked exceptions are incompatible with the idea of standard interfaces. You can't use interfaces like List, Iterable, Runnable, etc in a world with checked exceptions; you'd have to declare special versions of each of those interfaces for every possible exceptional condition. Since we need standard interfaces to let modules work together, everyone ends up catching and wrapping checked exceptions in runtime exceptions. Any potential value is lost.

Exceptions are exceptional; the whole point is that you aren't supposed to have to think of them in normal program flow. Go gets this wrong too, forcing three extra lines of `if err != nil {` for pretty much every single function call -- and with the added bonus of destroying stack context with each return.


The compiler should be helpful.

I love checked exceptions, hate Optionals.

I'll be happy when the language pimps stop trying to make Java look and act like a scripting language.

Edit: I apologize if I've offended anyone with my opinions. I hope this helps. http://bit.ly/1S1943H


https://www.reddit.com/r/java/comments/5dgm96/are_checked_ex...

They generally also break encapsulation (though you can kind of work around/sidestep this by making top-level visible errors more opaque)

Do you have a good reason you'd like to share for liking checked exceptions? It seems like the reason you like them is that they're enforced by the compiler -- but I don't think this makes them the right choice, as the compiler could easily also enforce exhaustive handling of an Optional (or of sum types, as in other languages).

If we limit ourselves to the case of Java 1.8, it is a fact that Optionals are not checked by the compiler, so there is a solid benefit to using checked exceptions: there is a compile-time check of whether the exception was accounted for.


"...a good reason you'd like to share for liking checked expressions?"

Because they're the best solution for the problem. In Java.

The problem with Java's checked exceptions is misuse; turgid overwrought frameworks which obfuscate. Work closer to the metal. Eschew Spring, JPA, Annotations, aspects (AOP). All terrible ideas (making Java more dynamic) implemented terribly. (Quoting myself: Spring is an exception obfuscation framework.)

If you have to catch an exception you can't handle, revisit the design.

Your linked comment references sum types (Rust, Haskell) and multiple return values (Go). I haven't used those, so I can't comment.

The only solution better than checked try/catch/finally might be proper state machines. Experimenting with that is (far down) on my to do list. I'm also curious about Erlang/Elixir (and CSP in general), which uses a completely different strategy for error handling.

*"... Java 1.8, it is a fact that optionals are not checked by the compiler..."

Java's Optionals are a bad idea implemented badly.

The Correct Answer is using the Null Object pattern. Or just use LISP/Scheme. Also good ideas are intrinsic path expressions (vs method chaining) and pattern matching.

---

Top-thread, someone shared the "composition over inheritance" heuristic. This is more correct. Null Object and composition are like peas and carrots.

You mention "breaking encapsulation". So? Choose composition, make most objects dumb, implement the smart/active parts as yet another Interpreter. Problem solved.


> Also, Optionals are a light introduction to functors which is nice.

Optional is not a functor, in fact it violates the functor laws quite blatantly.


I didn't assert they were functors. Just that seeing `map` on something that isn't a list should be somewhat eye-opening to someone who just casually uses it and isn't too familiar with functional programming, that was the point.

Also, when I wrote "functor" I was thinking of https://wiki.haskell.org/Functor, is that what you're thinking of? In that case, which of the rules does it break? if not, what definition of functor were you thinking of?

Optional.of(whatever).map(identityfunction) will definitely give you back an Optional<Whatever>... Am I missing something fundamental? Also Optional.of(whatever).map(f).map(y) is equivalent in return to Optional.of(whatever).map(f . y)... (of course that's not the java syntax for composing functions but I digress)


Consider two functions (excuse my Scala):

  val str: Function[String, String] = s => if (s.length > 3) s else null

  val num: Function[String, Integer] = s => if (s == null) -1 else s.length
With Optional, you receive different results depending on whether you call map twice, or combine the functions first:

  scala> Optional.of("Foo").map[String](str).map[Integer](num)
  res12: java.util.Optional[Integer] = Optional.empty

  scala> Optional.of("Foo").map[Integer](str.andThen(num))
  res15: java.util.Optional[Integer] = Optional[-1]
This is incorrect and violates the functor law.


Use JSR305 with Findbugs or SpotBugs to solve the problem at compile time. Using Optional in Java does have a problem with extra object overhead, so if you want to avoid that, you should use another language like Kotlin or Scala.


Scala's Somes cost an object as well, though (I believe) None is free in both Java and Scala.


You're right. Some is not an AnyVal.


Java has Optional support, yes. However, it lacks a safe-navigation operator like `?.`, which actually makes code much clearer.


> the more I've grown to appreciate the concept of Optionals

You may also enjoy a language that supports the generalization of Optionals: Algebraic Data Types (ADTs). Optionals are a single, limited application of ADTs.

Optionals allow you to shift null pointers (and null pointer exceptions) to the type level. So instead of having to worry about a function returning null, you just deal with Optionals whenever a function returns one.

There's another ADT, usually called "Either" or "Result", that allows you to shift exceptions to the type level. You can see from the type of the function what kind of exception it might return, and you deal with exceptions like any other value instead of via a special exception handling mechanism.
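
A minimal sketch of such a type in Swift (hand-rolled, since the standard library doesn't ship one; the names are illustrative):

    // The failure mode is part of the return type.
    enum Result<Value, Failure> {
        case success(Value)
        case failure(Failure)
    }

    func parseAge(_ text: String) -> Result<Int, String> {
        guard let n = Int(text), n >= 0 else {
            return .failure("not a valid age: \(text)")
        }
        return .success(n)
    }

    // Errors are handled like any other value:
    switch parseAge("42") {
    case .success(let age): print(age)
    case .failure(let msg): print(msg)
    }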


One of the things that makes Optional so pleasant in Swift is the syntax support. This includes optional-chaining, if-let and guard-let unwrapping, and some convenient operators.
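
A small sketch of these (User/Address are made-up types):

    struct Address { var city: String }
    struct User { var address: Address? }

    let user: User? = User(address: Address(city: "Oslo"))

    // Optional chaining: nil anywhere in the chain makes the
    // whole expression nil.
    let city = user?.address?.city

    // if-let unwraps for the scope of the branch.
    if let city = city {
        print("lives in \(city)")
    }

    // guard-let unwraps for the rest of the enclosing scope.
    func cityName(of user: User?) -> String {
        guard let city = user?.address?.city else { return "unknown" }
        return city
    }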

For example, in Haskell, by default you can't compare an Optional with the value it wraps: `5 == Just 5` fails. But in Swift this works like you would want.

All that is to say that Options in Swift are a bit nicer than what you could get with pure ADTs. It's a similar story for Swift's syntax-level error handling vs type-level monadic error handling. (The downside of course is that the compiler has to be specially taught about these - but maybe you want that anyways, e.g. for Rust's null-pointer optimization.)


> This includes optional-chaining,

Haskell has bind (>>=) and do-syntax; Rust has `and_then`. This is pretty standard in any ADT-supporting language.

> if-let

Most languages with ADT support have good pattern matching that subsumes if-let syntax and allows you to do other things as well. Swift's pattern matching, such as it is, is a bit of a mess.

> guard-let unwrapping

Haskell has `guard` and `MonadFail`, which address the use cases for this in a more principled way. `guard` for boolean matching (like equality) and `MonadFail` for structural matching (like pattern matching on a constructor).

Rust has (?) and `try`, which are syntactic sugar around a simple match/return statement that addresses most use cases of guard-let.

Obviously there are going to be small differences between these implementations, but Swift doesn't really excel here.

> `5 == Just 5` fails.

As it probably should. Supporting such implicit casting could lead to some obvious confusion and buggy behavior. Ideally, the fact that "a == b" typechecks should indicate that "a" and "b" are of the same type.

In Haskell, you would just do `Just 5 == x` if you want to check that something is, in fact, `Just 5`. If that really wasted a lot of space somehow, you can define something like `x ==: y = Just x == y`.


Rust copied 'if let' from Swift (~2 years ago) despite having decent pattern matching; community consensus today is that it's highly worthwhile as a feature. There have also been proposals to add 'guard let'. So, while I don't have enough Swift experience to judge its ADT support overall, I wouldn't cite those features as evidence that it's wanting. They might not be as natural in traditional FP languages like Haskell, but they seem to fit pretty well in the stylistic niche Rust and Swift (among other languages) have carved out.

edit: FWIW, there's some overlap between '?' and 'guard let', but not that much. Using hypothetical Rust syntax, they'd overlap in this case:

    guard let Some(x) = foo else {
        return Err(GettingFoo);
    }
better expressed as

    let x = foo.ok_or(GettingFoo)?;
…but if you want to return something other than a Result, or the ADT being matched on is something custom rather than Option/Result, there's no good way to make '?' do the job.


> Rust copied 'if let' from Swift (~2 years ago) despite having decent pattern matching; community consensus today is that it's highly worthwhile as a feature.

Rust copied 'if let' and made it far nicer and more capable by allowing arbitrary fallible pattern matches to be the condition. Then Swift 2 copied that improvement back with 'if case'.


? is bind for Either monad if you squint.

`context(e?)` becomes `e >>= \x -> context(x)`


You mean `e >>= context` ? ;)


Hah!

I should have been clear about CPSing things first so I didn't need to eta-expand :)


Optional chaining is syntactically much more lightweight than bind and do-syntax. >>= and `and_then` have the annoying property of reversing the order of function calls. `if let` is also more pleasant to read - note you can destructure multiple values naturally (no tuple required) and incorporate boolean tests.

It's totally fair to point out that Haskell is more principled - users can build Haskell's Maybe, but not Swift's Optional.

> Rust has (?) and `try`, which are syntactic sugar around a simple match/return statement that addresses most use cases of guard-let

They aren't really comparable. Rust's `try!` addresses one case: you have a Result and want to propagate any error to the caller. This is closest to Swift's `try`. But Swift's `guard` is much more flexible: it allows destructuring, boolean tests, multiple assignment, etc., and it also allows arbitrary actions on failure: return, exit(), throw, etc., with the compiler checking that all paths result in some sort of exit.

In practice this is used for all sorts of early-outs, not just error handling. It neatly avoids `if` towers-of-indenting-doom. I think the best analog is Haskell's do-notation.
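
For example (a sketch; the request dictionary and its keys are made up):

    func handle(_ request: [String: String]) {
        // Multiple bindings and boolean tests in one guard;
        // the else branch must exit the scope.
        guard let user = request["user"],
              let idText = request["id"],
              let id = Int(idText),
              id > 0 else {
            print("bad request")
            return
        }
        print("user \(user) has id \(id)")
    }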

There's the same tradeoff here. Rust's try! is a macro that anyone can build, while Swift's `guard` is necessarily built into the language. But any Swift programmer will tell you how much they appreciate this feature, unprincipled though it may be. Use it for a while and you become a believer!

> As it probably should. Supporting such implicit casting could lead to some obvious confusion and buggy behavior. Ideally, the fact that "a == b" typechecks should indicate that "a" and "b" are of the same type.

The principled stand! Both languages make the right decision for their use case. Swift's Optionals are much more commonly encountered than Haskell's Maybe (because of Cocoa), and so the language's syntax design optimizes for dealing with Optional. They're more central to Swift than its ADTs.


>in Haskell, by default you can't compare an Optional with the value it wraps: `5 == Just 5` fails.

    -- Probably nothing like in Swift
    instance (Num a , Integral a) => Num (Maybe a) where
        fromInteger x = Just (fromIntegral x)
        
    main = print $ 5 == (Just 5)


This only works because you're working with number literals here, which have the very liberal type "Num a => a". If I replace your main function by

  check :: Int -> Bool
  check x = x == (Just x)

  main = print (check 5)
I get a compile error:

  Main.hs:7:17: error:
      • Couldn't match expected type ‘Int’ with actual type ‘Maybe Int’
      • In the second argument of ‘(==)’, namely ‘(Just x)’
        In the expression: x == (Just x)
        In an equation for ‘check’: check x = x == (Just x)


Right, it was just a fun little "Well Actually" moment.

http://tirania.org/blog/archive/2011/Feb-17.html


That is possibly the most condescending thing I've ever read. Whatever the author did to try to make themselves seem less abrasive, it didn't work very well.


> `5 == Just 5` fails. But in Swift this works like you would want

Why in the world would you want those two to be equal when they obviously don't represent the same thing?

That doesn't make sense, not even if they have the exact same memory representation. In that case I'm pretty sure it was a compromise, which would mean you're still dealing with `null` with some lipstick on it, making that type behave inconsistently with other types in the language.

This kind of misguided convenience is exactly why equality tests in most languages are in general a clusterfuck.


> Why in the world would you want those two to be equal when they obviously don't represent the same thing?

This is the difference between normal people and theoretical computer scientists, summarized in one sentence.


As suggested by the existence of this monstrosity, "theoretical computer scientists" have it right on this one.

https://dorey.github.io/JavaScript-Equality-Table/


>Why in the world would you want those two to be equal when they obviously don't represent the same thing?

Because I care for intended use, not ontology.


That is not the intended use of Option/Maybe, the whole point of an `Option[A]` is to be different from `A`.


Not the intended use of Option[A] -- the intended use of A. Option is just a safeguard, and for the equality check with A that capability is not needed at all (if it's Just A it can just equal A).


Option isn't a safeguard, it expresses missing values in a way that doesn't violate the Liskov Substitution Principle, otherwise you might as well work with `null` along with some syntactic sugar.

And they are different because the types say so. By allowing them to be equal, you're implicitly converting one into the other. That's weak typing by definition, a hole in the type system that can be abused.

So why have types at all? Dynamic typing is much more convenient, and Clojure deals with nulls by convention just fine.


>Option isn't a safeguard, it expresses missing values in a way that doesn't violate the Liskov Substitution Principle, otherwise you might as well work with `null` along with some syntactic sugar.

Again, it's the use that makes it a safeguard, not its ontology.

>And they are different because the types say so. By allowing them to be equal, you're implicitly converting one into the other.

Which is fine sometimes, when you explicitly need to do it a lot.

>That's weak typing by definition, a hole in the type system that can be abused.

Any examples of how A = Just A can be abused in any meaningful way?

>So why have types at all?

Because I don't believe in the Slippery Slope fallacy, and some types are better than no types at all, but exceptions can be OK too.


Swift supports ADTs and implements its Optional type as an ADT.
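
Stripped of the sugar, it's just a two-case generic enum. A sketch of its shape (renamed here to avoid shadowing the real one):

    enum MyOptional<Wrapped> {
        case none
        case some(Wrapped)
    }

    // The real Optional pattern-matches the same way:
    let maybe: Int? = 5
    if case .some(let n) = maybe {
        print(n)  // prints 5
    }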


> Swift supports ADTs

True, but perhaps not practically relevant. The pattern matching in Swift isn't much better than what you would get with a tagged C union, which is why no one really uses it very heavily. The Optional type in Swift gets a lot of special compiler support, which is indicative of the fact that the broader language isn't very friendly towards using ADTs to structure data.

But you're right, I should have clarified in my comment that Swift does have a basic degree of support for ADTs.


> The pattern matching in Swift isn't much better than what you would get with a tagged C union

In what way? Pattern matching in Swift is quite powerful; definitely better than C's switch statement. This blog post is a good overview of its capabilities. https://appventure.me/2015/08/20/swift-pattern-matching-in-d...
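
For example, ranges, tuples, value binding, and `where` clauses all compose in one switch (a small made-up sketch):

    let status = 503
    let retries = 1

    switch (status, retries) {
    case (200..<300, _):
        print("ok")
    case (404, _):
        print("not found")
    case (let code, let n) where n < 3:
        print("retrying after \(code)")
    default:
        print("giving up")
    }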


I don't know what kind of Swift code you're writing, but you're very, very wrong when you say "no one really uses it heavily". Pattern matching is in fact quite powerful and is used heavily in any codebase that isn't simply a straight porting of Obj-C idioms.

> The Optional type in Swift gets a lot of special compiler support

Only the ?. optional chaining operator (and technically the ! postfix operator too, though that could have been implemented in the standard library instead of being special syntax and in fact I'm not really sure why it isn't just a stdlib operator). And probably some logic in the switch exhaustiveness checker so it knows nil is the same as Optional.none. Everything else (including the actual pattern matching against the "nil" keyword) is done in a way that can be used by any type, not just Optionals (nil for pattern matching specifically evaluates to a stdlib type called _OptionalNilComparisonType which implements the ~= operator so it can be compared against any Optional in a switch, and you could do the same for other ADTs too, assuming they have a reasonable definition for nil).
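
For instance, here's a sketch of opting a custom pattern into `switch` via `~=` (names made up):

    struct Even {}

    // Expression patterns in a switch are matched via ~=.
    func ~= (pattern: Even, value: Int) -> Bool {
        return value % 2 == 0
    }

    switch 4 {
    case Even(): print("even")
    default:     print("odd")
    }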


In what way is Swift pattern matching not "much better than what you would get with a tagged C union"?

enum in Swift has pattern matching support, can have properties, adopt protocols, etc.

I would disagree with your assessment that "nobody uses them".


Swift does fully support ADTs, and can easily model result types, they just chose to have catchable exceptions as well.


This is a big thing for Swift - the 'optional' paradigm is totally unavoidable, and heavily affects the code as usage is fairly pervasive.

Not having used them much in the past, I rather don't like them, but I'm aware of the fact they could simply take getting used to.

Can anyone chime in on whether Optionals are really the 'future of good syntax'? Or are they a quirk? Or good in some cases, not others?


Optionals should be used sparingly in de-novo Swift code, as the corresponding features (Option/Maybe types) are in other languages (Haskell, Elm, OCaml, Rust, …): only make something an Optional if, well, it's actually optional.

Optionals have lots of syntactic sugar because Swift has to deal with reams of native and objective-c code which is ubiquitously nullable, dealing with that without the syntactic sugar would be nothing short of horrendous. This is also an issue with ported/converted API (which is most of them at this point) as the underlying system tends to leverage nullability extensively (it's a very common Obj-C pattern as nil is a message sink), which is usually left shining through by simple conversion rather than wrap every historical API in a converted Swift version.


Optionals being clumsy means you start to eliminate them as you write more Swift. It becomes a carefully considered decision to use an optional in your interface. And the compiler keeps you in line.

I love the compile-time optional support now and severely miss it when using languages that don't have the feature. They are brilliant when you are working with all-Swift code or Objective-C code that has been decorated with nullability keywords.

However optionals become a burden when you are working with old Objective-C code that has not been annotated.


Make sure you check out Kotlin then, for Android.


The nearest analogue to Swift in the Android ecosystem is Kotlin - give it a look if you haven't.


People have known about Optionals being better for decades.


Swift: better OO, syntax, IDE; Go: built-in concurrency;

Swift is a great joy for iOS development, where compatibility with Objective-C makes things smooth, and operation queues are good enough for concurrency. Xcode is a big productivity booster. It's good that Swift breaks its own backward compatibility with new releases to keep innovating, while providing a quick-fix migration tool. I hope it will become a great server language too.


I never understand the praise for Xcode. I love the iOS-specific features like the GUI creator or property explorer. But the text editor part is just awful compared to anything else I've used (primarily JetBrains IDEs, but also vim, Sublime, and some Atom). On top of that it also has comparatively poor git integration.

Edit: one thing I forgot is the very slow feedback, at least when editing Swift. For syntax error notifications to get updated sometimes takes tens of seconds, which can be very confusing.


Yeah, agree. While I've moved on, IMHO Xcode was generally crap compared to dev envs I used back in the friggin '90s, like Delphi or TopSpeed. Not many on HN know much about Delphi, but it set the gold standard for the desktop IDE.

Great thing about Go is you don't need much of an IDE, because it's best to just keep Go in its sweet spot, which is services. LiteIDE works great for Go: small footprint, debugging, enough project management to get by. Just like with everything else about Go, you can get a newbie dev going with the Go toolchain and actually producing something that works in hardly any time.


To add some anecdata, I know three people (myself included) who moved from "playing with Delphi 5 as a student" into iOS dev. And I still feel that ObjC in Xcode is (sadly) the best replacement for Delphi right now:

- A stable and pragmatic language that interfaces well with the C world.

- Nice separation of class interface/implementation; ARC is as easy as COM was in Delphi; solid reflection/metaprogramming capabilities (e.g. enumerating properties).

- Compilation is fast enough on my 5y/o laptop.

- A standard library that is so good that people never re-invent it.

- The debugger works really well and is always-on (IntelliJ still has separate "Run" and "Debug" buttons, facepalm).

- IBDesignable/IBInspectable makes writing your own components as nice as it was in Delphi: http://nshipster.com/ibinspectable-ibdesignable/

The points above will probably fall apart with Swift, but right now it's not actually too bad.


Agreed. I don't use the GUI tools much (I prefer to manually construct stuff, I know, I know...) and XCode has been just awful to work with. Random crashes, freezing while debugging... in the most recent version I can't view log outputs of a process I attach to, and if I try to take a screenshot from the iOS Simulator it immediately crashes.


I'm not sure it should be Apple's job to build a best-in-class text editor. I haven't used Xcode in a long while, but last time I did it worked pretty well with external text editors, and you could configure the UI in a way that really facilitated using it just for compiling/debugging.

Every single IDE I've used, I've always used another text editor for doing actual programming. It's nice to have a half-decent text editor in the IDE for debugging (seeing values of variables in the code, looking up and quickly fixing compile errors, etc.), but I don't expect it to be as powerful as vim, emacs, sublime, et al.


Have you used any of the JetBrains IDEs?


Coming from an IntelliJ background, one thing that irks me the most when using Xcode is: I need to click on the red dot symbol to see the error. For the love of God, just display the error; why do I need to move my hand away from the keyboard to the mouse? It just annoys me enough to lose my coding mojo.


Everything you said about Swift is correct for small-sized projects. The story becomes much, much different once you work on a piece of code for a few months with a team. Then Xcode crashes, Swift builds slowly (in minutes), and you start to be crippled by the compiler and tooling.

I hate to say it, but it's not production ready yet, at least not for a big project.


I hear this argument a lot that it's not production ready, but my personal experience does not agree. I use Swift in production for several apps with relatively large codebases that sustain many hundreds of thousands of sessions per day. I would agree that a full clean/rebuild is not as fast as Objective-C and that the incremental build system sometimes seems as though it is compiling more files than you'd like. However, for the most part the incremental build system is fine and in daily use it is not a problem. I could never imagine going back to writing Objective-C again when I compare the net gains we've realized coding in Swift. The readability and succinctness of the language, as well as catching bugs at compile-time rather than run-time, is a huge benefit. Value-based programming, functional(-ish) programming, and protocol extensions have made my code so much easier to maintain and test, as well.


How many lines of code does your project use? How complicated is your module structure? Is everything in one huge target or do you split your app into 100s of modules? What do you use for dependency management, what is your deployment target? Are things running in one process like an iOS app or is it some sort of OS X project?

Pretty much every large Swift project I've seen has had the problems described. Lyft, Uber, LinkedIn, etc.


I'm going to answer but I should preface by saying I'm not going to try to proselytize you to Swift. If it doesn't work for you and you're happy with Objective-C, then by all means do what makes you happy and keeps you productive. I have a couple of apps that have about 50k lines of Swift, one also with 60k lines of Objective-C and another with 40k lines of Objective-C. We use MVVM as well as a fairly involved mechanism of VM abstraction with intercommunication using RAC. Common code is in a cocoapod as well as other third-party dependencies otherwise the other target is the iOS app itself. I think LOC is a poor metric to measure complexity, but it does have a relevance to compile time. Others have also said that Swift and Xcode are not perfect, but as an engineer programming in Swift makes me quite happy.


Aha! You haven't hit the LOC barrier that starts to make Swift & Xcode/SourceKit act like bsaul described. At around 100kloc you will start to get that experience with Swift, while an equivalent 100-200kloc app written in Objective-C will work quite fine with Xcode. You'll start noticing some degradation as you get to 60-70kloc. You also have a simple module structure, which makes things better.

You can see it happen pretty plainly with a simple codegen script creating a bunch of dummy code up to X lines of code.


Then, it's not Swift that is not production ready, but XCode that is crap :)


Most of this crap comes from the compiler / sourcekit itself, which is a major part of the language. It has scalability problems.


You worked with the source of both Uber and Lyft? And LinkedIn too?


They do presentations about swift and have twitter employee accounts where they complain about swift :)


I like Swift but incremental builds are very much broken on Xcode 8 and have been for a while: https://forums.developer.apple.com/thread/62737

This leads to very long compile times for even simple one line changes.


I also use Swift for large projects (many modules, lines of code, complex generics etc). Yes, Swift and Xcode are far from perfect. But the fluent language, and the tools that Xcode provides far outweigh any stability issues I've run into, especially recently.

There's some tools you can use to diagnose build times, often slow builds are just a line or two that seem to mess with the compiler and can easily be broken apart or written in a different way.

Also, have you tried the Android emulator/Android studio? Last time I did it made me appreciate Xcode and the Apple dev ecosystem a whole lot more.


I wonder if there's a cognitive anchoring effect. I've found Android Studio to be the best part of Android development, while Java and particularly Android itself being quite painful. XCode, on the other hand, I've found to be particularly painful, while iOS is much easier to work with, and Swift is one of the most pleasant languages I've used, so XCode looks bad in context, while AS looks much better.


But are those issues with Swift or with Xcode? Xcode is a known piece of crap that does not play well with teams. And Xcode isn't any more stable with Objective-C than it is with Swift.


Mostly Xcode, but compilation speed issues in the Swift compiler also mean some of the problems are harder to pinpoint.


In terms of IDE support Go has Gogland by JetBrains.

https://www.jetbrains.com/go/


> Because concurrency is supported through an Apple API rather than being explicit in the language’s design, using it is currently very much coupled to iOS or macOS development.

I am not sure this is accurate, because libdispatch has been ported to Linux:

https://github.com/apple/swift-corelibs-libdispatch

They are "...early in the development of this project..." but server-side frameworks use it.


This is one of the things I have marked as an issue to elaborate a bit more on.

https://github.com/jakerockland/go-vs-swift/issues/8

Thanks for the link! :)


The two differences that I wonder about:

* What is the cost profile of Dispatch relative to the Go scheduler? It seems that Go programs suffer a slowdown just to enable multi-threading, for example -- which is not unlike other systems (GHC at some time in the past) that have a similar model.

* Is Dispatch suitable for "millions of threads" ala Erlang? Hundreds of thousands of threads? Dispatch allows you to decide which queue to put things on; but also kind of requires it of you.


AIUI Dispatch is suitable for potentially hundreds of thousands of queues. It doesn't spin up thousands of actual OS threads, instead it intelligently manages a pool of OS threads and multiplexes the queues onto them. IIRC each dispatch queue is around 100-200 bytes in size (while empty).


Yeah, but what I wonder is: is the concurrency truly fine-grained and scalable? For example, you want to do a map over an array of 1 million elements. Can you dump a million blocks on a queue for a linear speed-up?


You could dump a million blocks in the queue, but that's probably too fine-grained. Each block is an allocation and there's some amount of overhead to executing blocks (though I've never measured how much). So unless each element actually takes a long time to process, you probably want to batch it up into larger segments. Dispatch has an API explicitly meant for this scenario (doing concurrent work over an array of elements), which is dispatch_apply() in C or DispatchQueue.concurrentPerform(iterations:execute:) in Swift, though the block is just given an index and it has to do the legwork of reading the correct elements from the input.
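
A sketch of the batched version (batch sizes made up):

    import Dispatch

    let input = Array(0..<1_000_000)
    let batchSize = 10_000
    let batches = input.count / batchSize
    var output = [Int](repeating: 0, count: input.count)

    // One block per batch, not per element; disjoint index ranges
    // keep the concurrent writes race-free.
    output.withUnsafeMutableBufferPointer { buf in
        DispatchQueue.concurrentPerform(iterations: batches) { b in
            for i in (b * batchSize)..<((b + 1) * batchSize) {
                buf[i] = input[i] * 2
            }
        }
    }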


Very basic comparison. Doesn't go into the quality of the generated code, compiler speed or efficiency, etc.


This definitely is a basic comparison. If you didn't see in the readme, this was a paper for a course I took and definitely has a lot of areas that could use more elaboration or detail. There are some open issues for things I'd like to add at some point when I have the time, feel free to open new issues if there are other things you think are missing!


I would hope that Go is better in all of the above. It has been available for much longer, and it's supported by a great team.

I want to know how the languages compare for writing code. Does one make it easier to write more succinct and correct code? Swift is missing concurrency support until at least the next version, for example.


Go's compiler emits naive code quickly. For example, parameters are always passed on the stack.

Swift is built atop LLVM, and it emits code more slowly, but supports many optimizations like hoisting, vectorization, etc. that Go does not currently perform.


I'm not sure, Go code isn't all that fast.


Semi-related, side-by-side comparison of how common things are done in swift and go:

http://rosetta.alhur.es/compare/swift/go/#


Go feels like a beefed up version of C.

Swift feels like a high level language with static typing.

Interestingly, Swift doesn't feel much like Obj-C.


With C you almost always know what you have, and can always cast it to anything else you want (safely or otherwise).

With Go, I'm never sure what I have (a pointer? a reference? something else weird?), and can't cast things that should be castable.

Either way, I'd still rather take Haskell or Common Lisp, but I'd gladly take C over Go after having used Go for a highly multithreaded communications server.


  > Interestingly, Swift doesn't feel much like Obj-C.
Not necessarily a bad thing. However, I am one who does like Obj-C. That said, after some time with Swift you do not really want to go back to Objective-C any more.


I favored Swift over Go in a small project (developed in my free time) because it has template / metaprogramming support, and it calls destructors immediately on unreferenced objects.

Some things that can be improved:

* It needs more support / packages for Linux (and Windows maybe?). I was using Manjaro Linux, and around November 2016 (don't remember exactly), the existing packages in the AUR didn't work anymore.

* No built-in weak collections.

* No source subfolders for the same project when using the built-in package manager (I don't know how Go handles this).


I use Swift every day and appreciate it a lot, while I have only cursory experience in Go, mostly because it didn't have any real debugger support in the past. But any comparison is rather shallow because at the moment they don't really address the same target usage, and Swift is only minimally supported on the server side where Go is generally used. Give it a couple years and the comparison might be more meaningful.

Once Jetbrains Gogland is up and running I will be playing more with Go.


The analysis looked good to me, but I was only able to read it after I downloaded it. I hate to be "that guy", but the Github pdf viewer brought Chrome to its knees.


I can't believe many people are making that choice; it strikes me that the domains of the two languages don't intersect very much.


It's not true that Go requires import "fmt" to print something.


I have a note about that:

> Note that Go does support a ‘println’ function that does not require importing from the standard library; however, this function reports to ‘stderr’, rather than ‘stdout’, and is not guaranteed to stay in the language [19].


No mention of functional programming and the idiomatic async calls using closures made popular as of late by JS?


This paper definitely has a lot of room for elaboration and improvement. There are some open issues in the repo for things I'd like to add at some point when I have the time; feel free to open new issues if there are other things you think are missing. I also welcome contributions if you'd like to make a PR.


That's nice, but in JS it's a necessary evil. Callback hell is not a desirable feature.


I'm confused as to why this piece was written using LaTeX.


Because it's nice to learn how to use new tools? Hadn't used LaTeX before and saw this as a good learning opportunity for getting started. I'm confused as to what is wrong with writing this piece with LaTeX?


I'm not who you're replying to but I was looking for some way to convert it to an html page. It's unfortunate that the only readable copy is the pdf since it cuts the code examples and renders poorly on my laptop.


Interesting, it renders fine on mine (not OP).


I'm sure we'll see a 'Go vs LaTeX' post before the day is done...


And I feel that it would also reach the front page :D


Why? The LaTeX required to produce a PDF like that is no more complicated than your favorite format.


Compared to Markdown, LaTeX is extremely complex; in fact, isn't LaTeX Turing-complete?


It is (some fun examples here: https://stackoverflow.com/questions/2968411/ive-heard-that-l... ), but that's completely irrelevant to how complicated or not it was to create this document.


It's soo pretty.



