What's Going Down at the Tiobe Index? Swift, Surprisingly (metaobject.com)
70 points by mpweiher on April 27, 2019 | 146 comments



The Tiobe index is not a reliable indicator of anything; their methodology is extremely noisy. A fractional percentage drop is meaningless.


The clearest example to me that Tiobe does not reflect reality is the Javascript chart:

https://www.tiobe.com/tiobe-index/javascript/

Javascript in 2019 only has 0.5% more market share than in 2004?

2019 has basically every new UI (and many rewrites of old ones) being built in Javascript; in 2004 it was being used for the occasional drop-down menu.

(Even if you include Typescript, the only compile-to-JS language in the top 50, it still only gets to 2.7% today compared to 2% in 2004).

See also C unassailably at number 2. I think we can all agree that in 2019, there is more active development in Javascript, than C. Likely C is benefiting from their methodology giving it credit for C++ and C#, so at least we can theorise about that one, but I really have no idea for Javascript.


Part of the problem is that their methodology is based heavily on performing web searches for the phrase "$LANGUAGE programming".

Lots of developers use Javascript in the course of their work. Few of them will refer to their work as "Javascript programming" -- they will typically use either a broader term, like "web development", or a more specific one, like "React development". Neither of these would influence TIOBE's rankings.


If I want to find the MDN page for destructuring assignment in JavaScript I know I don’t need to add JavaScript or MDN, I can just search for “Array Destructuring”, and the top half dozen results are all JavaScript.


And if I want to search for, say, which of call/apply takes an array, or to confirm in which version of PHP a new flag was introduced, I’m not searching for “$language programming $feature”.

I’m using DDG bangs to cut the noise from the start and go straight to a site-specific result: `!mdn function apply` or e.g. `!php preg_split`.


> 2019 has basically every new UI (and many rewrites of old ones) being built in Javascript

yeah... no

> See also C unassailably at number 2. I think we can all agree that in 2019, there is more active development in Javascript, than C

Bubbles, bubbles, bubbles. The vast majority of large companies I see start new GUI projects in Java, C#, C++, ... daily. Count the hundreds of thousands of programmers working for gigacompanies like Samsung (with more than 40k devs), ATOS (122k employees, mostly devs), CapGemini (211k employees, mostly devs), etc. They are far from all doing JS.


I've noticed this issue a lot more recently: developers who live in a language/platform bubble and assume that the large majority of paid developers are using whatever trendy language and less-than-24-month-old framework they happen to be on.

Listen kids. A colossal amount of the software engineering that takes place on this earth is paid development for closed source apps and services. It's not on GitHub. It's not written in JavaScript. It's not done on Node.js. A great deal of it doesn't happen in California. And it dwarfs the amount of software on GitHub or GitLab.

It happens in Java and C++ and C# and Visual Basic and other not-sexy languages and involves writing crappy enterprise interfaces and tons of backend code.

The great thing about TIOBE is it's a good indicator of what jobs are being hired for. That's why it doesn't match the trendy expectations most people have for a language ranking. That's why Visual Basic is always ranked so highly. Because the actual world of paid software engineering and your mental view of that world are wildly different.

Personally I think TIOBE is very valuable for what it's for. I tell young developers to use its top 10 list as a guide for where to invest your time IF you're taking a "safe" approach to career skills and employability. If you use one of the other guides for that, you may decide that learning new and exciting languages might be optimal for employment. Generally speaking that's not the case.

If one wants filter bubble expectations met, I recommend sticking to language rankings that are solely based on GitHub and other publicly available source code.


> Typescript, the only compile-to-JS language in the top 50

Clojure is right next to it on the list. It’s not only compile-to-JS but 100% of the people I know using it use it that way.


The Tiobe index is based entirely on search engine result counts. Those numbers aren't even particularly meaningful as a count of web pages; treating them as an authoritative source of programming language popularity, as TIOBE does, is ridiculous.

There are a couple of really strange comparisons you can spot in the results. For example, Scratch -- a visual programming language made for children which is only usable on its own web site -- is ranked higher than TypeScript, Lua, Scala, Kotlin, or Rust. Similarly, D -- a proposed replacement for C++ which has never found any real traction -- is ranked even higher.


Not to defend the TIOBE index (claims to be a measure of popularity of programming languages) but if you go by the dictionary definition of popularity ("the state of being liked, enjoyed, accepted, or done by a large number of people") I don't doubt that Scratch is more popular than Lua, Scala, Kotlin or Rust. If I think about my friends and acquaintances, most of them certainly have never heard of any of those languages, but I'm sure almost all of their kids were exposed to Scratch at some point.


I see you conveniently left TypeScript out of your reply.

Anyhow, regardless, the point is TIOBE is measuring some meaningless stuff.

There’s not really any defending it; it’s just some arbitrary metrics about languages, not really worth paying attention to particularly...


Fluctuations in Tiobe can be a simple result of how a major repository for the language is indexed. Take, for example, Delphi from '08 to '09. http://delphi.org/2008/10/the-many-faces-of-delphi/ and http://delphi.org/2008/10/delphi-keeps-climbing/

It isn't a measure of popularity - it's a measure of which languages either intentionally or accidentally have the best SEO.


Or look at how the "rating" for C dipped precipitously, then climbed back up, between 2016 and 2018. There wasn't a sudden shift in the popularity of the language over those two years -- that's the source data being unreliable.


This is the impression I get as well. VB.NET > JavaScript?

Here's what they say about their methodology:

> The TIOBE Programming Community index is an indicator of the popularity of programming languages. The index is updated once a month. The ratings are based on the number of skilled engineers world-wide, courses and third party vendors. Popular search engines such as Google, Bing, Yahoo!, Wikipedia, Amazon, YouTube and Baidu are used to calculate the ratings. It is important to note that the TIOBE index is not about the best programming language or the language in which most lines of code have been written.


And some of that "methodology" is seriously suspect.

> Popular search engines such as Google, Bing, Yahoo!, Wikipedia, Amazon, YouTube and Baidu are used to calculate the ratings.

Bing and Yahoo! are both front ends to the same search engine. Wikipedia, Amazon, and YouTube aren't general-purpose search engines at all, and it's unclear how accurate the results of a Baidu search would be for English-language search terms like "Javascript programming".


It's the Dow Jones Index of indicators. Totally pointless.

The best measure of a language's popularity would take job vacancies and GitHub metrics as its most meaningful inputs. Stack Overflow posts and Google searches help too, but contain a lot of natural bias.


Agreed, but having said that, there are a lot of people developing for iOS who never made the switch because of past language instability and continued poor-quality tooling.

Outside of iOS I don’t think it’s made a dent and I would be surprised if things like Swift for Tensorflow make a difference there (if only because the number of ML devs is very small and the appeal of Swift for ML is also pretty small).


I'm an extreme outlier, but I'm about to start an Objective-C codebase (a 2D renderer on top of Metal), because I suspect interop with Rust will be a lot easier. In particular, it looks like you can just build .m files from a build.rs, and I'm pretty sure that's not true with Swift. It's also possible to invoke Obj-C from Rust using the dynamic features, but I think that's both code bloat and a runtime cost. [I'm interested in exploring these issues further, but it's probably outside the scope of this HN thread, get in touch if you want to discuss.]

Example build.rs: https://github.com/emoon/rust_minifb/blob/master/build.rs


If you want to call Rust from Swift, it should be easy. I work with C libraries and it's easy to interface with them via module maps. With Objective-C, though, the interoperability is trivial: as long as you pay attention to retain cycles, you can hold references and call Obj-C methods directly. That said, please make sure your framework can be imported into Swift properly, as most of its users will be using Swift.


Yes, I've seen examples of calling Rust from Swift, and I do plan to support that. This particular thing is about calling macOS platform functions from Rust code. Projects either call Objective-C API's from native Rust code using the dynamic invocation mechanism (which I think is slow and code-bloaty but want to measure more carefully), or they use the "cc" crate to include Objective-C code (and then generally use C ABI calls from Rust code into that). I've never seen Rust code that compiles Swift from a "cargo build" but of course would love to be proven wrong.


Why can’t you build Swift from your build.rs?


Maybe I can, I just haven't found an example of it. Certainly the "cc" crate doesn't seem to know how to compile Swift files. This might be more of a documentation issue than a real technical limitation though.


If you can figure out a way to run arbitrary commands, I’d suggest looking at swift -frontend, which works like you’d expect a normal compiler to.


I don't care about the random TIOBE fluctuations. But I really don't get the love for Swift.

- Optional is a nuisance and I don't think it is highly effective at preventing the dereference of a nil pointer in practice. People are going to fiddle with ? and ! until their code works, and the protection that Optional provides then disappears.

- "let" is a mess. In one context it means unwrapping an Optional. In another, it doesn't. If you use it in an expression, it interferes with the usual refactorings that people apply to expressions.

- There is so much fussy detail around preserving the syntax of object.field = expression: didSet, willSet, getters, setters, oldValue, newValue (sketch at the end of this comment). WHY IS THIS SYNTAX SO FRIGGIN' IMPORTANT? One of the justified criticisms of C++ (IMHO) is that operator overloading means that all sorts of functions can be running under innocent-looking code. I would much rather have = mean assignment, and use function call notation when I really want functions to run.

- Reference counting is a bad compromise between the unpredictability of GC and the burden of explicit memory management. It's mostly unobtrusive, but I've seen examples of code that require careful and non-obvious use of weak to avoid memory leaks (sorry, I don't have the reference handy).

It sure beats Objective-C, and it does have some nice modern features, but I think they made some unfortunate design choices.
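
To make the property-observer complaint concrete, here's a rough sketch (names are made up) of how much can run behind an innocent-looking assignment:

    struct Thermostat {
        var target: Double = 20 {
            willSet {
                // runs before every assignment; `newValue` is implicit
                print("about to change target to \(newValue)")
            }
            didSet {
                // runs after every assignment; `oldValue` is implicit
                if target != oldValue { print("changed from \(oldValue)") }
            }
        }
    }

    var t = Thermostat()
    t.target = 22   // looks like a plain store, but both observers fire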


I can share my love of Swift

- Optional has made null pointer exceptions virtually non-existent. There are occasions where deserialization does result in a nil value where one is not expected, but that's rare. I think this is as massive as garbage collection was back in the day.

- The language is in a sweet spot between something convoluted and heavy like Java and something overly simplistic like PHP/Ruby/Python. It's perfect for software development and large code-bases with a low learning curve.

- There is no magic (methods, properties, etc). Everything is explicit but not wordy. This may change in the future but I hope it never does.

- It's more legible than Kotlin with fewer unnecessary language features. Speaking of which...

- It's a simple language and its syntax is straightforward

It's not a perfect language:

- The package manager needs a lot of work

- Extensions can lead to confusion (eg. in the case of conformance to protocols (interfaces)).

- I would like an optional garbage collection system for long-running processes (e.g. server-side applications)

- I would like it to perform even better

You can nitpick syntax anomalies with any language, but a language is defined by how easy it is to scale to multiple programmers, how applicable it is to a problem, how well it performs, how easy it is to read for someone new to the system, how it helps the programmer not commit mistakes, etc. Of all the languages I've written, none has been as successful in all regards as Swift.


> It's a simple language and its syntax is straight forward

Wha? It has about 100 reserved words — more than C++, now, I think. Even its own designer said [1]:

“You’re unlikely to run into anybody that says that Swift is a simple language. Swift is a complicated language.”

If you consider Swift simple, what language do you consider complex? Of all the languages I've written, none has been as complex as Swift.

[1]: http://atp.fm/205-chris-lattner-interview-transcript


Yes, it's simple from a user's perspective. Even the quote in your comment is then followed by Chris talking about how easy it is to learn Swift (e.g. `print("Hello world!")` is a valid program).

There are a lot of keywords in Swift, but that is not what makes a language complicated. They are all in English, after all. Symbols and unique concepts are what complicate a language, because they are either not explicit or unorthodox. For instance, using `:=` or `(setq x (1+ 2))` for assignment instead of `=`.


> - Reference counting is a bad compromise between the unpredictability of GC, and the burden of explicit memory management. It's mostly unobtrusive, but I've seen examples of code that require careful and non-obvious use of weak to avoid memory leaks, (sorry, I don't have the reference handy).

This is such a bizarre statement to me. I have to wonder if the commenter has ever dealt with production systems or code that uses GC? Because with years of writing and tuning Java servers, I've long since come to the conclusion that GC is largely a false economy.

You still have to deal with memory leaks but the cause this time is objects maintaining references they don't need. Sure GCs become cleverer at detecting unused forests of objects and so on but this largely makes another problem worse: so-called stop-the-world pauses for a full GC. It's still an issue you have to deal with on Java, Go and probably every GC language.

This isn't to say that ref-counting is a panacea (what is?). Particularly on servers the critical region for updating the ref-count itself can become hot and a bottleneck.

But this isn't really a problem in mobile app code. I've long said that iOS's use of ref-counting (ARC since iOS 5.0) is a strategic and competitive advantage over Android's Dalvik/Java GC.

Google's Plan9 2.0 (ie Fuchsia) makes me sad (for many reasons). Just one is that they've embraced Dart/Flutter at the application level (which isn't required as anyone who works on it will tell you, yada yada yada) but Dart is such a strange choice because it still relies on GC. What's the point?

Oh and just to be pedantic, you list ref-counting as a negative of Swift compared to Objective-C. ARC is orthogonal to that. Like I said, ARC came about in iOS 5.0.


IDK about Java, but the GC characteristics of Go seem pretty ideal for UI code. I think they're at sub-1ms pauses on multi-GB heaps now. So even if you want to run at 100 fps it shouldn't be a big obstacle to making frame rate. Of course there's a throughput trade off, but same thing with reference counting.

Although I guess it doesn't matter much since the library ecosystem isn't there. I think it would have been an interesting choice for Flutter instead of Dart though.

For just memory management, I don't see the downside of GC (vs reference counting) if the latency is good enough. Sure you can still leak memory by maintaining references but at least it deals with cycles (and it's one less thing that any dependencies can screw up).

If GC latency is OK, the only remaining advantage of reference counting I can think of is the deterministic destruction of non-memory resources like file handles etc.


I've worked with production Java systems for many years, and have had to tune GC, implement suballocation schemes etc. It is definitely not a panacea, and I meant to say that it was not, in my original comment. But at least GC fixes mostly leave the source code alone (ignoring suballocation tricks). I find reference counting to be less intuitive, and I don't like the fact that fixing reference counting problems require modifications to source more often.

My Swift usage stopped in 4.x, so I'm unfamiliar with the 5.0 improvements.


I actually like Swift: advanced type system and compiler, clean syntax, very fast.

I think optionals are a great way of saying: “Hey developer, this value can sometimes be nil; be sloppy or not, your call.”

Any language you pick is an interplay of choices and compromises. No such thing as perfect.


> very fast

This is a common sentiment, but it rarely holds up to closer scrutiny (such as measurements...). Swift is both slow to run and very slow to compile. Both compile time and runtime come with extremely high variance. Such unpredictability is arguably even worse than just plain slow performance.


In most of these benchmarks [1] Swift's performance comes pretty close to that of Rust. So yeah, I would call that very fast.

[1] https://web.archive.org/web/20190402100249/https://benchmark...


Try some JSON benchmarks with Codables in Swift vs serde in Rust. Rust smokes Swift, something like 20x.

Incidentally I've spent considerable effort on this problem, including writing a prototype faster JSON parser in Swift (still about 3x slower than Rust, but a huge speedup over what's there). I couldn't get anyone in the Swift community to care enough to motivate me to finish the work, which is indicative of a deeper problem: Swift just doesn't have "performance culture," the benchmark game results notwithstanding.


Hmm...when I look at those numbers, Swift only comes close in a very few, is significantly slower in most (50%-4x) and much slower in one (45x):

fannkuch / pidigits: essentially the same

binary-trees / fasta / Mandelbrot: ~ 1.5x slower

spectral-norm / reverse-complement: >= 2x slower

k-nucleotide / n-body: ~4x slower

regex-redux: 45x slower


Up until Swift 4, compilation took longer than with Objective-C according to my experience. Not sure how much Swift 5 has changed things.


- Optional is a nuisance.. People are going to fiddle with ? and !

And that is GOOD. This means that wrong code looks wrong:

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

and many instances of it mean something is badly wrong in the codebase, and the developer(s) haven't adapted to how it must be done.

In contrast:

    bad.shit.happens
Good luck with that...

- "let" is a mess

Overloading keywords has problems, but in this case the rules are simple. And there aren't that many contexts to worry about.

- I would much rather have = mean assignment

Agreed. Overloading keywords is fishy, but operators are in another league!

It makes some sense for maths, but in other contexts it's not worth the extra "feature".

- Reference counting is a bad compromise

You mean ARC. It's not exactly the same, and the compromise is totally worth it. I work with .NET for iOS/Android and man, it's heavy.

It's all about what the default is.

With GC, by default, you eat memory and get unpredictable destructors. With (A)RC you only need to worry about breaking cycles (sketch at the end of this comment), and that doesn't happen often enough to be a real problem.

BTW, I have used Obj-C, Pascal, .NET, Python, Rust. The ARC model is the best for most kinds of code most of us write. I honestly have more unpredictable and unfixable problems with GC (mostly on Android with underpowered devices), so I prefer something less automatic.
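
To be concrete about the cycle-breaking above, it usually just means marking one side of a parent/child relationship weak (a minimal sketch, types made up):

    class Parent {
        var children: [Child] = []   // strong references downward
    }

    class Child {
        // weak back-reference breaks the Parent <-> Child retain cycle;
        // it becomes nil automatically once the parent is deallocated
        weak var parent: Parent?
    }

    let p = Parent()
    let c = Child()
    p.children.append(c)
    c.parent = p   // no leak: only the downward reference is strong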


> [Using Optional] means that wrong code looks wrong

...as long as you don’t use any implicitly unwrapped optionals (which virtually all Mac/iOS code does), or use any frameworks that predate Swift (like AppKit/UIKit/Foundation), or wrap any C libraries (most third-party libraries in the world).

So basically that’s true for the 5-line samples showing how cool Optional is, and no real program I’ve ever written. It was a nice experiment but in practice I have more issues stemming from Optional than I ever did with nil.

> In contrast: bad.shit.happends

Do you not read any Swift forums? There are newbies posting every single day asking why "a.b.c" crashed in Swift.


> There are newbies posting every single day asking why "a.b.c" crashed in Swift.

That is fair. And it totally shows why optionals are the right thing. Because Obj-C and others do NOT have them, the world sucks MUCH more. That is why nulls are the billion-dollar MISTAKE, with capital letters.

The solution is for more and more languages, libraries and frameworks to embrace the idea of algebraic types, not to just declare "interfacing with old codebases that have this huge MISTAKE is not nice, so why bother??".

What Rust does nicely is declare stuff as UNSAFE. I wish this were the case with Swift and Obj-C, to make clear to everyone that you have entered the realm of nasty things and must be more careful...


Objective-C has "optionals": every object pointer can be `nil`. What it gets right is that it doesn't implicitly treat a `nil` as a logic failure.


That is not optionals, at all.


I wonder if you have real experience with Swift or you’ve just been looking at the language from a distance, because I disagree with pretty much every point you made.

Optional is the best thing that could happen to a language that allows nil values ( the only other option being not allowing them at all, but that’s not common).

If let (and guard let) is an elegant idiom for unwrapping optionals safely, but that’s about the only case where you’re going to see a let in something that looks like an expression. The only complaint you could have in that part of the language is the choice between let and var, but that’s actually quite useful in practice for self-documenting code.
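
For anyone who hasn’t seen them, the two idioms look roughly like this (a minimal sketch):

    func greet(_ name: String?) {
        // if let: the unwrapped value is only visible inside the braces
        if let name = name {
            print("Hello, \(name)")
        }

        // guard let: bail out early, then use the unwrapped value below
        guard let name = name else { return }
        print("Goodbye, \(name)")
    }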

Operator overloading could become dangerous, but it seems that it is used very sparingly in the Swift ecosystem. The standard library doesn’t overuse it, so people don’t either.

Reference counting to me has much the same drawback as GC: it makes you forget about memory cycles and ownership. But apart from Rust, which tried something very different (and is scaring a lot of developers), once again I don’t see a lot of other options.


I do have some experience, playing with an iOS app for several months, through various versions of the language (3.x, 4.x). But this was as a hobby. I liked Optionals when I learned about them, and less as I started using them. I found that libraries sometimes changed return types, e.g. T to T? (or vice versa). The feature struck me as having some drawbacks similar to Java checked exceptions: the rigor required ran counter to clean and easy to understand code. Did I just not get the design of the feature? Perhaps, but I did find it very difficult to avoid tweaking code with ? and !.

I have no problem with var vs. let. I like that distinction. And I do understand that let is used only sparingly in expressions. But that's what bugs me about it, it is so limited. It is not an "orthogonal" feature, (usable in any expression). An expression is ideally functional (no side effects). Let definitely does have a side effect, declaring and binding an identifier. I find it awkward to mix the two ideas. This may be many years of mental baggage, from C, C++, and Java, but it is there, and I think that the introduction of such a major side-effect into the world of expressions is a genuinely bad idea.

GC vs. reference counting: GC really does allow you to forget about memory management. Until you start running into problems requiring GC tuning. But then, the fixes tend to leave the code alone completely. With reference counting, tuning requires more intrusion into code. Yes, this is subjective. Just explaining my reasoning.


Lots of misunderstanding of how optionals are designed to work in this comment!


Whether there is misunderstanding by the grandparent post or not, in practice there are a lot of people who do just play with ?/! until they manage to get code to compile and then hope for the best.

You can see it all over the place in code people write, posts on Stack Overflow, etc. The Apple Swift book does a poor job explaining the rationale and benefit of using option types.


Genuine question - does this not highlight a problem with the language on some level?


If a developer is just fiddling with optional-related operators until the compiler is happy -- in any language, not just Swift, e.g. Kotlin -- instead of actually understanding what they are doing at any fundamental level, that's not a language problem.

I wouldn't want to work with someone like that, and I doubt anyone else would, either.


I would believe it's an education problem, in that we teach things like recursion and local variables (the fact that the same variable can have different values in a nested call to a function) in intro CS classes but we don't really teach how post-NULL languages handle optionals (Option, Maybe, etc.). But it's something that should be part of an effective programmer's knowledge. It's not too unlike buffer overflows—all the people writing UNIX until 1988 were skilled, educated, experienced people, but at the time none of them were really aware just how bad buffer overflows were.


I never learned those in university, so I don’t think that’s a sufficient excuse. Any developer should understand, or want to understand, the fundamentals of the language they are using.


It highlights some of the challenges of bridging Swift with ObjC APIs.


Fully Swifty UIKit or whatever it is that eventually replaces UIKit can’t come soon enough. As Swift has matured, UIKit/AppKit has quickly become the weakest link in the Apple platform dev experience.


Yes, which Apple has done with some success (Toolbox -> Carbon -> Cocoa) but we've also seen how WPF, etc have gone down. The hard challenge is the migration path, and you can't commit until the language and tooling is ready.


Optionals, when used right, really clean up code that deals with a lot of deeply nested data structures. There is a learning curve though: when you’re not used to them there’s a lot of frustration, and sometimes forced unwrapping, until it clicks.
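
For example, walking a deeply nested structure collapses into a single expression (a sketch, types made up):

    struct Address { var city: String? }
    struct Company { var address: Address? }
    struct User    { var employer: Company? }

    let user: User? = User(employer: nil)

    // every link in the chain may be nil; with the ?? fallback the
    // whole expression is just a plain String
    let city = user?.employer?.address?.city ?? "unknown"
    print(city)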


Is it only the developer's fault when critical parts of a programming language are hard to grasp?

I suspect that might be a symptom of excessive complexity, and the parent's example of C++ seems spot on.


But optional is a very simple concept.

However, breaking from old habits is hard. I was there moving to F#, and for the first month or so I tried to bend the rules.

Same now with Rust. I try to mold it into an OO language and get SUPER annoyed at how different Rust is. But once you embrace algebraic types and pattern matching, suddenly a lot of other hidden problems simply disappear forever.


Proper optional use is indeed a habitual thing. When I was first transitioning from ObjC to Swift I found them to be an annoyance, but after using Swift for the past few years I find myself wondering how my ObjC code ever ran as well as it did without them. I’m now reluctant to use languages that lack option types.


> I would much rather have = mean assignment, and use function call notation when I really want functions to run.

And then you end up with Java, where you have to write so much boilerplate for getters and setters, even if they are only plain attribute access, just in case you want a function later.


I agree that a very common Java pattern is to create getters/setters for every private field. Which is just stupid, might as well have the members be public. To use the popular phrase, Java classes like that have a "bad code smell". It tells me that the author of the class doesn't really understand how it should be used. I think a well-designed class will have private members and public member function interfaces that don't resemble each other very much.


From your comment it looks like you do not get Swift, and that prevents you from getting why it is loved. There is no need to "fiddle" with ? and !; Optionals are not that complex. And there is nothing wrong with let. It has nothing to do with optionals.


"if let x = someOptional" unwraps the optional, no?

And "let x = someOptional" doesn't, right?

So the first form does have something to do with Optionals, and it is true that the two forms of let behave differently with respect to them.

(I have moved on from Swift, so if things have changed since Swift 3.x and 4.something, my apologies.)


I agree that's confusing coming from an imperative context. The core misunderstanding here is that `let` is a binding, not a variable declaration. It's binding a name to a value, not creating a variable which can be modified. Therefore the `if let` example is saying "if we can bind the value inside the optional to a name", while the regular let example is just saying "bind the optional to x"


I get all that, but in one case the type of the value bound to the declared identifier is T, and in the other case it is T?. To me, that violates The Law of Least Astonishment, but that is admittedly subjective.


Actually, in `if let x = someOptional`, you should rather see it like this: `if (let x = someOptional)`. If that second part were a function, it would be `() -> Optional<T>`.

That is to say, the if statement is satisfied if the "result" of the "function" (if you follow me; clearly I don't mean it's actually a function in the implementation) returns .some(T) (not nil) rather than .none (nil).

That we can do an assignment with `let` is just syntactic sugar for:

    if someValue != nil {
        let x = someValue!
    }
Basically, we want to know if the value is non-nil. There are instances where the result can be nil, and if so, nothing gets assigned. For example:

    protocol Fruit { }
    protocol SeededFruit: Fruit { }
    protocol SeedlessFruit: Fruit { }
    struct Mandarin: SeedlessFruit { }

    let mandarin = Mandarin()
    
    if let someFruit = mandarin as? SeededFruit { }
Okay, so the compiler will actually disallow this, but you get the basic idea: `mandarin` is not a SeededFruit, the cast will always fail, therefore someFruit will be nil, therefore the "result" of the "function" will be nil, which is counted as false.

The thing is, this isn't specific to the `let` keyword, either. You could just as easily define a new variable with the `var` command.

So `let` and `var` don't mean different things in different contexts and never have. Swift just allows them to be used as syntactic sugar in if statements.


Just a late addition to anybody who sees this, I tend to think of `if let` and `if var` statements in Swift as syntactic sugar for:

    switch someOptional {
        case .none:
            // whatever
        case .some(let x):
            // do stuff with x
    }


I don't think there is a 'love' so much as it's the future of iOS platform.

Swift is just too much, too many things, and yes, they kind of dorked optionals. To me it's just trying too hard.


I'm glad I'm not the only one who feels this way. I used to absolutely love writing Objective C, like totally get into a zone and lose track of time. I've never managed to achieve that same feeling with Swift.


This is what most people seem to totally miss.

You want to solve interesting problems. You do not want interesting tools getting in the way. A boring, old, flawed tool would often work better for solving the problem without interruptions.


> This is what most people seem to totally miss.

No, they don't 'miss' it. I think what you're missing is that it is perfectly possible to 'get' optionals and be as productive as, if not more productive than, in ObjC. Not all people get the Swift model and for them it may be unintuitive, but many do get it.

I find it distressing that some developers just want to bang out code as fast as possible and any tool that tries to impose a bit of good software engineering in between that is seen as 'evil'.

I've seen similar sentiments expressed about Rust i.e. 'I'd rather take my memory leaks and data races over the borrow checker'. That's fine, but please be aware that I'll try my best to not let you execute any code on my machine if that's how you approach the craft.


Personal opinion: if the index is based on searches, then if one language has better documentation, better IDE support, and fewer surprises, the search count will be lower and it will probably have a lower ranking in Tiobe.

And also, since Swift's adoption is mainly in iOS development, it is heavily affected by the iOS ecosystem.

My take on why Objective-C is ranked higher: not because it is better, but because it has tons of gotchas, so people have to search heavily to get things done.


I think good documentation, both authoritative and that created by active users on blogs and Stack Overflow, is what makes people productive in a language and enables both newcomers and experts to progress. And of course libraries and tools get developed, which also facilitates usage. Look for example at R: much earlier than Python it had various libraries with machine learning algorithms, but at the same time the language is really "unusual", so people preferred to rewrite libraries from scratch in Python. At least for time series analysis I think even today there are more advanced libraries available in R. But the documentation, at least some years ago, was just not on par with the difficulty/unconventional ways of doing things (despite a super helpful, tightly knit community as far as I can tell). The Python equivalents, on the other hand, often have more documentation than needed.

I think JS also won a lot due to good documentation. In fact, there used to be an SEO campaign to boost the MDN docs because the JS docs people usually found used to be so bad, making people write bad code; I think the language's reputation still hasn't recovered from that.


Does Swift have "better documentation"? Because I've always found Apple's developer documentation a bit lacking.


It might be lacking, but there is definitely a culture within iOS of using the Apple docs for things, Apple tooling, Apple development practices, etc. It’s a much more closed ecosystem in many ways than most of the other ecosystems I’ve been exposed to (JS, Python, Ruby, even .NET in some ways).

Because of this, I wouldn’t at all be surprised if searches aren’t common because Apple provides documentation, regardless of its quality.


This is not a rank of which language is "better". It's merely about usage. I'm sure there is a lot of documentation and not that many "gotchas" in Java for that to be the reason for searches by Java developers. There are a lot of Java searches because Java is popular.


It is a rank of "Searching", not "Usage".


Every search gives us valid information about usage.


Swift is a good but not great language considering when it came out.

One good thing about it is that it lets iOS apps run as well as Java/Kotlin apps in ~60% the memory. I'm sure the layers of legacy Android APIs also contribute to this.

The other good thing is that it's close enough to Kotlin to switch between iOS/Android without as much friction.


I code for iOS (in ObjC) and Android (in Java) and honestly the language differences are trivial to me. The much harder parts to deal with are the APIs and the IDE differences. (In particular, I really loathe gradle and Android Studio.)


I have used Swift since version 2.x to create several mobile apps, so I consider myself proficient. For the record, I have also used PHP, Node.js, C/C++, Java, Ruby, Python, and Golang quite extensively; I consider myself a full stack developer.

Regarding Swift, I am not a fan of the language at all; I use it more out of necessity. When I started to write iOS apps, Swift was available and I had already gotten used to not having to deal with memory management and such, so I chose the language. I also thought ObjC as a language for iOS apps was a bit too verbose after looking at some tutorials; I had been used to interpreted languages and syntactically concise languages like Golang, so I thought I'd give Swift a try.

Somehow, over the years, I've managed to write a few apps with Swift, so it does the job. The two things I dislike the most: first, the "as/as?/as!" operators, e.g. when I have a dictionary-like object [String: Any], having to "as" everything is annoying to me, and you can't just cast, which to me would be more direct; you have to first "as? Int" and then unwrap the optional with an !.

The second thing I really hate about the syntax is the whole thing with "let" constants. I'm fine with having to check for NULL first, but having to do if let apples = fruits["apples"] as? String {...}... I guess this is kind of related to the first issue.

My Swift code is such a spaghetti mess of these statements. I imagine a lot of it has to do with me, but this doesn't seem to be a problem with other languages. It could also be that I'm just trying to plow through the code so that I can get functionality working ASAP.


If you're doing so much casting you're doing something wrong. Swift's power comes from the type system, which alleviates the need for all casting in general. Instead of using a [String: Any] dictionary which bears no meaning at all, consider converting it to a struct with accessors of the right types. If you work with the type system, Swift will be a joy to use, because that's where it shines.
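
A rough sketch of what that looks like, assuming your [String: Any] originally came from JSON (field names here are made up):

    import Foundation

    struct Fruit: Codable {
        let name: String
        let count: Int
    }

    let json = "{\"name\": \"apples\", \"count\": 3}".data(using: .utf8)!

    // one decode call replaces the `as?` / `!` dictionary juggling
    // (try! only for brevity here; handle the error properly in real code)
    let fruit = try! JSONDecoder().decode(Fruit.self, from: json)
    print(fruit.name, fruit.count)   // typed access, no casting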


I’d agree with this completely. A lot of the people I see having issues with it are trying to write it like Python or something. For this style of coding Objective-C is a much better choice.


Also not sure why you were downvoted; I voted you back up.


Not sure why you were downvoted; I voted for your comment, it makes perfect sense. I was using a generic [String: Any?] to hold a mix of variables. You are absolutely correct, I should use a struct or maybe even a class.


I don't see Swift used for anything outside of iOS development. I don't really think it has enough to offer over more neutral languages like Python or TypeScript to make it worth the time of learning a new language. It's in a position similar to C# in my opinion.


Maybe over TypeScript, but I would love it if Swift displaced Python. Python is ridiculously popular for some good reasons (the clarity of the code that tends to come out of it) but a whole lot of bad ones: the complete lack of an effective type system, no usable multithreading, etc. let novice developers lazily write bad code without really feeling bad about it, and it still passes as good code... it's effectively the PHP of this decade.


This simply won't happen till Swift can interop with C, C++ and possibly even ASM, fluently.

Even then, you need structure, data types, and actual foresight while writing code, which leaves out all the less CS/SE trained peeps, such as scientists.

And Swift is never going to be better as a scripting language. Lua, Python, Bash etc all have a few things in common, and Swift would stand out like a sore thumb in that company.


Yes, I agree completely. Indeed, I would say the same thing in reverse if Swift was taking over the world of scripting (a bit like how people propose Kotlin and Scala as scripting languages ...). Using those languages for interactive analysis & high level scripting is as bad an idea as using Python for building a complex application with large numbers of rich data structures.


> using Python for building a complex application with large numbers of rich data structures.

Eh, you don't need to though. Python can be the glue for your tools, if you will. Writing the meaty parts can be done in C/++, while you glue everything together in Python, giving you the best of both performance and ease of writing.


I don't understand; isn't Swift much better than Python at native interop? Swift can directly include and interpret C headers for example.
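
For example, a one-file module map is usually enough to pull a C library straight in (a rough sketch; the CZlib module name is made up and the header path may differ per system):

    // module.modulemap, placed somewhere on the compiler's import path:
    //
    //   module CZlib {
    //       header "/usr/include/zlib.h"
    //       link "z"
    //       export *
    //   }

    import CZlib

    // zlibVersion() is the plain C function declared in zlib.h, called directly
    print("zlib", String(cString: zlibVersion()))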


C yes, C++ no. I'd say C++ is way more commonly used as the common application logic, for cross-platform phone/ desktop apps, so this represents a significant roadblock.


How would you improve the existing C/C++ interopability in Swift? What needs to work that doesn't work right now?


Well, Swift has zero interoperability with C++ currently.


isn't that the same as many languages? Everyone just defines the functions that matter with __cdecl and moves on.


Swift can interoperate with most C code already.


Ignoring C++ and Assembly completely. C++ is more important than C, especially for Swift's use cases, like multi-platform phone and desktop apps, video games, etc.


Oh, and it appears that Smalltalk has at least re-entered the top 50 after a 5 year absence:

https://medium.com/@richardeng/smalltalk-and-tiobe-5409afb19...


This took me down a rabbit hole. I wonder if there are any legit reasons to use Smalltalk (Pharo) these days?


My main issue with Swift, at least initially, was how much of a moving target it was (is?). It's been in public use for under five years and it's already on its 5th major version. I'd learn it to a certain degree, start working on something else, then come back and find that a lot had changed while I wasn't paying attention. That said, the difficulty in iOS development stems from getting a good understanding of the core libraries, and that's much easier to do in Swift than in Objective-C (unless you're already familiar with that language or coming from something like Smalltalk).


Swift 3 was the last version that had dramatic changes. The first two years it was pretty upfront about being a work-in-progress that wasn't for people who wanted something stable, and the last two years it's been pretty stable with only very minor incremental changes.


Apple pushed Swift as production-ready from the moment it was announced. This was a mistake, as it was in fact years away from production quality.

Now I wouldn’t even consider starting a new app in ObjC but it’s been a bumpy road to get to this point and the tooling is still pretty bad in some respects.


> The first two years it was pretty upfront about being a work-in-progress that wasn't for people who wanted something stable

I've been very much on the Swift train in its first 2 years, but I've never heard of anything like that. Do you have a source?


https://swift.org/blog/abi-stability-and-more/

Almost every release has been pushing towards this. That’s not to say that Swift has been “unstable”, as it was pretty solid from the beginning, but they did make it pretty clear that you would need to accept a lot of changes in the early days.


Swift is no longer a moving target.


The Tiobe index always seems to be way out of line with the other indices out there. Nobody but Tiobe ranks VB.NET above C#. Also, the other day I read that Lazarus isn't taken into account when assessing the popularity of Object Pascal, because Lazarus is an IDE, not a language. Tiobe also seems to be constantly tweaking the methodology, such that if you look back at historical rankings they can be totally different from the rankings for the same period that were reported at the time. Which is verging on the dishonest. It would be like a tipster going back and revising his tips after the fact.


Microsoft's own "statistics" claim millions of C# devs, 100Ks of VB.NET devs and 10Ks of F# devs.

Tiobe is strange. Either they blend all Basic together (legacy code bases) or something more stupid happens :)


It's not strange. Just a result of bad input.

Search for:

+"C# programming" - 2 340 000 results <- USED

+"Visual Basic .NET programming" - 17 000 000 results <- USED

+"C# .NET programming" - 26 600 000 results.


Okay. So it is "more stupid happens" :).


So is C# the new VB6 then?


That would break my heart.


Tiobe compares search engine queries and ranks the most popular languages accordingly. So, as an example, PHP will naturally end up high on the list and Clojure will end up low, because PHP has lots of irregularities to check and Clojure doesn't. So it might be that Swift is simply a better language than Objective-C?


My first thought once I saw this was "Flutter". It's really getting a lot of market share in mobile dev and all you need to write is Dart, for better or worse. Also, whenever you need to do iOS-specific duct-tape plumbing, it seems that Obj-C is still more capable than Swift because of the accumulated legacy.

However, take it as an educated guess rather than a fact since I have never been a mobile dev apart from hobby apps.

EDIT: typos.


As an iOS dev I’m not thrilled with Flutter because it’s tied to a niche language and like most cross platform toolkits, it throws platform conventions out the window. Apps built with it sit firmly in the uncanny valley, feeling somewhat smooth but still “wrong” due to dozens of small cues being wrong or slightly off somehow.

And yeah, Objective-C is still the duct tape language of choice because it interacts well with C++. Swift handles plain C well but has no interop with C++ at all.


I've recently been stuck with a React Native project and I was surprised to learn that Objective-C is still the language of choice for interop.


Well, until the next version of Swift it'll add 10MB of runtime to your app; not worth it for 100 lines of code.


The current version of Swift, actually. Swift 5 has already been released.


I really wish we could build iOS/Cocoa apps in Rust. Like Swift, we could use reference counting instead of GC, but Rust has a much nicer type system.


For years and years I've really thought iOS/macOS should support scripting languages to build apps.

For instance, why can't you build an app that does something useful in Python or Bash or Perl?

The way I see it:

- AppleScript is not good

- Cocoa and Swift require compilation and generally Xcode

- there are hacks that let you do a thing or two through osascript (back to AppleScript anyway)

- there are other hacks like PyObjC or Python with Tk, not really useful


I think you're spot on, in that a lot of iOS/macOS is gluing together existing components, very much like a scripted-components/Software-IC[1] pattern. For most of this, a scripting/coordination language would be great.

However, it needs to be interactive, not require the same (if slightly faster) Xcode build cycles. Also, most of those languages interoperate only partially.

My crack at this is Objective-Smalltalk[2]. It can script interactively [3][4] and is a full-fledged programming language.

[1] https://blog.metaobject.com/2019/03/software-ics-binary-comp...

[2] http://objective.st

[3] https://www.youtube.com/watch?v=sypkOhE-ufs

[4] https://www.youtube.com/watch?v=sypkOhE-ufs


Swift's type system was designed to make developers happy, Rust's to make the borrow checker happy.

As it turns out, some people think like a borrow checker and don't have any issue with Rust's choice. But most don't and we should not make them miserable.


Well, really it was designed for ObjC interop. Hence the limits on (e.g.) generic protocols.


Make some developers happy*

The tradeoffs swift chooses make a lot of sense for mobile app development.


Swift's type system is pretty similar to Rust's (especially with recent proposals covering existentials), but without lifetimes.


What do you think is nicer about Rust's type system?

IMO Rust's type system is more principled but does much less for you, and so is more annoying to use. As a simple example, incrementing a reference count in Rust requires a clone() call; in Swift this happens automatically.


Last time I checked out Swift, I ran into some annoying issues with type erasure and generic protocols (similar to the issues Java has, where one class can't be both Iterable<Integer> and Iterable<String>).

I also find Swift's "inout" params deeply confusing (especially when aliasing is involved), and I wish it made dynamic dispatch more explicit (like Rust).
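
Concretely, the generic-protocol limitation is that a conforming type can bind the associated type only once, much like the Java case (a sketch, names made up):

    protocol Converts {
        associatedtype Input
        func convert(_ value: Input) -> String
    }

    struct Formatter: Converts {
        // this fixes Input == Int for Formatter; there is no way to also
        // conform with Input == String, just as a Java class can't be both
        // Iterable<Integer> and Iterable<String>
        func convert(_ value: Int) -> String { return "\(value)" }
    }

    // and (at least in the Swift of this era) the protocol can't be used
    // as a plain type either:
    // let c: Converts = Formatter()
    // error: 'Converts' can only be used as a generic constraint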


What would e.g. `view.superview` look like in Rust though? That seems to be the main sticking point. In UI programming it is common to walk up, down, and around the view hierarchy.


I actually think that allowing this sort of traversal is usually a bad thing, but it can always be solved with Rc<T> and Weak<T> in Rust.


Not much movement for Swift either direction in the job postings shared in the monthly whoishiring thread, but still consistently beating Objective-C

https://www.hntrends.com/2019/mar-postgresql-vs-mysql-no-lon...


To the extent that this is even meaningful I suspect it reflects a disenchantment with native apps in general. The app goldrush is long since over and getting an app on a user’s home screen now is extremely expensive and difficult.

That said I’ll take Swift over Objective C any day.


I'd be curious to hear from iOS devs what they think is going on here. Did anyone try moving to Swift and what was the specific reason for going back to Obj-C?


I really like Swift as a language. To me it feels more elegant and less verbose than Objective C. However, the tooling around Swift is worse, so you have to live with longer compile times and other random bugs. This drawback is still not enough to make me go back to Objective C. Also, I am not sure I get the argument about ecosystem - you can easily include Objective C dependencies into Swift projects.


> Swift is worse, so you have to live with longer compile times

Is this truly a factor? Aren't the vast majority of mobile apps relatively modest in scope due to the very nature of the platform? How long does it take to compile an average Swift app?


> Is this truly a factor?

Yes. Every Swift project I am aware of fights with this problem. Two of the CocoaHeads Berlin talks in the last year or so were about workarounds for slow Swift compile times.

For one of them, I computed the compile speed for the frameworks in question; it turned out to be in the double digits of lines per second. That's not just bad, that's truly horrific, starting to get into "transcribe it into machine language by hand" joke territory.

The other talk was about splitting all their UI features up into little mini-frameworks in order to be able to work on a single feature in isolation without having to compile the rest of the app. While I am a great fan of framework-oriented development, that's not really what it's meant for...


Depends on your scale. A clean build of my medium-sized app takes a couple of minutes on my machine, but most large apps rely on dozens of frameworks and have lots of code in them and can take tens of minutes to build.


Swift can be pretty bad due to how much the compiler wants to guarantee. Doing certain things can also really make it take a long time (like doing a ton of casting on a dictionary). I personally think it’s worth it and saves me a ton of time in other places, but I do have warnings enabled for most of my projects for when a function is taking too long to compile (flags below).
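
The warnings I mean are opt-in frontend flags (undocumented, so they may change between releases; thresholds are in milliseconds and the numbers here are just illustrative), set under Other Swift Flags in Xcode:

    -Xfrontend -warn-long-function-bodies=200
    -Xfrontend -warn-long-expression-type-checking=200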


I doubt it. It was worse but now just good enough not to care too much.


My impression (which is just anecdata): there is plenty of Objective-C out there, and migrating it to Swift is such a significant effort that it likely won’t happen.

Swift however has significant mindshare, there are even Swift-only iOS devs. Lots of developers and most materials have shifted to Swift. That’s where the excitement is.

Swift interoperates with Obj-C well. Lots of development has switched over to Swift, even if the rest of the codebase remains Obj-C.

There will still be plenty of Obj-C jobs over the next 5 years, but there will be more Swift oriented jobs.

Swift is also making strides into other areas: Swift on the server is growing, Swift on Linux and Windows is improving, and Chris Lattner is forging ahead with Swift for Tensorflow.

Whether it is successful in any of those beachheads remains to be seen, but there is significant effort going towards them.


I would suspect it’s because even many of the core libraries are still written in Obj-C. Swift is neat and I really like it, but the ecosystem is such that even when using it on a new project you end up writing ~25% of your project in Obj-C.


Depends on the nature of your project but I can count the number of Obj-C libraries I’d be likely to use in my projects on one hand, and they all play secondary roles (e.g. analytics). For almost anything else Foundation/UIKit and third party Swift libraries would be used.


This sounds extremely strange. The last time I needed to write Obj-C was early 2017. My current project started as pure ObjC; now there are zero lines of it in production.


I think the ranking is meaningless. VB.net over JavaScript? Assembly over Ruby?


Is there any good reason why Apple couldn't just let us code apps directly in JS/React and not have to go through React Native with its overhead? React surely isn't perfect, but it's likely the best way to create large scale UIs. Far better than Swift.


Neither React nor Swift can create UIs alone.


…they can?


React is a set of JS APIs and tools that live on top of an underlying model (the browser DOM, Litho, etc.). Swift is a programming language. The comparison doesn't really make any sense.


Well, yes, but this is a pedantic comment in relation to the post you were replying to.


It wasn't meant to be. OP expressed a desire to "code apps directly in React" and avoid "React Native with its overhead." But that's meaningless: the React design is a layer on top of an underlying model, and that layering will always impose some overhead.


Not sure I believe much in what Tiobe has to say about programming languages, but I would be happier if Objective-C was dead forking last.



