> Well, yes, there are. But picking the right language for the task is one easy way to improve software quality, so why discard it?
The language isn't all that important. The main problem is functional decisions that transcend the implementation language. It's not hugely important if I write this comment in English or French to get my point across, and that's why I'm writing it in English.
Beyond the random internet post (or the rare one, like "Beating the Averages", which might well be exaggerated) I haven't seen any obvious competitive advantages from choosing a non-mainstream language. On the contrary, what you often see is that immature ecosystems are huge timesinks because a lot of stuff simply isn't "there" yet.
That's not to say there aren't incremental improvements happening with new languages over time. But if your goal is to complete a project as opposed to evolving a language ecosystem I'd say that it's unwise to pick a non-mainstream language. Heck, I'm still using C for almost all of my projects (chances are I'd be using Java or C# if I was working in other domains). I've tried to switch - I have smaller and larger investments in a dozen languages. But no other language has stuck for me. A big reason for this inertia is that I simply know C well and I feel productive with it. But there are more objective qualities, some of which are related to it being mainstream and having a mature ecosystem. My language of choice has flaws, but they are well-known, with well-known workarounds. That lets me focus on the problem instead of the tools.
Blog posts generally don't consider these qualities because they can't be discussed with toy examples that show you how you can neatly turn an isolated problem from 10 lines of C code into a 2-line fancy language solution. The problem with these examples is that they are not representative of longer-term and larger-scale software development.
> The language isn't all that important. The main problem is functional decisions that transcend the implementation language. It's not hugely important if I write this comment in English or French to get my point across, and that's why I'm writing it in English.
I think this is the root of the disagreement. For my domain at least, I find it very hard to write robust software in some languages and fairly routine in others.
Unfortunately the data on this topic is very limited - typically groups of college students implement something in various languages to compare. Hardly a realistic test.
What I can point to are code constructs that I cannot express in certain languages, where the workarounds are either more verbose or less precise.
As I said I'd likely use a different language if working in a different domain (but then again I'm not).
But you will be hard pressed to find a serious domain where there aren't solid mainstream tech choices.
> What I can point to are code constructs that I cannot make in certain languages
This is what I was alluding to: these examples are not very relevant. In a larger-scale codebase the important thing is the structure that you build, not the structure you get for free (or have imposed on you) by the language.
If you cannot say certain things easily, then write a function to factor the common stuff in a layer that you can then build upon. This is the basics of programming, and from what I've seen this is already where it fails in most real-world codebases.
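To make this concrete, here's a minimal sketch in C of the kind of layer I mean; Buf, buf_push and buf_push_str are made-up names for illustration:

    #include <stdlib.h>
    #include <string.h>

    /* A tiny reusable layer: a growable byte buffer, built from nothing
       more than a struct and a couple of functions. */
    typedef struct {
        char  *data;
        size_t len;
        size_t cap;
    } Buf;

    static int buf_push(Buf *b, const void *src, size_t n)
    {
        if (b->len + n > b->cap) {
            size_t cap = b->cap ? b->cap * 2 : 64;
            while (cap < b->len + n)
                cap *= 2;
            char *p = realloc(b->data, cap);
            if (!p)
                return -1;      /* the caller decides how to handle OOM */
            b->data = p;
            b->cap  = cap;
        }
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return 0;
    }

    /* Higher layers can build on this without ever thinking about
       resizing again. */
    static int buf_push_str(Buf *b, const char *s)
    {
        return buf_push(b, s, strlen(s));
    }

Once the common stuff lives in a layer like this, the rest of the codebase only ever says what it means.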
Functions and records (structs) are sufficient to cover you in almost all real-world situations. I believe that the step up from assembly to higher-level languages is a very powerful step, because it frees you from doing a repetitive task again and again, one that you can't even realistically do better than the compiler (because of time constraints), at least not on a large scale. Another reason the step works is that what it automates are quite local issues. Barring extreme optimization requirements, we can simply agree on a function call ABI and have the compiler solve all register allocation for us. The tradeoffs and ramifications of this are quite clear - we can be confident by now that there are no unknown tradeoffs that will come back to haunt us later when the project has grown much bigger.
The same step up will be hard to repeat: functions and structs are already so expressive that it's rare to find a problem that can't be elegantly solved using them. Garbage collection is another big help at automating the repetitive in some domains, but it gets in the way in others.
> The same step up will be hard to repeat, since functions and structs are so expressive already, it's rare to find a problem that can't be elegantly solved using them
One thing that's hard to express with them is a data type that's TypeA OR TypeB OR TypeC. For that you need Sum Types, which a lot of the newer generation of languages (Rust, Swift, Kotlin) have, but the older generation (Java, C#, C++) historically haven't had at all and still have poor support for.
I find this to be a significant productivity boost over languages that don't support this.
That's a good point. ADTs (algebraic data types) are certainly nice for certain classes of mathsy problems, compilers, and similar. As I'm moving more and more towards distributed systems work (as well as in-process messaging, queuing, and event handling), I'm still figuring out whether I want ADTs built in, or whether enums + simple code are good enough to cover my needs.
One thing I don't particularly like about ADTs is that they remove opportunities to tweak the representation, and they make you think less about smarter ways of avoiding the complexity introduced by choice (Sum Types) in the first place. But I have to admit that very often I'm simply doing the "struct with a tag member and a union member" thing manually, which is what you get for free with standardized ADT syntax.
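To spell out what I mean by doing it manually, here's a rough sketch in C; the shape example is made up for illustration:

    /* "TypeA OR TypeB OR TypeC" by hand: a tag member plus a union member.
       This is what a sum type gives you for free in languages that have it. */
    enum shape_kind { SHAPE_CIRCLE, SHAPE_RECT };

    struct shape {
        enum shape_kind kind;
        union {
            struct { double r; }    circle;
            struct { double w, h; } rect;
        } u;
    };

    static double shape_area(const struct shape *s)
    {
        switch (s->kind) {
        case SHAPE_CIRCLE:
            return 3.141592653589793 * s->u.circle.r * s->u.circle.r;
        case SHAPE_RECT:
            return s->u.rect.w * s->u.rect.h;
        }
        return 0.0; /* unreachable if the tag stays in sync - but nothing checks that for you */
    }

The obvious downside versus built-in Sum Types is that nothing stops you from reading the wrong union member or forgetting a case in the switch; the upside is that you control the representation exactly.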
Yes sure - if it becomes a pattern, it makes sense as a language feature. Conversely, if it is a language feature there is the danger of it becoming a pattern without need.
> If you cannot say certain things easily, then write a function to factor the common stuff in a layer that you can then build upon. This is the basics of programming, and from what I've seen this is already where it fails in most real-world codebases.
This presupposes that the language even has facilities to enable appropriate abstractions! You can't argue that languages don't matter without reckoning with these real-world constraints. Yes, sure, I can just "abstract it away"; yes, sure, "all languages are Turing complete so they're equivalent"... but no, that's not the whole story at all.
> The same step up will be hard to repeat, since functions and structs are so expressive already, it's rare to find a problem that can't be elegantly solved using them.
Maybe, sometimes... In practice, I've often seen them pave the path to madness. When types become "expressive", the problem is that you now have to deal with two languages: an OK one and a soul-suckingly bad one.
Well, we seem to agree that some languages are better suited for some domains. So I'm not entirely certain what we're disagreeing upon.
My thesis is that you need to pick the right language for the task and that language evolution regularly produces languages that are an order of magnitude better at some tasks than previous languages. To pick an extreme example, I doubt that you're going to code your next relational storage with pure C. You'll most likely use some SQL (or a competitor) somewhere in the stack.
And of course, as I believe we agree, the ecosystem matters a lot. In many, but not all, cases, it matters more than the language.
I think the biggest differentiator is ecosystem. If you don’t count that, then there really is no evidence of any significant difference between any two managed languages. There may be a bit more in systems programming, but that itself is a niche.
Ecosystem is a huge differentiator. But then, again, there are things for which some languages are infinitely better than some others.
As with everything in development, it's a tradeoff. I have written system code in JavaScript (calling directly into libc or kernel32.dll functions) and I don't want to ever do it again. I have written compiler code in C++ and the code was orders of magnitude worse than it would have been in a functional programming language. I have written database code (the actual DBMS, not code using a database) in both OCaml and Rust, and I'd really prefer using SQL (or some layer on top of it) rather than reinventing it.
> "I haven't seen any obvious competitive advantages from choosing a non-mainstream language."
Just look at the market cap of YC companies started over its first 10 years and group by initial back-end language choices.
It wasn't dominated by companies using Java, .Net and PHP. On the contrary, those using Ruby account for over 50% of the valuation. Those using Python (a far less popular language then than now) accounted for another 17%. PHP, JS, Java and .Net combined were at under 5%.
Even with my typical C rants, I absolutely support the idea of a project going with C if that is what makes sense for the market they operate in.
The additional effort of adopting tooling and workflows that most of the team isn't comfortable with, plus the extra effort of implementing integration layers, might slow down delivery to the point of diminishing returns, wiping out any advantage.
So while I personally might not like it, in some scenarios that is exactly the tool to use.
The same applies to other domains and whatever language might be the "go-to option" there.