
>It's a common reaction displayed by the proponent base of many languages, even here on HN, to advocate the use of their language for every conceivable need. This can't possibly work well.

Is that some law of nature, or merely a consequence of how we've designed languages thus far?

A well-designed, type-inferred, statically typed language with a REPL/interpreter, a fast AOT compiler, an optional GC, good documentation and IDE support, and a big API library could work well for all kinds of domains, from scripts and websites to network application servers and systems programming.

Note also that the money we spend on language and tooling development is minuscule and laughable compared to the IT industry's size.

When decent money has been involved, we've had good results. Namely:

With Sun/Java we got the fastest, most mature VM out there, with a very good GC and a ton of tooling available.

With JavaScript, we got V8/JSC/etc., which made the language 10-100x faster compared to the nineties.

With MS, we got C#, a great language that can cover tons of ground for what it's designed for: modern, with a huge ecosystem of tools and APIs.

And from Apple, we got the great Cocoa libs and Swift (still a work in progress but very promising).




> Is that some law of nature, or merely a consequence of how we've designed languages thus far?

It's a game of tradeoffs, same as everything else in nature, isn't it? I would rarely say that a given language's design or implementation is universally flawed; instead, it's the result that comes out of a set of premises and decisions. Yes, some of them may be objectively bad, but mostly they are just decisions made to solve specific problems.

In a way the whole point of my post was that languages don't primarily exist on a one-dimensional spectrum of "good" to "bad". They're optimized for different things. The assumption that given a list of possible decisions during language design you just have to select the right one every time is based on the fallacious premise that right and wrong are the actual choices you get.

> A well-designed, type-inferred, statically typed language [...]

...and those are already some key decisions you made that reflect what's important to you personally. Every time you choose one of these properties, you open doors and close others behind you. While you may personally believe these are the minimum requirements which every "good" language absolutely must have, you should also recognize that you are doing exactly the same thing as every single language designer in history.


>It's a game of tradeoffs

I find that less and less true. The canonical example for me is algebraic data types. They're pretty much free at the runtime level, enable clean code, and allow for more static checking. Basically a superpowered version of enums. I can't see any argument against having them in a language unless the express goal is to have no syntax (Lisp).
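
To make the claim concrete, here's a minimal sketch of the idea in Python (assuming 3.10+ for union syntax and structural pattern matching; the Shape/Circle/Rect names are made up for illustration):

    import math
    from dataclasses import dataclass

    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        width: float
        height: float

    # The "superpowered enum": a sum type whose variants carry their own data.
    Shape = Circle | Rect

    def area(shape: Shape) -> float:
        # A static checker (e.g. mypy) narrows the type in each branch.
        match shape:
            case Circle(radius=r):
                return math.pi * r * r
            case Rect(width=w, height=h):
                return w * h

    print(area(Circle(2.0)))  # ~12.566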


Again, that's a personal preference, not a statement of fact. There is a cost associated with different type systems, and that cost can vary with the problem you are trying to solve. It's not that I argue against the benefits of having such a system in place, but you should be aware of the fact that this too might not represent the end-all-be-all for all programmers.

One of the basic mistakes made here seems to be the notion "if programmers only _knew_ about the one true way of doing it, they'd all use the same language and tools, namely: mine".


Again, this is not about personal preference.

In the real world, 90% of programmers use either C#, Java, C/C++, JS or Python/Ruby/Perl in their jobs.

We could restrict ourselves to this set of languages and syntax styles, and design a superset language that does everything, gets rid of their historical warts, and can go from scripting to HPC.

Stuff like whether or not to use significant whitespace is BS bikeshedding, which we can bypass.

The thing is, where it matters (speed, expressivity, availability of a REPL and IDE, a large SDK), nothing but money prevents us from making such a language...

Is there anything preventing a top-notch team from working for 10 years and coming up with one ready, including all the trims and works: JIT, AOT native compiler, ports to 2-3 architectures, batteries et al?

Sure, some people would still like their Java or their Lisp or whatever.

But there's no logical impossibility preventing us from creating a language that's better for 90% of the tasks than what's out there.

In a sense, that's what MS did with C# -- but they stopped too soon because they had their own agenda. So it was Windows-only, in the CLR, with no supported AOT option, etc.


> Again, this is not about personal preference.

There is no need for passive aggressiveness. I get that aping the exact phrase used in a parent comment is used to communicate disrespect, and it's duly noted, but all things being equal I would prefer not to go down that route.

My argument here is that people designing languages have made choices based on their personal preference as well. It's not really appropriate to take one specific set of features you like and declare it to be the objective winner.

> In the real world, 90% of programmers use either C#, Java, C/C++, JS or Python/Ruby/Perl in their jobs.

That's a No True Scotsman-like argument. Also, what's the meaning of the three different types of separators you used there? But for the sake of convergence, yes, let's assume every important real-world language is in that list.

I'm still not sure what to say about this without repeating myself, except for: go and do it. You say it's a matter of investment, but on the other hand consider the benefits if someone pulled it off. If you really do believe you are the person who has this all figured out, please go ahead and implement this. Heck, make it a Kickstarter project or something, I'm in!


>There is no need for passive aggressiveness. I get that aping the exact phrase used in a parent comment is used to communicate disrespect, and it's duly noted, but all things being equal I would prefer not to go down that route.

What passive aggressiveness and/or disrespect?

The repetition was used to communicate disagreement with a specific thing attributed to what I said. Since it was attributed twice, with no regard to the arguments I made, it was pretty much necessary.

If anything is "passive aggressive", it's this sudden ad hominem and armchair-psychology attempt.

I only responded to the abstract issue under discussion; I didn't make any judgments about the participants. Can we keep it at that level?


>My argument here is that people designing languages have made choices based on their personal preference as well. It's not really appropriate to take one specific set of features you like and declare it to be the objective winner.

And my point is that I'm not doing that. I'm picking features not based on personal preference (my actual preferences are different), but to describe how a language could encompass the whole range of applications (be the right tool for most jobs, as much as possible).

This wasn't about "my dream language" (that would be something like Smalltalk) but about whether a language can cover all/most bases or we're forever doomed to use a Tower of Babel of "right tools for the right job".

And the "cover all bases" thing I tried to tackle from a technical features standpoint. At this conceptual level of the argument I don't even care much if programmers will like the end result.

>That's a No True Scotsman-like argument

No, it's just a statistical observation. 90% of programmers do use these languages. I'm not saying the rest aren't programmers, or aren't true programmers -- just that we can cover the majority of programmers doing professional work with a language in that vein.

>I'm still not sure what to say about this without repeating myself, except for: go and do it. You say it's a matter of investment, but on the other hand consider the benefits if someone pulled it off. If you really do believe you are the person who has this all figured out, please go ahead and implement this. Heck, make it a Kickstarter project or something, I'm in!

Now, that's disrespect and passive aggressiveness!

I never said it's just me that "has all that figured out". In fact lots of people say the same thing; inside every language community there are people trying to fix the same pain points to make the language more universal.

Lots of Python people, for example, wanted to make it GIL-less / capable of good async operation / faster / typed / etc. All this is about handling kinds of scenarios that it currently doesn't handle.

People using JS asked for more speed, then for server-side/native interfacing (Node et al), then for "programming in the large" features (ES6), types, things like asm.js to get native memory management, etc.

So, what I wrote was that having a language extend to almost all jobs is not impossible, and gave a laundry list of features (taken from observations such as the above), that could accomplish that.


> So it was Windows-only, in the CLR, with no supported AOT option, etc.

NGEN was there since day one.

Spec#, Singularity's systems programming language (based on C#), only has AOT compilation to native code.

This work was the basis of Windows Phone 8 .NET, which only compiles to native code, in a PE format known as MDIL.

This work was then continued to bring static code compilation to Windows 8 store apps for tablets, and now for desktop store apps.

It is part of the upcoming .NET 4.6.

The only deployment formats not supported for static code compilation are the traditional desktop and the Compact Framework.

In parallel with this work, Dafny, the systems programming language for Ironclad (Singularity's successor), also produces static executables.


Cool, missed that NGEN was there from the start. I was mostly following Mono at the time, where the AOT option came quite a bit later.


>It's a game of tradeoffs, same as everything else in nature, isn't it?

My question is whether tradeoffs are inherent in programming language design (some mathematical inevitability) or due to lack of resources and other "real world" concerns that can be overcome given enough care and money.

Nothing I've seen convinces me that it isn't just the latter.

E.g. one could say in 1995 "JS is an awful language for development, and it's unsuitable for anything that needs to be fast. It's an inevitable tradeoff, use the right language for that etc".

And then the big corps got interested in optimizing JS, and we can now write full 3D games, and even do video encoding in it, and we have asm.js and the like that bring JS performance to within 2-3x of C for most things, as opposed to 100x worse back in the day.

And we got node in the server side, and tons of libs etc. And with ES6/7 we get tons of language improvements.

Even the "fundamental" issues with JS, like bizarro type coercion, global-by-default, no integer types, etc., could be corrected; it's just compatibility concerns that keep us from fixing them, not some inherent impossibility of getting a great language without those issues.

Heck, we could even introduce optional gradual typing for JS, if it weren't for those backward compatibility concerns, and even AOT compilers to native code.

Those things maybe wouldn't make JS perfect and useful for everything, but they would make it an order of magnitude better than it is.

And if we started from scratch, without all those compatibility constraints at all, we could get a new language pretty close to perfect with enough money and a great team.

>...and those are already some key decisions you made that reflect what's important to you personally. Every time you choose one of these properties, you open doors and close others behind you.

What door exactly closes? In the "mathematical necessity" way, not the "some people only like dynamic typing" way.

Because the thing under discussion was whether a language good for everything (or close to it) could be produced.

I chose those attributes not because I like them personally, but with this end goal in mind. Namely:

1) Without static types, you can't get the last mile of performance and safety checks needed for high-performance apps and systems programming. They also help provide a better IDE experience (autocomplete, suggestions, etc.) for those who like that.

2) If the types are not inferred, the language will feel too verbose and weighty to people wanting to use it for quick scripts and the like (see the sketch below).
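
For instance, a rough sketch of what point 2 buys you, using Python with a static checker such as mypy as a stand-in (the function and names are made up for illustration):

    # Only the function boundary is annotated; everything inside is inferred,
    # so short scripts don't drown in type annotations.
    def total(prices: list[float], tax_rate: float) -> float:
        tax = sum(prices) * tax_rate  # inferred: float
        return sum(prices) + tax

    print(total([9.99, 4.50], 0.2))  # ~17.388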

>you should also recognize that you are doing exactly the same thing as every single language designer in history.

Well, I don't have any beef with any language designer in history.

My basic problem with current languages is not that they made some choices, but that they didn't do the additional work they could PILE ON TOP of the previous choices to get something much better.


I think the issue here is that we have two opposing theses on how this works. I posit that the programming ecosystem is very much like any other ecosystem in that there are many organisms within it, for good reason. For example, looking at nature, you might ask yourself why biology hasn't converged on a single lifeform yet, and whether it's not simply an issue of having all the right genes and discarding those that are not so good. But that's obviously not how it works. There are niches and different strategies for solving different problems. There is no one genome that solves everything. And yes, every single codon is a decision that opens doors and closes others as well.

If I understood you correctly, your position is the inverse of that thesis. You argue that there is a base abstraction that should work equally well in every context, and that specialization could come in the form of optionals piled on top of the one true base. There is no mathematical reason why this should be unworkable, but a solution so far eludes both programmers and evolutionary processes alike. That doesn't mean you shouldn't go ahead and try solving it. It's a worthy project. If you believe you can come up with common denominators that cover all aspects of all previous languages, by all means: give it a try!


>For example, looking at nature, you might ask yourself why biology hasn't converged on a single lifeform yet, and whether it's not simply an issue of having all the right genes and discarding those that are not so good. But that's obviously not how it works. There are niches and different strategies for solving different problems. There is no one genome that solves everything.

And yet, nature has already sort of "converged" on humans, who are sort of "masters of all trades", and somewhat analogous to the "full-spectrum language" I'm talking about.

And as with that language and other languages, the existence of humans doesn't mean all other lifeforms will perish or disappear.

>There is no mathematical reason why this should be unworkable, but a solution so far eludes both programmers and evolutionary processes alike.

Well, for evolutionary processes we have humans. And soon, if we are to believe some pundits, the "singularity".

As for languages, we have some damn near all-rounders, but my observation is that it's not because it has "eluded programmers" that we don't have one, but because of business reasons (e.g. some company wants to only target segment X), OSS being underfunded, narrow scope, etc.


Optional GC is not something I've seen work in practice. Apple abandoned their attempt because of the difficulty of getting GC'd and non-GC'd code to play together nicely. Rust had (and may again have) something like optional GC, but it operates at the value level rather than the program level and thus doesn't really solve the library problem.

Programming languages are designed in a huge, multi-dimensional space. Should the language be statically or dynamically typed (or optionally typed)? Interpreted or compiled (or JIT'd)? Manually memory-managed or GC'd? Powerful or simple type system? Large (C++) or small (Scheme)? Batteries included? Hosted on a runtime?

Each of those questions (and the dozens more that go into designing a language) involves tradeoffs. Manual memory management makes a language suitable for domains like kernels and games, but inevitably requires the user to be aware of where and how memory is allocated and deallocated. Small dynamically typed programs tend to be (in my experience) faster and simpler to write, but static typing makes maintaining large codebases much easier.

I strongly believe there will never be a language that is the best available for all domains. Currently I find that I need three languages to cover my bases:

  * A dynamic, batteries-included scripting language (Ruby)
  * A static, fast, gc'd language (Scala)
  * A static, fast, non-gc'd language (C++)
I'm hoping that Rust will collapse the last two, but there are still a lot of tradeoffs.


>Optional GC is not something I've seen work in practice. Apple abandoned their attempt because of the difficulty of getting GC'd and non-GC'd code to play together nicely. Rust had (and may again have) something like optional GC, but it operates at the value level rather than the program level and thus doesn't really solve the library problem.

Well, you could make two compilers for languages with the same syntax, and two sets of libraries (GC and no-GC) with mostly the same APIs, divergent only in whether they use GC or not.

For low-level stuff you use "GCLESSLANG" (as a better C) and for the other stuff you use "GCLANG".

Otherwise 99% of the syntax is the same, and the two languages are designed to easily call into one another. They could share most of the parser and compiler too.

This would basically give you the feel that you use one and the same language, switching between GC and no-GC version.
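
As a loose analogy (not the proposal itself), GC'd and non-GC'd worlds already call into each other today; here's a sketch using Python's ctypes to call the C standard library, assuming a Unix-like system:

    import ctypes, ctypes.util

    # Load libc (the non-GC'd world) from the GC'd world.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello"))  # 5, computed by non-GC'd C code

The proposal would amount to making that boundary seamless at the language level instead of going through a foreign-function interface.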

It's a money and resources thing, not some "cannot make this work" thing.


I think Rust attempted to do a much more conservative version of this (same language, different sigils on pointers depending on whether pointers were refcounted or ownership-transferred) and determined that it didn't work in practice. Libraries have to use one scheme or the other, nobody made APIs that exposed both schemes, and calling between them was too much of a hurdle. So in practice people ended up settling on a single pointer type, which was -- somewhat surprisingly for the language designers -- the non-refcounted one. And the language got rid of syntax for the refcounted one, and everything became more usable.

I could believe that with more money and resources everything would have been rosy, but my understanding of the facts doesn't particularly support this.


A well-designed type-inferred statically-typed language either cannot support the following (which is valid Python), or can only do so with tradeoffs that either stretch the definition of "statically-typed", stretch the definition of "well-designed", or would make the language even more intimidating than monads make Haskell:

    def fizzbuzz(i):
        if i % 3 == 0:
            return "fizz"
        elif i % 5 == 0:
            return "buzz"
        else:
            return i

    for i in range(100):
        print(fizzbuzz(i))
Now you might argue that it's a good thing for people to care about proper typing for large, maintainable codebases, and I would agree; in some of the better Python code I've seen, it's a social convention to add type signatures in a comment (I once worked at a place where people added Haskell type signatures) and enforce proper typing. And yes, there are a few Python APIs that return either a single item or a list, instead of a one-item list, and they're annoying.

But it's a thing people do, it's especially a thing that's useful in a teaching language or in a scripting language that's trying to fill PHP's niche, and I don't think a language that refuses to type-check such a thing will fit every conceivable need. Sometimes the goal is just to get something done quickly, not be large or maintainable.


>A well-designed type-inferred statically-typed language either cannot support the following (which is valid Python), or can only do so with tradeoffs that either stretch the definition of "statically-typed", stretch the definition of "well-designed", or would make the language even more intimidating than monads make Haskell

A statically typed language can still have a dynamic escape-hatch type. C#, for example, has dynamic: https://msdn.microsoft.com/en-us/library/dd264736.aspx, and Objective-C has id.

You get static types whenever you need them and dynamic flexibility whenever you need it. No need to invoke something Haskelly at all.

For more type-checking safety in this particular case, you could also say (or infer) that the return type is a union: "String | Integer".
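
In Python's own optional-typing syntax, a sketch of that union-typed version might look like this (assuming Python 3.10+ and a static checker such as mypy):

    def fizzbuzz(i: int) -> str | int:  # the union return type, spelled out
        if i % 3 == 0:
            return "fizz"
        elif i % 5 == 0:
            return "buzz"
        return i

    for i in range(100):
        print(fizzbuzz(i))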

I was talking about a possible very flexible uber-language here. Why would it miss those two oldest tricks in the book, which are already present in tons of existing languages?

That said, I didn't say it would have to express all idioms. Just that it would be applicable for as many domains as possible, from low (drivers, etc), to high (scripting, etc).

This particular idiom, as you also note, I don't find particularly useful anyway; it's rather a code smell. But if it were needed, the two ways described above could solve it without much issue.

When you need to be quick and flexible, use the "dynamic" type; when you want to be fast and safe and use little memory, switch to a specific type. The language still covers both cases.


I'm not sure what your complaint is.

    import Control.Monad (forM_)
    
    fizzbuzz i | i `mod` 3 == 0 = Left "fizz"
               | i `mod` 5 == 0 = Left "buzz"
               | otherwise      = Right i
    
    main = forM_ [1..100] (\i -> either putStrLn print (fizzbuzz i))


Yeah, that certainly works, but one of Python's particular strengths (and in particular what the original article was about) is as a teaching language. I certainly understand what you're doing there, but I wouldn't want to explain it on day 3 of a high school CS class. :)

The slightly cleaner thing here would be to take advantage of both strings and integers implementing Show. I'm thinking a bit about whether that would actually work well enough in a teaching language; possibly. (Although if I'm remembering my Haskell well enough, you need to enable existentially quantified types to make this work, which, again, is not really day-3 material. Probably works fine in, like, Rust, though.)


It's because languages are designed to be niche. There are so many languages that, to be used, you just need to be good at one or two things.

The "universal" language would be fairly low-level, but would allow you to create your own DSLs. So you'd really be designing your own language for each task, and that language would be 100% suited to the task at hand. For example, you'd embed SQL directly into your code, not as a string, but actually parsed by someone's library so it's syntax-checked.



