Is your programming language unreasonable? (fsharpforfunandprofit.com)
64 points by CmonDev on Feb 4, 2015 | 108 comments



This article actually has very little to do with any particular programming language or language comparisons. I like how it logically builds up to each of its conclusions, which, summed up, are the following:

1. Variables should not be allowed to change their type.

2. Objects containing the same values should be equal by default.

3. Comparing objects of different types is a compile-time error.

4. Objects must always be initialized to a valid state. Not doing so is a compile-time error.

5. Once created, objects and collections must be immutable.

6. No nulls allowed.

7. Missing data or errors must be made explicit in the function signature.

I think the writer also provides a pretty practical definition of what it means for a program to be "reasonable". Really nice job!

EDIT: I pasted the wrong list as dsp1234 pointed out :)


> Variables should not be allowed to change their type.

This is the preservation component of the formal concept "type safety". Thus any type safe language must achieve this.

> Objects containing the same values should be equal by default.

This one really depends on a lot of things. You may have abstract types in which case equality should not be implicitly defined. You may also have codata types where you cannot merely compare "value containment" since it's no longer a concept. These are both important things which are sometimes called "objects" and shouldn't be eliminated from languages.

Really, this is a lot more complex than it seems. But at first swing: "structural equality should be everywhere and reference equality isn't a genuine concept" is pretty great.

> Comparing objects of different types is a compile-time error.

This is a-OK so long as it's easy to prove when two dissimilar types are actually the same. In cases where that's not true there are other forms of equality which are useful.

> Once created, objects and collections must be immutable.

This is a little overly strong. Regionalized mutability can be made to be referentially transparent from the outside.

---

But man, yeah, these are tiny nits. This list is great.


> You may have abstract types in which case equality should not be implicitly defined. You may also have codata types where you cannot merely compare "value containment" since it's no longer a concept. These are both important things which are sometimes called "objects" and shouldn't be eliminated from languages.

I think it's important to distinguish equality from equivalence. There are many cases (e.g., enumeration types) in which equality is a perfectly appropriate definition of equivalence; but there are also many cases (e.g., some codata types) in which equality is an inappropriate definition of equivalence. Therefore, equality-as-equivalence as default behaviour is probably a poor choice— though it should be easy to opt for that behaviour.

As for equality (i.e., intensional equality) itself, I think memberwise equality (or perhaps even equivalence?) is a perfectly reasonable definition. There are times when you really want to discriminate on object identity (in the storage-location sense), but you needn't restrict equality to object identity to allow that.


> I think it's important to distinguish equality from equivalence.

This is very true. In particular, equality ought to be merely a subset of equivalence. For codata types, as mentioned, equality may not be computable (or well-defined at all), but we can compute equivalences such as bisimulation if we like.

For any type which has a nice equality, merely setting equivalence to equality should be fast, I agree. This is essentially `deriving Eq` in Haskell.
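For what it's worth, F# (the article's language) gives you this by default on records and unions; a tiny sketch of mine, not from the article:

    // structural equality comes for free, no Equals override needed
    type Point = { X: int; Y: int }
    let same = ({ X = 1; Y = 2 } = { X = 1; Y = 2 })   // true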

Memberwise equality works for some types, but notably fails when we have abstraction. You can get away with it for (pure) functions on a finite domain if you really want but that's a little hacky---or rather, it involves (implicitly) injecting your function into some kind of container [0] and computing equality there.

As you note, codata types aren't likely to have nice equalities. In these cases they're not going to have memberwise equality, either.

[0] First chapter or so of these lecture notes if I remember correctly, http://lambda-the-ultimate.org/node/4804


> Regionalized mutability can be made to be referentially transparent from the outside.

Can you give an example of that? Is that similar to the ST monad?


The ST monad is exactly an example.


A loop inside a pure function. It's directly equivalent to tail-recursion.
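For instance (a minimal F# sketch of mine, not from the thread), the mutation below is purely local, so from the outside this is just a pure function of n:

    // the mutable accumulator never escapes, so callers can't observe it
    let sumTo (n: int) =
        let mutable acc = 0
        for i in 1 .. n do
            acc <- acc + i
        acc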


This is not a nice job.

The first "issue" is that ``DoSomething accesses x directly rather than through the parameter'', but then goes on to talk about types.

I think this is a big part of why we are bad at programming: A because 6 therefore Go Fish.

I have noticed that when a program is small enough that I can see all of it, then I can reason about all of it, and when I cannot see all of the code, then I cannot. I may resort to note-taking or trying to remember while flipping tabs quickly, or even outright guessing, but I certainly am not reasoning about it.

I do not buy that typing variables, or lambdas, or long method names (like `Equal`), or having "compile time", or `null` or "function signatures" offers anything like the gains had by being able to see all of the program, so I'm disinclined to waste energy on what I view as premature optimisation.


> I have noticed that when a program is small enough that I can see all of it, then I can reason about all of it, and when I cannot see all of the code, then I cannot.

You've made a powerful observation about how programmers and human minds in general interface with complex systems. What if you had language semantics that could let you ignore most of your codebase with 100% reliability?

I've had a long career dealing with very complex "enterprise" software for multi-nationals. Semantic tricks that let you make far-reaching deductions and securely ignore most of the code are absolutely vital in contexts like that. It was precisely in the parts of the code where things weren't structured to allow this that things broke down, and where everyone dreaded checking in code.

> I'm disinclined to waste energy on what I view as premature optimisation.

There are certain contexts where one already knows that this sort of thing is not premature optimisation. (Granted, lots of programmers seem to make a fetish of acting like they are in such a context, when they really aren't. Analogous to: http://xkcd.com/538/ )


Most of my professional life has been reading other people's code and finishing other people's programs. One thing I've noticed is that nobody has ever asked me to help fix a small codebase.

Another thing I've noticed is that having well documented interfaces between small components is extremely effective at improving isolation time.

Another thing I've noticed is that interfaces with two or fewer arguments (none of which boolean) tend to be used correctly more often than interfaces with a large number of arguments, or booleans.

I would very much enjoy having a conversation about those language semantics. I haven't noticed any language semantics that would be useful in that way, however I have noticed a number of these "disciplines" that are helpful at reducing search time, reducing defect count and increasing performance (run-time and programmer).

What exactly are you thinking about?


> Another thing I've noticed is that having well documented interfaces between small components is extremely effective at improving isolation time.

Good point. There aren't languages that do a complete job of absolutely enforcing such patterns of behavior, but there are environments that do a fairly good job of encouraging them.

> Another thing I've noticed is that interfaces with two or fewer arguments (none of which boolean) tend to be used correctly more often than interfaces with a large number of arguments, or booleans.

Goes to the same point. Most of my work with complex stuff was in Smalltalk, and there is practically no enforcement of anything in that language. However, my experience is that it's practical to have codebases that adhere very closely to a number of patterns or coding standards. Also, there were very powerful search facilities in the environment that facilitated powerful deductions about the codebase. Often, work would involve making deductions, figuring out how those could be broken, then quickly searching for these instances. Often enough, the deductions could stand and make debugging and modifications easy.

Your preference in interfaces is also the one preferred by better Smalltalkers. Also, the environment is built up entirely of small components. One path of least resistance is to build new systems along the same lines. (That said, one could still write Fortran and COBOL in Smalltalk, and I had to deal with this professionally.)


I really like your observations, and have noticed pretty much the same thing.

Your point about boolean arguments especially: I can't tell you the number of times I've painstakingly documented the parameter in the /// section of the function (in C#) just to make sure it's clear what passing false actually means.

I should just create an enum { doTheDance, dontDoTheDance } with two values instead. I've often thought about it, but for some reason I've never actually done it.
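For what it's worth, the same idea in F# (the article's language) is a two-case union; the C# version is exactly the two-value enum you describe. Names below are made up:

    type DanceMode =
        | DoTheDance
        | DontDoTheDance

    // the call site now says what it means, unlike `celebrate false`
    let celebrate mode =
        match mode with
        | DoTheDance -> printfn "dancing"
        | DontDoTheDance -> printfn "sitting this one out"

    celebrate DoTheDance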


Just do it!

The next guy will thank you.


I disagree strongly. Good types can serve as very adequate reasoning tools for programs at large. This is the source of claims in Haskell/ML/Etc where "if it compiles it works" is used to guide 90% of even massive refactoring endeavors.

It can be hard to believe how useful good types can be.


I don't understand you.

Do you believe "good types" in a 1000 line program is easier to reason about than a 20 line program?


I don't think you can make comparisons like that and feel good about them.

I certainly believe that there are 20 line programs which are harder to reason about than some 1000 line programs with types. That's a bit silly, though; I'd believe the same even without the "with types" caveat.

So, here's what I believe:

    * Types impose logical structure on programs which is continuously 
      checked, high-level, and very human interpretable

    * All things being equal, shorter programs are easier to comprehend 
      than longer programs

    * If I were to list the things which aid program comprehensibility 
      then types + most of the things on the list in this blog post 
      actually go *far* higher on that list than "short length"
(Edit: I should add, given what came after this point, that "all things being equal" above is, in my eyes, a very difficult hypothesis to satisfy.)


> I don't think you can make comparisons like that and feel good about them.

I think most professional programmers could write a relational database in under 1000 lines. Arthur did SQL92 in 20:

http://web.archive.org/web/20060505221834/http://kx.com/q/e/...

I think this example illustrates the point perfectly: I find it more valuable than anything else I have found to have the entire program all on one screen.

> If I were to list the things which aid program comprehensibility then types + most of the things on the list in this blog post actually go far higher on that list than "short length"

I think there are so many useful and successful applications that "use types" and so many useful and successful applications that do not "use types" that types are probably irrelevant for success.

That many of those useful and successful programs are very large suggests that size has very little (or nothing at all) to do with success.

But comprehensibility?

I cannot imagine what universe would have a large program as being "easier to comprehend" than a short program.


> I find it more valuable than anything else I have found to have the entire program all on one screen.

Then we'll merely have to agree to disagree. I personally think this example is remarkable levels of incomprehensible. I'm not going to claim that my personal opinion has any level of generality, though. More power to you if that page of code is easy to comprehend.

> I think there are so many useful and successful applications that "use types" and so many useful and successful applications that do not "use types" that types are probably irrelevant for success.

The existence of examples both positive and negative is insufficient to demonstrate that some cause has no effect. You might strengthen that to say that in your experience the use of types has zero correlation with "useful and successful". I think with a little qualification there we could again get to a point where we'd merely have to agree to disagree.

It's also vital to point out the selection bias incurred by either of our experiences.

> I cannot imagine what universe would have a large program as being "easier to comprehend" than a short program.

I could try to demonstrate examples of both, but it'd just be my opinion as to that selection. So, instead of trying to be comprehensible, I'll just state for posterity:

Offhand, I believe it would be very easy to write a 1000 line (or even larger) implementation of SQL92 which I would find to be far more comprehensible than the example you've linked.

In particular, I believe in this case that attempting to make that code fit on one page is a significant detriment to its comprehensibility.

And again, I'm not going to try to convince you but merely state that as the fact of my opinion.


> I think most professional programmers could write a relational database in under 1000 lines. Arthur did SQL92 in 20:
> http://web.archive.org/web/20060505221834/http://kx.com/q/e/....

I can make any program only 1 line long if I remove all the newlines. I don't care if a program fits on one screen if it is unreadable. The example you cite is a poor example of a valuable program, as it is deliberately made difficult to reason about.


Can't speak for tel, but I believe that. Types establish the rules of the game, and the compiler backs you up enforcing those rules.

On the other hand, I'm the kind of guy that has read 20 lines of code and missed possible null value bugs. Sucks when you have to tell customers to wait for the next point release before they can use this new feature.


Why would you make a customer wait for a "point release" for a program that is only twenty lines long?

I wonder if you thought I meant "a bug in 20 lines" when I meant a "20 line program".

EDIT: e.g. 20 lines is enough for SQL http://web.archive.org/web/20060505221834/http://kx.com/q/e/...



It's part of kdb (http://kx.com/), a rather fast and powerful database, which is written in a language called k.

Here's an excellent introduction to K: http://archive.vector.org.uk/art10010830

Most kdb users use Q though, which is something else.


How many functions do I have to read simultaneously in those 20 lines? Good use of types can isolate functions from each other.

The typical '1000 line program' could be compacted like your program is and all fit onscreen at the same time, if you really wanted. It's not anywhere close to 50 times as much code.


None of the lines have more than 4 functions on them (that's the parse utilities which has a bunch of short helpers on it), and most have only one function.

Most of the functions are independent, with the exception of the parse utilities, so it's easier to list the exceptions: cre depends on col, w depends on w0 and w1; v and w are used in e (for tokenising); where (wh) is reused by select (sel) and update (upd). c and a are used for the pull parser.


A false dichotomy. How many well-designed, well-typed 1000 line programs can be turned into 20-line programs simply by dropping the typing requirement?


Straw man.

I'm not proposing "dropping the typing requirement".

I think having the program small and fit on the page is more important than types or no types.

To that end: I don't even want to have the argument about types because I think they're so irrelevant.

I'm sure most professional programmers could write an SQL92 implementation in around 1000 lines; Again, types or no types.

Here's Arthur's SQL written in 20 lines (including lexing and parsing):

http://web.archive.org/web/20060505221834/http://kx.com/q/e/...

Here's Steve Apter's relational (not SQL) database in 16:

http://nsl.com/k/t.k


"I don't even want to have the argument about types..."

But that is exactly what you chose to do in your original post.


No it is not.

I'm happy to see an argument that (having or not having types) is more important than a short program.

But I'm not happy having an argument about whether to have a language with types versus one without.

That's what "I think types are irrelevant" means.

It seems like at least some people realise this e.g. https://news.ycombinator.com/item?id=8999077 but if after re-reading my post you disagree, let me know how I could better articulate what I mean.


I should have written 'the post to which I was replying' rather than 'your original post.'

With regard to your larger issue, almost all programs other than exercises are vastly larger than a screenful, so either the way all software is written is completely wrong, or your preference for screen-sized programs is irrelevant to the real problems of writing real-world software. Just as professional mathematicians need to be able to follow proofs that occupy more than one page, professional programmers need to be able to reason about code that occupies more than one page.


> With regard to your larger issue, almost all programs other than exercises are vastly larger than a screenful, so either the way all software is written is completely wrong, or your preference for screen-sized programs is irrelevant to the real problems of writing real-world software.

Agreed.

Arthur wrote an SQL in 20 lines of K, and a dynamic window manager in around 60 of C. His programmer's editor with syntax highlighting easily fits on a page.

kOS's kernel is under 60 lines of C as well (including annotations; paging, filesystem, syscalls, etc).

Where I work, I wrote an RTB service for digital advertising demand-side platforms: it implements HTTP, application protocol decoding, maps and bidding in under 100 lines of C. I fit a video player with VAST+VPAID support in 42 lines of Javascript.

So it follows that how programmers write software is completely wrong.

However the inertia will be hard to overcome.


A few examples, impressive as they are, do not make the case that most of the world's programs can be replaced by programs of no more than one screenful. If you really know how to do this, you could achieve fame and wealth of Edison-like proportions as the person who revolutionized software development. If all you are going to do is brag about it in HN comments, that is not a profession, it is a hobby.


These few examples don't need to make the case that "most of the world's programs can be replaced by programs of no more than one screenful", because again that's your strawman, and not my argument. People pay millions of dollars for kx's database so it's not by any reasonable definition mere "exercises", and it's certainly not "vastly larger than a screenful".

I'm writing this way; I'm finding value writing this way, and at the moment I'm making no other claims other than the shortness of a program is the single best indicator of correctness, performance, and "reasonableness". It's certainly more important than a "types" argument.

While your best counterargument against short programs is inertia, suggesting we shouldn't leave the trees isn't helpful. You should try it. You might like it.


"at the moment I'm making no other claims other than the shortness of a program is the single best indicator of correctness, performance, and 'reasonableness.'"

If you had not strayed from that position, I would not have joined the discussion. I happen to agree that there is an inordinate amount of unnecessary complexity in the present code base, but I think it is an orthogonal issue to that being raised by the original article. That is because it seems to me that the main causes of this introduced complexity are developers' poor understanding of the requirements they are trying to satisfy, together with a trial-and-error approach to programming, rather than the language features covered by the original article.


Depends whether the 20 lines are stack-based assembly or not... Anyway, what is initially written in 1000 lines can very rarely be written in 20.


It appears that you copied the "F# does this" list, not the "How to make a language predictable" list. For instance, #6 on that list is "No nulls allowed" which is stronger than "Nulls are NOT allowed, in general".


thanks :)


> Once created, objects and collections must be immutable.

Collections I understand, but what's the point of immutable objects? If you're eschewing mutability, why have objects at all? The whole point of objects is to encapsulate mutable state, together with methods for mutating that state.

If you said "Once created, data structures must be immutable" I would be more inclined to agree.


>The whole point of objects is to encapsulate mutable state, together with methods for mutating that state.

Citation needed. I use my objects to hold immutable state with methods that do not mutate that state.


What do you gain from using objects in that way, as opposed to immutable structs and functions?


I program in Scala, so structs are not a thing (classes can extend/"inherit" AnyVal to be allocated on the stack, though, I think). Functions are nice, but from a pure syntax standpoint I often prefer methods because it limits the parentheses. It's especially neat when the methods take no arguments, like .toString.

But yeah, otherwise there's no difference, really.


Express your immutable objects with an interface like this

    push : ('a queue, 'a) -> 'a queue
    pop  : 'a queue -> ('a queue, 'a) option
In other words, instead of mandating that state updates in objects occur via global ambient state, simply have objects return a new version of themselves with the suitably updated state. All your standard immutability tricks now apply. Objects maintain their meaning through their opaqueness/type-abstraction.
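A rough sketch of what that might look like (my code and names, F#-flavoured, not from the thread):

    type Queue<'a> = { Front: 'a list; Back: 'a list }

    // push returns a new queue; the original value is untouched
    let push (q: Queue<'a>) (x: 'a) = { q with Back = x :: q.Back }

    // pop returns the remaining queue and the popped element, if any
    let pop (q: Queue<'a>) : (Queue<'a> * 'a) option =
        match q.Front, q.Back with
        | x :: rest, _ -> Some ({ q with Front = rest }, x)
        | [], [] -> None
        | [], back ->
            match List.rev back with
            | x :: rest -> Some ({ Front = rest; Back = [] }, x)
            | [] -> None

    // usage: every update is a fresh value, so the usual immutability tricks apply
    let q0 : Queue<int> = { Front = []; Back = [] }
    let q1 = push (push q0 1) 2
    let popped = pop q1          // Some (queue still holding 2, and the element 1)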


> simply have objects return a new version of themselves with the suitably updated state

That's what Clojure does (and what lots of other functional languages do), only they're not "objects," they're just data structures.

Again, if you're not using mutability or inheritance, then why do you need objects at all? What do objects buy you, compared to data types + functions?

In the absence of those features, isn't an object class really just a module with a data type definition and some functions in it?


It depends on what "object" means to you. Turns out this is very, very complex. A deep answer has a lot to do with laziness, F-algebras, totality, dependent types, yada yada yada.

In particular, the tip of this iceberg is the difficulty of comparing functions for equality.

Does any of this matter? Maybe not. Structs with recursion can get you 90% of what you want (just ask OCaml). One thing you might miss is the ability to do late binding and open recursion—up to you whether or not you care.


> One thing you might miss is the ability to do late binding and open recursion

In the absence of objects, that's what `letrec` is for. Scheme, OCaml, and SML all have it. In Haskell `let` is `letrec` by default. Clojure has `letfn`. C has forward declarations. Without these features you wouldn't be able to write mutually recursive functions.


You can emulate late binding in these languages (though the required types are quite clever) but letrec does not do this directly! In particular, letrec is closed immediately—you cannot extend it later.

In particular, you don't actually need or want letrec to encode open recursion. That's most of the fun :)
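A rough sketch of the standard encoding (my example, not from the thread), in F# for concreteness: the "method" takes the eventual, possibly-overridden version of itself as an argument, and you tie the knot separately.

    // the open-recursive definition: 'self' is whatever the recursion ends up being
    let openFact (self: int -> int) (n: int) =
        if n <= 1 then 1 else n * self (n - 1)

    // closing the recursion as-is gives the ordinary function
    let rec fact n = openFact fact n

    // "late binding": extend the behaviour without ever touching openFact
    let rec tracedFact n =
        printfn "fact %d" n
        openFact tracedFact n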


Ok, so my question was, why you need objects at all if you're deciding that your objects should be immutable, and if I'm understanding you correctly, your response is essentially that you need objects if you want inheritance.

Because the extensible open recursion you're talking about only matters in a situation where you're working with base classes and subclasses having different implementations of the same method name and needing dynamic dispatch to determine which one gets called at runtime.

Clojure's hierarchical multimethod system is an interesting alternative approach to this that I think fulfills the semantics you're talking about. I guess you could argue that this is a sort of immutable object system, it's just not called that. http://clojure.org/runtime_polymorphism


Firstly, "object" as a term is really hard to stick a pin in. It's slippery and hard to talk about, so I'm kind of trying to avoid it.

Secondly, "objects" can imply open-recursion which can be used to implement inheritance. That said, inheritance isn't the only thing open recursion is good for and inheritance can be implemented without open recursion.

"Object"s can also imply codata like structures as compared to data like structures. The distinction is blurry to the point of indistinguishability in essentially all languages, but it's important in theorem proving and math and thus shows up in dependently typed languages.

Thirdly, dynamic dispatch is one way to achieve open recursion but it's not the only way. I don't know if dynamic dispatch is useful for things other than open recursion, so offhand I would say "dynamic dispatch => open recursion is being used" but not the other way around.

Clojure's multimethod system, given that it's an implementation of dynamic dispatch, implies open recursion and thus has at least one factor which some people associate with objects. It's immutable (at least to the degree which Clojure is immutable) so I'm willing to call it an "immutable object system", yes. But with all of the caveats I just listed!


To "what's the point of immutable objects?": runtime polymorphism + maintaining invariants (since immutable, at the time of construction).


Why do you need objects to get runtime polymorphism?

Here's a counterexample of runtime polymorphism without objects: http://clojure.org/runtime_polymorphism


I don't think 2 is especially compelling unless you've also bought into 5. For mutable objects, object identity is very important, and a default comparison that obscures it is much less appealing.

I'm not arguing against those two conclusions as a pair, just with the order in which the article presents them.


The use of "reasonable" here is, I'd argue, unreasonable and done for the purpose of creating a clickbait title.

They mean 'can we logically reason about programs written in this language' rather than 'is the language a good choice' for solving problems.


> They mean 'can we logically reason about programs written in this language' rather than 'is the language a good choice' for solving problems.

At the risk of baiting flame, I find this distinction oddly amusing. If you can't logically reason about programs written in a language, how is it ever a good choice to use?


> If you can't logically reason about programs written in a language, how is it ever a good choice to use?

You can never perfectly logically reason about any program – your ability to reason about a given program varies on a scale of 'easy to reason about' to 'difficult to reason about'.

While it's not always the case, it's easy to see that in some situations the cost of making a language easier to reason about can outweigh the benefits. The 'goodness' or otherwise of a language must factor in the difficulty of reasoning about it, but that itself is not a sufficient condition to describe its quality.


You can logically reason about any language. It's just a matter of ease and the amount of code you have to see to be able to get a full picture. I mean, ultimately I could follow the logic of the program through the compiler/interpreter all the way down to the machine level and reason about the specific instructions executed on the processor. And when you have hard performance requirements you sometimes have to get that low. So by that definition of reasonableness all languages are reasonable. But of course he is making a pun and implying the more typical meaning of (un)reasonable, using it as a slight to languages that don't meet his criteria.

I don't actually disagree with most of his guidelines, but I really don't like his use of that term. There are many situations where his preferences are actually unreasonable and the opposite of what he has concluded is appropriate, mutable state being an easy example. It should not be an "I'm right and you're unreasonable" type of discussion.


While I sort of see what you're getting at, it's not entirely clear what the difference between these 2 is. How is the former (a 'reasonable' language) all that different from the latter (a 'useful' language)?

While reasonableness and usefulness are not the same thing, it does seem a language needs to be both. Otherwise you could end up with a language that is both useful and unreasonable, which would be maddening.


To a lot of programmers who maintain code, those two are the same.


I dislike javascript as much as the next guy, but the example in the article is absurd:

    function DoSomething (foo) { x = false }

    var x = 2;
    DoSomething(x);
    var y = x - 1;

If you change "foo" to "x" then this works as expected. Who writes code that looks like this anyway? Decent JS developers know that declarations elevate themselves, and they also would not write code that looks like this.

This is like taking a couple of words out of the middle of a sentence and complaining that their meaning changes versus that of the whole sentence.


People don't write code like this on purpose (except as an example). The point, I think, is that it happens by accident, and usually buried within a much bigger body of code.


This example, though perhaps not the best, actually makes it pretty obvious that if you have immutable objects and variables that cannot change their type, you already know that y will always be 1, since it is impossible for DoSomething to change what x is bound to. So the example does illustrate the author's point about being able to understand code without having to look at implementation details.
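For contrast, roughly the same code in a language with those properties (an F# sketch of mine, not from the article):

    // x is immutable and doSomething cannot rebind or retype it,
    // so you know y = 1 without reading doSomething's body
    let doSomething foo = ()      // assigning to x in here would not even compile
    let x = 2
    doSomething x
    let y = x - 1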


That can easily be explained by the following example, which is much more sensible:

    var a_number = 10;
    a_number = { blah: 10 };
    var a_plus_two = a_number + 4;

Better yet you can simply say "Javascript does not have strict typing" and point people at the wiki article on strict typing. This is clear and concise as an explanation.

Showing confusing examples and saying "oh look this is horrible types can change" does not make the point stronger.

The article uses a whole lot of words and examples to point out problems that professional JS developers are aware of and work hard to avoid.

It also implies that such languages without strict typing are "unreasonable". It's not unreasonable. It's that way on purpose.


I've accidentally done this in Python, though I didn't know functions could do that. Wasn't fun debugging. It's like "We've made this super-easy language where you don't need a main function, but if you don't have one all kinds of unintentional shit will happen to your code"


What is it with functional cheerleaders and their inflammatory language and cherry picked arbitrary criteria?


I'm looking forward to the day when, instead of endlessly comparing languages against each other, everyone just shuts up and builds stuff, so that the merit of your language choice can be judged by what can be built with it.


Before they shut up and build something they need to go pick their tools. This is about picking the tools.

If we look at tools used in practice, we should be picking C, C++, Java, C#, VB, Python. If we look at functional languages we should be picking -- not sure, Erlang perhaps.


The way to know how good Erlang is, is to start writing an MMO in a different language, trying to fit as many concurrent users on the same hefty multi-core cloud instance in the same shard. Then, after you've come up with about a dozen clever tricks and made several key insights, read up on Erlang and discover that it already takes most of that into account. (I did this with Go and plan on learning Erlang now.)


Erlang, like Ada, is a very serious language for very serious applications. People use those languages for all kinds of things, but they belong to the class of critical application languages and therefore they're both kinda weird.


So are you going to pick F# now? None of the arguments in that post have convinced me to switch.


I might. I really like F#, but I was hesitant about the .NET and Microsoft infrastructure. There was minimal interaction with it on the business side (all Linux). Only recently have I started playing with Mono. Now that .NET is open source, I might give it a second look!


Ah but you see your reasons are entirely pragmatic. You're worried about the ecosystem and vendor lock-in. None of the arguments in the post even come close to mentioning those things.


Ok you are right, I see your point.

I guess the assumption here is "all other things being equal", which is often not the case. There are large teams, difficulty in finding libraries, legacy code, and performance/testing/deployability considerations.


The only benefit of Erlang is its built-in actor model. Other than that, being dynamic is killing it as a general-purpose language.


Yeah, I hear what you are saying. Some of the comparisons seem like simple "mine is better than yours" childishness and elitism.

However, I don't think all of the comparisons are without value, and I would not want all of the discussion to go away. But a little more emphasis on "here's this useful thing I built with language 'X'" would certainly be welcome.


Yup. That would be much better than "look at this hammer: it does a hammer's job, but it's better because the handle is curved slightly differently." I mean, that's how I see these discussions. Tools are for doing and building.


There are more factors than 'what can be built' with a language. A more popular language will have more projects written in it, and popularity is not necessarily based on merit (think about the promotion Java received because of commercial reasons).


Java is a perfectly adequate language. I'm not saying network effects don't matter. I'm saying the focus should be on building things instead of some other metric like language purity.


Java is an acceptable programming language, but far from perfectly adequate. Its inadequacies are very well documented.


> Its inadequacies are very well documented.

Which is a very useful trait for a tool to have.


Yeah, it's not like you're forced to use JavaScript, right?

Complaining works cause it makes the shit more visible so they can fix it in later versions of the standard.


You are indeed not forced to use JavaScript. Currently I can count PureScript, Elm, Scalajs, GHCJS, Opal, TypeScript, Amber Smalltalk, and I don't know how many other languages that compile to JavaScript but are not JavaScript. So who's forcing you?


Even if you compile to JavaScript from another language, you'll end up eventually debugging the compiled JavaScript in some cases. Painful times debugging GWT code compiled to JavaScript suggest that the development tools aren't sufficient to avoid the need to do so.

In comparison if you write Objective C for iOS, you can debug the Objective C, as opposed to needing to debug in the compiled assembler.


I've never had to debug actual JavaScript whenever I've used TypeScript. Source maps are pretty nice and both Firefox and Chrome developer console are a joy to use when debugging. There have been plenty of times when dropping down to the assembly level and looking at obj dumps has been the right way to debug a problem so abstractions leak no matter where you are in the stack.


Your employer or your team can force you, in fact it's pretty common.

I'd rather use CoffeeScript instead of JavaScript, and I've forced teams to use CoffeeScript, but the fact that all browsers use JavaScript and everyone "knows" JavaScript makes it easier for teams to stick with it.

You can do it sneakily but it could be a problem when you need to follow certain code conventions established by the team.


So is your employer currently forcing you? If you don't enjoy the experience so much then it's not like you don't have options. You could literally have another front-end job tomorrow working with your compile-to-js language of choice if you wanted to given the state of the developer market.


I'm just explaining the reason why people complain about that language, which YOU can't seem to understand. Besides, the job may pay really well and you'd like to keep it; not everyone is like you, man, and you're not right all the time...

Chill, you don't have to be a confrontational prick every single time of the day. Then when people call us jerks we don't understand...


I'm still missing it a little I think: the article says that if you remove a bunch of flexibility, like mutable properties, then it's harder to make mistakes and your code is more predictable, right? But like the author mentioned, if you follow good patterns and things like decent naming conventions, and moreover know the language well enough to not make mistakes around things like the purpose of .Equals, then you won't have problems and won't be sacrificing anything either... right?


You're absolutely right. You can indeed program in a "type safe way" in a "non type safe" language. However, if you are going to do that, you may as well have some compiler support too.


I like the article and think it makes some good points, but I can't help but think the author is cheating a bit. If you really want to show the extent to which a language can be inherently reasoned about, it seems like a sleight of hand to provide hints in the names.

Take a look at this snippet from example 6:

    // create a repository
    var repo = new CustomerRepository();

    // find a customer by id
    var customer = repo.GetById(42);
The fault is "I can't tell by looking at this code whether a null is returned or not."

The improvement is this (in a language without nulls or exceptions):

    // create a repository
    var repo = new CustomerRepository();

    // find a customer by id and
    // return a CustomerOrError result
    var customerOrError = repo.GetById(42);
Aside from the comments and a change to the variable name, isn't this code identical? The fact that subsequent code must process the result specially if the input (42) is garbage is true of any language, whether or not null is allowed. I still can't tell "by looking at it" what is happening.

In fact, I would reason differently about this code if the comments were removed and the names changed:

    var a = new B();
    var c = d.E(42);
In this context, it seems the real error is that an invalid parameter value is passed, so I'd see if I could fix that first, possibly eliminating the need for any error handling code. That approach might be common for a functional mindset, but not exclusively so.


If you ignore the clickbait use of the word 'reasonable', then there are some valid points made here. The question is – what's the benefit?

There are definite benefits to using a language that it's easy to reason about. In most cases, there's also a cost. That's fine – it means we've got the ability to trade off one against the other as the situation requires.

I'm unlikely to write tiny, single-use scripts in a strict, statically-typed language – mostly because the overhead isn't worth it. Similarly, systems that require higher levels of correctness can benefit from the guarantees that are offered by stricter languages.

We have to bear in mind that the entire field of programming languages is still very much evolving, as we gain experience of what works and what doesn't. Type inference, for example, can drastically reduce the overhead of type checking. Or option types, which eliminate a class of errors by providing support for common patterns. No doubt as the field evolves, we'll see that languages emerge which deal with the pain points of both 'reasonable' and 'unreasonable' languages.


If the past 30 years of coding tells us something, one of those things should be that the line between statically typed and dynamic languages is blurring. Boundaries between scripts, the application, the programming language, and the OS are blurring as well. Future languages should have highly flexible deployment, and be able to "feel" like both a "scripting" language and an "enterprise" language.

There is a big problem with this, however. The entrenched infrastructure and toolset reinforce the "separate" mindset. A useful analogy with which to think about this: The level of granularity we inherited for computer security from the 60's and 70's was to protect one user from another on the same machine. Now, we know we need much finer levels of control, which is why mainstream OS now have "capabilities" or their functional equivalent. However, this took a long time and met much resistance, because the industry as a whole couldn't mentally move beyond a security model developed for timeshare users on a 60's "mainframe."


> We have to bear in mind that the entire field of programming languages is still very much evolving, as we gain experience of what works and what doesn't. Type inference, for example, can drastically reduce the overhead of type checking.

I'm sorry, it's just too much for me. Yes, PL theory advances and yes, there are many new, interesting ideas being presented each year. That's true.

But your example, "type inference", dates back to the sixties. Similarly for "option types". Using these as an example of "evolution of entire field of programming languages" would suggest this evolution happens at a glacial pace. Which is indeed true for the so-called mainstream languages, but "the entire field" is much, much larger than those, moves far quicker and comes up with much more interesting ideas every year.


Yes, it does. But you're totally ignoring the practical ramifications of this – the fact that these ideas are only now making it into mainstream programming languages is a strong indication that the entire endeavour is more complex than you're admitting.

We don't just pull language features out of our collective arses and start using them. Evolution does happen at a glacial pace – we stick features into minority-appeal languages; they kick around a bit; people rediscover these features later, and maybe they eventually end up in popular languages. But there's a huge amount more involved in that process than simply coming up with the feature - testing, maintainability, long-term implications, unforeseen complexity, programmer acceptance, performance constraints… the list goes on.


> testing, maintainability, long-term implications, unforeseen complexity, programmer acceptance, performance constraints… the list goes on.

No, it's actually just "programmer acceptance" and secondary reasons like amount of money poured into marketing or language being tied to an OS. I'm not very pleased with this. In this talk: https://thestrangeloop.com/sessions/the-sociology-of-program... it's said that probability of language being used is "potential gains/pain of adoption" ratio. That's true. Language designers, at least the ones who want their languages to succeed, constrain themselves to make it as painless as possible for "normal programmers" to pick up. Nobody seems to notice that there's another option of making denominator smaller: all you need is to become better at adopting new technologies. But this somehow doesn't work, because... Probably because of long-term implications or performance constraints, right.


Great article. Couldn't agree more. These are all reasons why I don't understand why there is this rush to embrace Javascript on the server side, the one place where you are actually free to use a "reasonable" language.


1. Runs on both front- and back-end.

2. Loads of people are familiar with it.

3. Isn't that bad if you know what you're doing with it.

There's a cost to all of this, and there are better options. But it's really not hard to see why Javascript is gaining traction on the server side, as rubbish as it is.


Half the app runs on the server (maybe more). It's helpful to use a familiar toolchain across the whole app.


Maybe. It's a high price to pay for that, though. And with the rise of JS-as-target tools like asm.js and WebSharper, that toolchain doesn't necessarily have to be Javascript.


> 6. Nulls are NOT allowed, in general.

I really want to like F#, but that's a really big "in general". I love the idea of option types, but I often hear with regards to F#, "forget about NullReferenceExceptions!". Yeah, that's nice, except I sometimes use strings in my code...

What I would've liked is to have F# strictly forbid nullable references, and treat all reference types in interop situations as (implicitly) options. Not sure if that would be possible, haven't given it too much thought and I'm sure there are tons of tricky cases that haven't occurred to me, but I haven't really seen this discussed anywhere.
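In the meantime the usual workaround is to scrub nulls by hand at the interop boundary; a sketch of mine (the helper name is made up, not a library function):

    // turn a possibly-null reference coming from .NET into an option
    let ofNullable (x: 'T) : 'T option =
        match box x with
        | null -> None
        | _ -> Some x

    // GetEnvironmentVariable returns null when the variable is unset
    let home : string option =
        ofNullable (System.Environment.GetEnvironmentVariable "HOME")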


Swift passes the test in my view provided you can cope with using the word struct instead of object. (Objects based on classes are reference types in Swift but structs, enums and the default collections are value types).

You can write pretty functional code if you want but you also have the option to write more mutable/side effecty/stateful object code if you want.


So does Rust, I think. 2014 was a good year for programming languages.


> 2014 was a good year for programming languages

Yeah... another batch of innovative, interesting languages died because "it's not practical", another pile of language features didn't manage to get implemented because "but look at syntax, it's ugly", another bunch of languages took some decades old features, dumbed them down as much as possible and then some more (because "it would be to hard to use otherwise") and marketed them as ground-breaking improvements to the art of programming.

Very good year for PLs, indeed.


Have you got any concrete examples, or are you just going to complain?

The entire field of programming language design is more complex than you seem to be admitting.


Nice syntax and easily explained features are extremely important parts of programming language design. A really fancy new feature whose implementation compromises these qualities is usually a step backwards for the language as a whole. Look at C++.


Last I checked you can't quite get away with using struct only in Swift because you need objects for indirection in recursive structs.

But other than that I really, really hope that struct-only coding takes over in Swift given time... perhaps lots of time.


Fair point, the need to Box things is a bit of a nuisance. Also staying pure and immutable gets tricky as soon as you hit the Cocoa APIs but it is definitely a step in the correct direction and I still like it.


> How to make your language predictable:
> 1. Variables should not be allowed to change their type.

Completely disagree.


The author presented an argument for this proposition. Do you have an argument against it?


Even JavaScript, which is much maligned for its poor implementation of dynamic typing, has clear rules for type coercion.

In the author's example in support of "Variables should not be allowed to change their type," the variable "x" is subject to coercion, but the real problem was poor scoping by using the global value instead of the parameter.

This would be the same scoping problem, even if the variable type was not changed. Any non-deterministic function that modifies existing state can be said to produce unexpected/unpredictable/unreasonable results. In this case, the unpredictability was not due to a variable changing type.
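For what it's worth, the same scoping problem is easy to reproduce without any type change at all; a small F# sketch of mine, not from the article:

    // x keeps its type throughout, but reasoning about it is gone all the same
    let mutable x = 2
    let doSomething foo = x <- 0
    doSomething x
    let y = x - 1      // -1, not 1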



