In this case I prefer logical reasoning to empirical evidence. Consider a function that takes a list of things and returns the last one. Let's write that function's type signature in Haskell.
last :: [a] -> a
Without looking at an implementation, let's think about how many ways this function could possibly go wrong. It could return the first element of the list instead of the last one. It could return the second-to-last element of the list. More generally, it could return any one of the elements of the list that was passed in. But that's it (other than the empty-list case, which for simplicity we can ignore because it's a condition we can statically detect). There are no other possibilities, period.
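For concreteness, here is a minimal sketch of one possible implementation (the exact definition doesn't matter for the argument; parametricity is what limits the failure modes):

import Prelude hiding (last)

-- Any implementation of [a] -> a can only return one of the
-- input elements (or fail on the empty list).
last :: [a] -> a
last [x]    = x
last (_:xs) = last xs
last []     = error "last: empty list"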
Now think about implementing this function in any other language. What about Java?
public Object last(Object[] objs)
This function can return any of the elements of the incoming array. It could also return null. It could also return the int 42, or the string "hello". It could delete files on your hard drive, open a network connection and phone home to the NSA. I could go on and on.
The point here is that you simply can't say as much about what this function could be doing as we could say about the Haskell version. This is the case for every mainstream language in wide use today. Some languages allow you to say a little more than others, but none of them come close to what Haskell gives you. Because of this, I don't really feel a pressing need for empirical evidence. Logical reasoning lets me make that claim quite confidently.
That's true! You have proved the original (deductive) claim "if it compiles, it works" for that one specific example, but that does not hold in general.
In general you may prefer the alternative (inductive) claim "more likely to be correct", but that claim requires empirical evidence.
The inductive claim "more likely to be correct" is strongly supported by the evidence gained from enumerating the number of ways the functions could go wrong. Now if we want to expand our domain out from just that function to programs in general, we need an idea of how applicable those ideas are to larger programs. The biggest source of reducing error cases in my example was purity.
Some parts of programs (namely the side effecting parts) can't be expressed as pure functions. So the next piece of evidence we need is an estimate of the percentage of large programs that can be expressed as pure functions. From my experience writing Haskell full-time for the last five years I can tell you that it is very substantial. Even in very impure code you can often factor out lots of small pure operations. I feel very safe in claiming that at least a third of Haskell code can be written with pure functions, and I think it's likely greater than half. (It could be a lot higher, but I'm trying to keep the confidence level fairly high.) You don't have to take my word for this. You can go off and browse through millions of lines of Haskell on [Hackage](http://hackage.haskell.org/packages/) and see for yourself. It probably wouldn't be too difficult to write an automated script to do this.
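To make this concrete, here is a hypothetical sketch (the file name and task are made up) of the usual shape: a pure core doing the interesting work, wrapped in a thin impure shell.

import Data.List (group, sort)

-- Pure core: all the interesting logic, testable in isolation.
countWords :: String -> [(String, Int)]
countWords = map (\ws -> (head ws, length ws)) . group . sort . words

-- Impure shell: the only IO is reading a file and printing.
main :: IO ()
main = do
  contents <- readFile "input.txt"  -- hypothetical input file
  mapM_ print (countWords contents)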
Now we've seen that pure functions have a small fraction of the error conditions that impure functions have. And we've seen that maybe half of all the code we write can be expressed that way. That's a pretty dramatic reduction in the number of potential error conditions we might encounter for even complex programs. That is why I think the claim "more likely to be correct" is strongly supported by evidence.
You are piecing a bunch of empirical facts together using reasoning, but there are lots of implicit assumptions in your reasoning.
One assumption is that reducing error cases reduces errors. Maybe the other errors increase in prevalence when you reduce error cases?
You say that even in impure code you can factor out pure parts. You assume this refactoring doesn't introduce bugs.
You talk about the fraction of code that is pure in Haskell repos. But we don't know how the amount of impure code there compares to the amount of impure code in other languages. Maybe it is the same, and Haskellers just have some pure code in addition?
Finally, we are talking about the bug-proneness of Haskell, not just an idealized FP language. Maybe something about Haskell causes bugs. There are many candidates. Poor support for debugging was mentioned in another comment. Maybe the libraries are less mature. Maybe there's less IDE support. Maybe something about the syntax causes bugs. Etc., etc.
For this reason I think it would be easier to just experimentally verify the hypothesis directly: Does Haskell cause fewer bugs?
> but there are lots of implicit assumptions in your reasoning.
Yes, there are assumptions, but now we're getting somewhere.
> One assumption is that reducing error cases, reduces errors. Maybe the other errors increase in prevalence when you reduce error cases?
I think this is pretty unlikely. If we switch from C to a programming language that makes buffer overflows impossible, we don't expect to see a corresponding increase in, for example, off-by-one errors. After working with a language like Haskell for a while, you start to see that when the compiler tells you about a missing case for Nothing, you don't just sweep it under the rug. You stop and think about that case. I think it's reasonable to assume that when a programmer stops and thinks about a case, he's more likely to get it right. Errors often happen because the programmer neglected to think about a case entirely.
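A small illustration of that dynamic (the function is hypothetical; the warning flag is standard GHC):

{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Writing only the Just case makes GHC warn, at compile time, that
-- Nothing is unmatched, which is what prompts you to stop and think:
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n
describe Nothing  = "got nothing"  -- the case the compiler made us consider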
> You say that even in impure code you can factor out pure parts. You assume this refactoring doesn't introduce bugs.
I'm not talking about refactoring here. I'm talking about writing your code that way from the start, so regressions aren't an issue. As for the question of how many bugs you might write to begin with, see the previous point.
> You talk about the fraction of code that is pure in haskell repos. But we don't know how the amount of impure code compares to amount of impure code in other languages. Maybe it is the same? Just haskellers have some pure code in addition?
In other languages ALL code is impure. There are no other pure languages in mainstream use today. So then the question becomes whether the pure code written in Haskell performs any useful function as opposed to being just meaningless fluff. It's obvious that this code performs tons of useful operations and every time it is used, we are avoiding the need to have to solve the same problem again. It's very clear what problems are solved by that code, and it's very clear that people are able to reuse it to avoid the work. Therefore, it's absolutely not simply additional cruft.
> Finally, we are talking about the bug proneness of haskell, not just an idealized fp-language. Maybe something with haskell causes bugs.
None of the points you mention cause bugs. Lack of debugging support has nothing to do with causing a bug; it just means that you'll have to go about fixing your bug differently. You can look for counter-arguments all day long, but at some point you're just grasping at straws.
> For this reason I think it would be easier to just experimentally verify the hypothesis directly: Does Haskell cause fewer bugs?
It's not easier. Building software is hard and takes a long time. Furthermore, constructing a scientifically valid experiment is even more costly. Most people want to be productive, so they're not going to go to that effort. There is huge variation between people, so these kinds of experiments have to be undertaken like medical studies...i.e. with large n. If you're interested in building complex, real-world systems, you're simply not going to get a large n. Also, Haskell has only become commercially viable in the last 5 or so years, so there hasn't yet been enough time/effort spent to perform these experiments.
But even with all these caveats, we do have some experimental evidence. There was a CUFP report from a company that rewrote a substantial Groovy app in Haskell [1]. And the result is that yes, Haskell did indeed result in fewer bugs.
Groovy was originally designed as a scripting language intended for manipulating and testing classes written in a statically-typed language, Java, in which substantial apps should be written. Only later did new managers of Groovy start promoting it for building systems even though it wasn't suitable for that. A better test would be rewriting a substantial Java app in Haskell.
Ok, so 50%, 95%--whatever your number is--of the code in Haskell we have established is susceptible to dramatically fewer bugs than any other language out there. How can you possibly claim it doesn't matter? Just because you can construct a situation where it seems to not matter doesn't mean that situation is representative.
And oh by the way, we can easily prevent your situation in Haskell as well.
safeRm :: SafeFilePath -> IO ()
Then we create the SafeFilePath data type (it can even be a newtype, which means it will have zero performance overhead) and we can write a smart constructor that makes it impossible to ever have dollar signs, stars, or anything else that might cause this kind of bug.
mkSafeRm :: String -> Maybe SafeFilePath
Then you hide that stuff away in a module and only expose a safe interface. Now, BOOM, safeRm cannot possibly ever have the bug you described. This is made possible by purity, strong static types, and the module system. Some mainstream languages allow you to partly achieve this, but it usually incurs lots of boilerplate and the lack of immutability means that there will be more holes in whatever solution you can come up with.
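A minimal sketch of what that module might look like (the exact set of rejected characters is illustrative):

module SafeFilePath (SafeFilePath, mkSafeRm, unSafeFilePath) where

-- The data constructor is not exported, so the only way to obtain
-- a SafeFilePath is through the validating smart constructor below.
newtype SafeFilePath = SafeFilePath { unSafeFilePath :: String }

mkSafeRm :: String -> Maybe SafeFilePath
mkSafeRm s
  | null s               = Nothing
  | any (`elem` "$*?") s = Nothing  -- illustrative blacklist of risky characters
  | otherwise            = Just (SafeFilePath s)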
> 95% [...] of the code in Haskell we have established is susceptible to dramatically fewer bugs than any other language out there.
No! Haskell has about the same proportion of LOC performing IO as any language. In Haskell, it may be easier to find where the IO code is, using the type system, but that doesn't make any of the code correct.
> Then you hide that stuff away in a module and only expose a safe interface.
Now you're not even listening to what I said. In my topmost post in this thread I showed how pure code in Haskell is susceptible to dramatically fewer potential bugs than equivalent code in any other mainstream language.
I said:
> There are no other possibilities, period.
And you agreed:
> That's true!
Most of that bug reduction was made possible by Haskell's purity/immutability. Now you're completely ignoring that point that we already established.
> In Haskell, it may be easier to find where the IO code is, using the type system, but that doesn't make any of the code correct.
No, it doesn't deductively "make any of the code more correct". It makes the code inductively "more likely to be correct". You asked for empirical evidence for the inductive claim and I gave it to you.
> You could do that in any language.
No, you could not. Other languages have module systems with hiding, but they do not have purity, the ability to categorically eliminate things like null errors, etc. You're zooming in to one feature, but it's not just one feature that enables all this. It's the combination of several.
I did agree that you can prove some properties about your code. Specifically, you can prove which parts of the code do not perform IO, and in that limited sense "if it compiles, it works" for that example, but that does not hold in general.
I did not agree that your code is more likely to be correct in general. The non-IO code may still be wrong (as you said then: it could return the first element of the list instead of the last one) and the IO code may also be wrong. And you have no evidence that any of that code is more likely to be correct.
The Haskell compiler can enforce some organization on your code (IO code goes here, non-IO code goes there) but the bugs could still be anywhere. All you have done is to shift the deckchairs on the Titanic!
> Most of that bug reduction was made possible by Haskell's purity/immutability.
You have no evidence of any actual bug reduction.
> You asked for empirical evidence for the inductive claim and I gave it to you.
Working deductively from the definition of Haskell is not empirical evidence for the claim that Haskell programs are more likely to be correct.
> And you have no evidence that any of that code is more likely to be correct.
> You have no evidence of any actual bug reduction.
> Working deductively from the definition of Haskell is not empirical evidence for the claim that Haskell programs are more likely to be correct.
My deductive reasoning is not the evidence I'm citing here. We counted the total number of ways that a function could go wrong and saw that in pure Haskell code, the number is drastically reduced. We know that the bug types we eliminated actually occur in practice, so under the reasonable assumption that the distribution of bug types is relatively constant, those numbers do provide empirical evidence (observational, not experimental) that Haskell programs are more likely to be correct.
Yes, pure code can still be wrong, but it can be wrong in a lot less ways than the impure code that you're stuck with in other languages. That code is still doing useful work that is also bug-prone in other languages. Therefore we can be reasonably confident that it will reduce bugs.
> My deductive reasoning is not the evidence I'm citing here.
You have no evidence, only (invalid) deductions.
> We counted the total number of ways that a function could go wrong
If you did this it would be deduction.
> and saw that in pure Haskell code, the number is drastically reduced.
No.
Say you have an IO function with two parts, a non-IO part to compute a directory name using some input, and an IO part to remove that directory:
import System.Directory (removeDirectoryRecursive)

rmrf_steamroot :: String -> IO ()
rmrf_steamroot some_input = do
  -- some code to compute a directory name using some_input (illustrative body)
  let directory_name = some_input ++ "/steamroot"
  -- some code to remove that directory
  removeDirectoryRecursive directory_name
Now, the first part does not perform IO, so let's extract that into a non-IO function:
import System.Directory (removeDirectoryRecursive)

compute_directory_name :: String -> FilePath
compute_directory_name some_input =
  -- some code to compute a directory name using some_input (illustrative body)
  some_input ++ "/steamroot"

rmrf_steamroot :: String -> IO ()
rmrf_steamroot some_input = do
  let directory_name = compute_directory_name some_input
  -- some code to remove that directory
  removeDirectoryRecursive directory_name
How many bugs were removed by doing this? None. It's the same code, just moved about a bit.
Sure, the code in the non-IO function "compute_directory_name" does not perform IO (and therefore does not have IO bugs) but that code did not perform IO (and therefore did not have IO bugs) when it was in the original IO function "rmrf_steamroot"! So there is no bug reduction.
> those numbers do provide empirical evidence (observational, not experimental)
That is not empirical evidence. Perhaps it may help you to read this introduction to the topic:
https://en.wikipedia.org/wiki/Empirical_evidence
(And where it says "observation or experimentation" it may help if you follow those links.)
> If you absolutely insist on experimental evidence, we have that too
As you said earlier, they "rewrote a substantial groovy app in Haskell". There are too many confounding variables there.
We're not even arguing about the same thing. I wasn't talking about the rm function, I was talking about the "last" function in my original comment.
> If you did this it would be deduction.
I did enumerate every possible way that function can go wrong in Haskell. It is precisely n-1 where n is the length of the list (we can ignore the empty list case without loss of generality because it is easy to handle that case statically). In every other language in mainstream use today, the number of ways it could go wrong is infinite. And we actually do see those happen sometimes in practice.
The analysis in the above paragraph is the "active acquisition of information from a primary source", i.e. observational evidence for the inductive argument that code written in Haskell is "more likely to be correct".
> As you said earlier, they "rewrote a substantial groovy app in Haskell". There are too many confounding variables there.
I'm keenly aware of the confounding variables. But the fact is, you're never going to find a significant study that doesn't have a bunch of confounding variables. If that's your criterion for making a decision in this domain, then you won't be accomplishing much. In fact, I doubt we even have strong experimental evidence that C is more productive than assembly language. If you have different people write the different languages, the skill of the people is a confounding variable. If you have the same person write them all, then their knowledge of the languages and their mounting domain experience are confounding variables.
So we take whatever evidence we can get. If my Bayesian prior was that all languages are equally likely to result in correct programs, then this evidence tips the scale slightly in Haskell's favor. We can't take it to the bank yet, but we've only looked at a couple of pieces of evidence. (That's not the only evidence out there, but I'm constraining the discussion to this because of the bandwidth of this communication channel.) Point is, I think we're at the point where savvy, forward-looking people should at least take note of Haskell's potential benefits. However much we know now, we will continue to learn more as we observe more and more people using Haskell to build real things.
> I was talking about the "last" function in my original comment.
Then let's say that "some_input" is a list and the non-IO function "compute_directory_name" should return the last element of that list. If that code were instead a part of the IO function "rmrf_steamroot" then the code would have just as many bugs. Conversely, extracting code from an IO function into a non-IO function proves only that the code did not perform IO (and therefore did not have IO bugs) in the first place. There is no bug reduction.
> I did enumerate every possible way that function can go wrong in Haskell.
This is deduction.
> "active acquisition of information from a primary source"
Did you read beyond the first sentence of that article? "In living beings, observation employs the senses." Which senses did you use to deduce the number of ways a function can return an element of a list?
> i.e. observational evidence
No, a mathematical "observation" is not an empirical observation.
It's more like "if the code were written like Haskell programmers write programs, taking full advantage of static types to encode information about function return values, then it would be correct. Forgetting to handle the 'null' option would be detected at compile time."
There is a big difference: "set -u" won't reject incorrect bash programs, it will merely stop some kinds of incorrect behaviors by stopping the program and preventing it from causing greater harm. However, the program won't have completed its intended task, therefore remaining incorrect.
In contrast, using Maybe to indicate "possibly no value" in Haskell (as it is standard practice; in fact there are no nulls in the language) prevents you from even writing some kinds of incorrect programs in the first place. In this sense, a program written in Haskell is a lot more likely to be correct than one written in bash, because the compiler will force you to consider cases you may have missed in a less safe language (like bash).
Of course, neither language guarantees correctness. It's just that one is safer than the other by design.
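A small base-library illustration (portFor and the 8080 default are hypothetical): the Prelude's lookup returns a Maybe, so the type-checker refuses to let the "no value" case slip through unhandled.

-- lookup :: Eq a => a -> [(a, b)] -> Maybe b  (from the Prelude)
portFor :: [(String, Int)] -> Int
portFor cfg = case lookup "port" cfg of
  Just p  -> p
  Nothing -> 8080  -- an explicit, consciously chosen default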
> No, but "set -u" does help the bash programmer to reject incorrect bash programs and to put "test" where necessary, just as the Haskell compiler helps the Haskell programmer.
> Perhaps the Haskell compiler helps more. Perhaps Haskell programs are more likely to be correct. Perhaps. There is only one way to find out.
I think you're being disingenuous. Haskell is designed to be safer (i.e. able to reject more incorrect programs) than bash. There is no "perhaps" here -- I've shown how the compiler helps more. In your example, bash simply fails to reject an incorrect program. The program will NOT run as intended, but instead will abort when it reaches an inconsistent state (such as attempting to access an uninstantiated variable). Do you see the difference between finding out at run-time that your program misbehaves, as opposed to entirely preventing the misbehavior at compile time? You simply cannot write a Haskell program that will attempt to reference an undeclared variable. That's a whole failure mode entirely gone! You don't even need to test for this, because Haskell is comparatively safer than bash.
The Haskell programmer may use Maybe, but there is also a cost to doing that, and overall that approach may (or may not) help more in writing correct programs than using bash with a test suite and "set -u".
> The rest of your comment reads like a confused mess to me.
You have two options: 1. you can make deductive claims like "if it compiles then it works" which require proof; or 2. you can make inductive claims like "more likely to be correct (all else being equal)" which require empirical evidence.
You appear to be offering deductive arguments for an inductive claim.
> If the Haskell programmer uses Maybe, but there is also a cost to doing that
No, no ifs about it. Haskell uses Maybe to denote the absence of a value; this is a fact. There is no other possibility because the language doesn't have null values. If there is a cost to using Maybe, it's definitely not paid in safety/correctness, which is what we're discussing here.
You continue to write "may or may not" but I've shown you how the approach is better than bash. You can even forget the Maybe monad if you're less experienced with it, and instead focus on one simple fact: you can write and run a bash program that aborts after attempting to access an undeclared variable, but you cannot write and run such a program in Haskell because it's impossible. Haskell is safer.
Deductive/inductive is neither here nor there. I didn't make the ridiculous claim that "if it compiles then it works", and nobody else here did either. The claim I did make is that entire failure modes which are present in bash aren't possible in Haskell; I leave to you as an exercise to consider whether there exist failure modes in Haskell which are not in bash.
> Haskell uses Maybe to denote the absence of a value
A Haskell programmer could use an empty string for that, or given a Maybe could do this:
fromMaybe ""
But no true Haskell programmer would do that.
> you can write and run a bash program that aborts after attempting to access an undeclared variable
Yes, and perhaps the bash programmer could likely eliminate that fault through testing, perhaps about as likely as the Haskell programmer could get the Haskell compiler to accept a program at all!
> The claim I did make is that entire failure modes which are present in bash aren't possible in Haskell
And therefore Haskell programmers are more likely to write correct programs?
> Yes, and perhaps the bash programmer could likely eliminate that fault through testing, perhaps about as likely as the Haskell programmer could get the Haskell compiler to accept a program at all!
Testing all the things that can go wrong in a bash script is much more difficult than getting a Haskell compiler to accept a program.
> A Haskell programmer could use an empty string for that, or given a Maybe could do this: fromMaybe ""
Do you see how that's totally unrelated to accidentally accessing an uninitialized variable? In one case, you have an unforeseen event, usually caused by a typo; because the programmer didn't plan for this, the program will abort, crash or launch nuclear missiles, depending on the non-statically typed language of your choice. In the other case, the programmer who chooses to use fromMaybe has to decide what the actual default is, and since he has to consciously pick one, he will choose a useful one. Also, because he is using Haskell, if he wants to express "there is no meaningful value", he will always use Nothing. Feel free to reply to this with yet another snark, but it will only show you're unfamiliar with the subject under discussion.
> perhaps the bash programmer could likely eliminate that fault through testing, perhaps about as likely as the Haskell programmer could get the Haskell compiler to accept a program at all!
No. While testing may eliminate some of these errors, it's even better if they cannot happen at all. That way you can focus your time on testing actual errors. A language which automatically disallows some classes of errors is safer than one where you have to remember to check for them. Note: safer, not necessarily better. I'm not saying Haskell is "better" than Ruby, if that's why you're being so defensive about this: there are valid reasons why one would choose a language which offers fewer guarantees of correctness (i.e. there are "safer" languages than Haskell, and valid reasons not to pick them as well).
> > The claim I did make is that entire failure modes which are present in bash aren't possible in Haskell
> And therefore Haskell programmers are more likely to write correct programs?
Yes, fewer failure modes -> increased probability of correctness.
My advice to you: stop being sarcastic, because it's not helping your argument. Also, how many Haskell programs have you written, or at least code reviewed, since you're so sarcastic about how a "true" Haskell programmer writes code?
> Do you see how that's totally unrelated to accidentally accessing an uninitialized variable?
I was responding to your comments about Maybe, but, yes, that is a separate issue.
> While testing may eliminate some of these errors, it's even better if they cannot happen at all. That way you can focus your time on testing actual errors. A language which automatically disallows some classes of errors is safer than one where you have to remember to check for them.
Perhaps. But perhaps it makes no actual difference in practice. And perhaps the cost of using Haskell, of satisfying the compiler, of fixing type errors that would not be (runtime) errors in a dynamically-typed language, perhaps that cost outweighs any benefit that type-checking may provide.
I don't know. And neither do you.
This entire thread I have asked for empirical evidence, but you have offered only deductive arguments, and that's fine, do as you like, but it's not persuasive.
> Yes, fewer failure modes -> increased probability of correctness.
That's an interesting hypothesis.
I wonder what would be found in a comparative study of statically-typed Haskell code developed with the help of the compiler vs dynamically-typed Haskell code (more failure modes!) developed with the help of an interpreter (if this is possible). Using the same core language would eliminate many confounding variables.
BTW, have you not heard "no true scotsman" before?
> perhaps the cost of using Haskell, of satisfying the compiler, of fixing type errors that would not be (runtime) errors in a dynamically-typed language, perhaps that cost outweighs any benefit that type-checking may provide.
That's completely irrelevant to the current discussion. We are not discussing convenience; we're discussing safety. Are you trying to de-rail the debate? :)
By the way, because you're intent on "teaching" me what deduction and induction mean: not that it has anything to do with the argument at hand, but deductive reasoning (when you can apply it) trumps inductive.
> > Yes, fewer failure modes -> increased probability of correctness.
> That's an interesting hypothesis.
That's... uh... that's not a hypothesis. Something that can fail in fewer ways is safer than something that can fail in more varied ways, all else being equal. There is no need for inductive reasoning here.
> BTW, have you not heard "no true scotsman" before?
I have, and I have also noticed you were trying to sarcastically invoke this fallacy all along. Unfortunately, it seems you've misunderstood it: No True Scotsman applies when you make an assertion, then you're presented with a contradiction, and in order to counter it you adopt an ad hoc assertion which serves as an exception. The fallacy doesn't apply when the assertion is not ad hoc, but is in fact that which defines the original assertion. Haskell programmers program using Maybe; that's not a "No True Scotsman" fallacy. For example, if I said "Haskell programmers do X", and you say "that's not true for Haskell programmers using Ruby!" and I counter "I was talking about Haskell programmers using Haskell", that's not a case of No True Scotsman. It's the same for Haskell programmers using the Maybe monad.
Let me explain the kind of flawed reasoning you've been using so far. Say you read something about handguns with safeties being safer than handguns without safeties. You start with "blah blah inductive reasoning no true scotsman yadda yadda". When someone counters, "well, are you saying handguns with safeties designed to make them safer are not actually safer than handguns without safeties?", your reply seems to be "well, if you give the handgun with the safety to a kid, load the gun, switch the safety off, and tell the kid to pull the trigger, then it's not safer!".
Do you see the ridiculousness of your position? Any Turing complete language can be used improperly and cause harm. But some languages are safer than others by design. Not better, merely safer. For example, Haskell is safer than Ruby :)
> We are not discussing convenience; we're discussing safety.
We were discussing whether Haskell programs are more likely to be correct (than bash programs), all else (including convenience) being equal. Who is more likely to write correct programs, for the same effort: a Haskell programmer or a bash programmer? Did you think we should allow the Haskell programmer more time, to satisfy an inconvenient type-checker?
> you make an assertion, then you're presented with a contradiction, and in order to counter it you adopt an ad hoc assertion which serves as an exception.
That is exactly what you did!
You said "Haskell uses Maybe", I said "A Haskell programmer could [do something else]" and then anticipated a no true Haskell programmer fallacy, but you went right ahead anyway: the programmer who chooses to use fromMaybe has to decide what the actual default is, and since he has to consciously pick one, he will choose a useful one. Also, because he is using Haskell, if he wants to express "there is no meaningful value", he will always use Nothing.
But let's make it official: if Angus writes fromMaybe "", which is accepted by the Haskell compiler but deletes all my files, is he a Haskell programmer?
> Say you read something about handguns with safeties being safer than handguns without safeties.
Is that an empirical study, or just a salesman saying "it has a safety, therefore it's safer!"?
A safety may provide less safety than other measures, for the same effort. Obviously, using a safety is not much effort. How much effort is required to satisfy the Haskell compiler? And how does that compare to other measures? That is an empirical question.
A safety may provide no safety for people who always handle firearms safely. Perhaps bash programmers who always follow conventions (just as true Haskell programmers always follow conventions) in practice would never deploy programs that access uninitialized variables (for example). Where can I find these bash programmers? That is an empirical question.
A safety may reduce safety in situations where a firearm must be used quickly. How quickly can a Haskell programmer fix a critical bug when the Haskell compiler reports type errors that would not be runtime errors in a dynamically-typed language? That is an empirical question.
A safety may induce risk-compensation. Perhaps Haskell programmers often throw caution to the wind, believing that Haskell programs are more likely to be correct anyway, no empirical evidence needed! How many Haskell programmers believe that? That is an empirical question, but obviously there are quite a few :-p
> But some languages are safer than others by design.
What do you mean by "safer" here? Let's say "type-safety" means "a type-checker enforces some constraints" and "practical-safety" means "more likely to be correct, all else being equal". It seems you think "type-safety" == "safety" == "practical-safety", by definition.
> Haskell is safer than Ruby
By how much? How much more, exactly, are Haskell programs likely to be correct?
Your post can be pretty much answered with a big "no".
- No, we were talking about safety, not convenience. Stop shifting the subject. (But do you know what IS inconvenient? Being hit by a bug in production in your Ruby program that could have been prevented by a static type checker.)
- No, this is not a case of No True Scotsman. First, you seem to misunderstand how fromMaybe is used (have you EVER programmed in Haskell?). Unlike other languages, where null can hit you by surprise, the programmer ALWAYS chooses the default value for fromMaybe. Since HE chooses it, it's not an accident. It can never happen by accident, as with null. Repeat after me: this. is. not. a. case. of. the. no. true. scotsman. fallacy. "So there is this Scottish guy..." "Wait, what if this is a Scotsman from Mars?" "But Scotsmen are from Scotland!" "A-HA! Fallacy!" This gets old fast.
- No, a programming language with additional safeties is logically safer than one without, all else being equal (i.e. if no other safeties get removed instead). Especially since those safeties don't prevent you from also applying every engineering practice from the other, less safe language. You can do all the fancy testing you do for your Ruby programs in Haskell as well! In fact, a lot of interesting testing tools have come from the Haskell community, such as QuickCheck (aka property testing; see the sketch after this list). It wouldn't surprise me if there were a RubyCheck nowadays.
- No, Haskell programmers aren't smug fools who believe that because their programs compile then they must be correct, and therefore no additional testing or QA is necessary. You keep bringing this old nonsense up, and you're the only one who believes it. Please accept as a given that no-one believes this and never bring it up again.
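A minimal QuickCheck sketch (the property and names here are mine, purely illustrative): the library generates random inputs and checks the property against each.

import Test.QuickCheck (quickCheck)

-- Property: reversing a list twice gives back the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice  -- runs against 100 random lists by default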
> What do you mean by "safer" here? Let's say "type-safety" means "a type-checker enforces some constraints" and "practical-safety" means "more likely to be correct, all else being equal". It seems you think "type-safety" == "safety" == "practical-safety", by definition.
Actually, rather than being equal, what I think is that an increase in one implies an increase in the other. A type-checker enforcing some constraints automatically makes your language practically safer than one without it. It's yet another tool in your toolset, one that gets automatically applied by the compiler. I get that, by convenience, some people want to write programs without some safeties; that's an issue of convenience, which is separate.
> Obviously, using a safety is not much effort. How much effort is required to satisfy the Haskell compiler?
Not much. Haskell programmers everywhere do it daily. What's your point? Have you written a program in Haskell and found it too hard?
Please think before you want to lecture me again about deductive and inductive reasoning. You sound condescending instead of clever.
I refuse to discuss this with you any further until you drop the sarcasm and answer this simple question: how fluent are you with Haskell? If you claim to have actual Haskell programming experience (tutorials don't count), I'll take your word for it, accept that you understand its trade-offs, let you have whatever last word you want on this subject, and move on.
> - No, we were talking about safety, not convenience.
If you don't want to compare programs written for the same effort, then what? Should we compare Haskell programs written with more effort to bash programs written with less effort? That's a confounding variable!
For the programmer there is always an opportunity cost: an hour spent satisfying a type-checker is an hour that could have been spent testing. Which use of that hour would provide the greatest safety? That is an empirical question.
> Stop shifting the subject.
No, you keep shifting the subject when you claim that Haskell programs are safer in practice but you then run away to your wonderland, where time stands still and the problem of opportunity cost has been solved, where you "prove" whatever you like while detached from reality, and then return to pretend that your "proofs" are valid in practice!
> - No, this is not a case of No True Scotsman.
So... if Angus writes fromMaybe "", which is accepted by the Haskell compiler but deletes all my files, is he a Haskell programmer?
> - No, a programming language with additional safeties is logically safer than one without, all else being equal (i.e. if no other safeties get removed instead).
Oh, "logically"! It's amazing what you can "prove" when you postulate a magical language that stops the clock while you satisfy the type-checker! Unfortunately, on this planet, time waits for no man, and you must choose how you spend it. Which approach provides the greatest safety, for the same effort? That is an empirical question, and you still have no empirical evidence for your position.
> - No, Haskell programmers aren't smug fools who believe that because their programs compile then they must be correct, and therefore no additional testing or QA is necessary.
No, you just believe you don't need empirical evidence for empirical claims! :-/
> A type-checker enforcing some constraints automatically makes your language practically safer than one without it.
Perhaps. But perhaps the time spent satisfying a type-checker would be better spent testing. Perhaps bash is safer than Haskell in practice!
> perhaps the time spent satisfying a type-checker would be better spent testing
One of the reasons I like languages with more advanced type systems, such as Haskell's or Scala's, is that test suites in other languages take much longer to run to achieve the same code coverage.
> One of the reasons I like languages with more advanced type systems, such as Haskell's or Scala's, is that test suites in other languages take much longer to run to achieve the same code coverage.
Do you mean Haskell has better runtime performance than dynamic languages, test for test, or that programs in dynamic languages need more tests (say, tests of function return types), or both?
In practice programs are written by programmers, not generated randomly, so it is not necessary to test "all the things that can go wrong" but only the things that programmers actually do wrong in practice.
So you refused to answer my direct question: how fluent are you with Haskell? I must assume the answer is "not much", which is why you keep building strawmen and disregarding everything people tell you about the language.
For an empirical advocate, you keep arguing from a quite theoretical point of view.
No, but "set -u" does help the bash programmer to reject incorrect bash programs and to put "test" where necessary, just as the Haskell compiler helps the Haskell programmer.
Perhaps the Haskell compiler helps more. Perhaps Haskell programs are more likely to be correct. Perhaps. There is only one way to find out.
> It's just that one is safer than the other by design.
What do you mean by this? Is this a deductive claim, contradicting the previous sentence, or an inductive claim that Haskell programs are "more likely to be correct" (all else being equal)? The latter requires empirical evidence, as you cannot establish an inductive claim with deductive arguments.
> "set -u" does help the bash programmer to reject incorrect bash programs and to put "test" where necessary, just as the Haskell compiler helps the Haskell programmer.
Except the Haskell compiler (GHC here) doesn't just equivalently put "test" where necessary; it puts "test" everywhere and on everything, which would be wildly impractical and tedious with bash.
I really wish static typing advocates would consider making inductive claims. The last few times I've asked for such evidence the response has been along the lines of "you just don't understand". It's indicative of a general intellectual orientation that I think has held back the introduction of valuable PLT concepts into mainstream languages.
> You have proved the original (deductive) claim "if it compiles, it works" for that one specific example, but that does not hold in general.
Are you sure it does not hold in general? If you repeated mightybyte's example with all Haskell functions vs. another language, I feel like you'd get similar results due to pervasive purity and the type system.
I chose that non-total function because it is easier to enumerate all the possible error cases than with the map function. And the error case is easily handled by the combination of incomplete-pattern-match warnings and Safe Haskell. I was also trying to emphasize purity more than free theorems.
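For reference, a minimal sketch of the total variant, where the empty-list case surfaces in the type instead of at runtime:

-- A total variant: the empty-list failure mode moves into the type.
lastMay :: [a] -> Maybe a
lastMay []     = Nothing
lastMay [x]    = Just x
lastMay (_:xs) = lastMay xs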
>What is the effect of programming languages on software quality? This question has been a topic of much debate for a very long time. In this study, we gather a very large data set from GitHub (729 projects, 80 Million SLOC, 29,000 authors, 1.5 million commits, in 17 languages) in an attempt to shed some empirical light on this question. This reasonably large sample size allows us to use a mixed-methods approach, combining multiple regression modeling with visualization and text analytics, to study the effect of language features such as static v.s. dynamic typing, strong v.s. weak typing on software quality. By triangulating findings from different methods, and controlling for confounding effects such as team size, project size, and project history, we report that language design does have a significant, but modest effect on software quality. Most notably, it does appear that strong typing is modestly better than weak typing, and among functional languages, static typing is also somewhat better than dynamic typing. We also find that functional languages are somewhat better than procedural languages. It is worth noting that these modest effects arising from language design are overwhelmingly dominated by the process factors such as project size, team size, and commit size. However, we hasten to caution the reader that even these modest effects might quite possibly be due to other, intangible process factors, e.g., the preference of certain personality types for functional, static and strongly typed languages.
Empirical evaluations break down quickly as the complexity of what is being measured increases, so your best bet in doing a study is to focus on one or a few features.
PLs are of course evaluated in the marketplace over time, like any other designed artifact.
I love that report and find it very inspiring (and it gives me an intuition that Haskell has an edge over the other languages), but the methodology is very disappointing:
- The requirements were mostly left to the interpretation of the implementors, who more or less decided the scope of their programs.
- All results were self-reported. Even ruling out dishonesty, there are a lot of ways uncontrolled experimenters can report incorrect results.
- Many implementations weren't even runnable.
- No code was run by the reviewers, at all.
I really would love to see a more serious attempt at this experiment (probably with more modern languages).
Is there any empirical evidence for this?