
> What Haskell does is no more "safer" than if I do tag-checking at runtime.

No. Haskell checks at runtime which tag a value has. But, crucially, the compile-time check guarantees that these tags will always be valid. Assuming you have handled every possible variant of a type (e.g. `Just ...` and `Nothing` for the `Maybe` type), you are guaranteed to be "covered". Unlike in, say, Python, where you might get passed a string, or None, or...?
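
For instance (a minimal sketch; `describe` is just an illustrative name):

    -- Both constructors of Maybe are handled; with -Wincomplete-patterns,
    -- GHC warns at compile time if a case were missing.
    describe :: Maybe Int -> String
    describe (Just n) = "got " ++ show n
    describe Nothing  = "got nothing"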

> The idea that compile-time checks are much better than runtime checks is also not obvious

Dynamic languages have their place, but there's no contest whatsoever when it comes to safety. What's better: discovering an error before the code has run, or discovering the same error while the code is running, possibly in production? In the huge majority of cases, the former is preferable.

> What is better - to clutter the code with explicit type annotation or clutter the code with explicit type-tag checking in the runtime code is another question.

With type inference, you get the best of both worlds :) Most Haskell code doesn't need any type annotations, and most of what's out there only has explicit type annotations for top-level expressions, for purposes of documentation.
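
A small illustration (the names here are hypothetical):

    import Data.Char (toUpper)

    -- The top-level signature is optional (GHC infers it); it's usually
    -- kept for documentation.
    shout :: String -> String
    shout msg = shouted ++ "!"
      where
        -- no annotation needed here; the type of `shouted` is inferred
        shouted = map toUpper msg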




It depends on what definition of safety you are using. If it's "won't segfault" or "won't allow access to a user's private data", then dynamic solutions can be as safe as static solutions. If you define safety as a lack of faults at run-time, then by definition static solutions are safer.

If you want to define safety broadly, then both static (upfront analysis, type checking) and run-time solutions (hypervisors, tag checks, monitoring) are necessary to achieve it.


I don't know why you're getting downvoted. Obviously, Haskell can buy you a number of guarantees, but just like the fact that your program type-checks isn't proof against logical errors, it also doesn't mean you wrote the stop condition for this recursive function correctly, or that you are not going to run out of memory at runtime.

In practice, and it has also been my experience, as long as you leverage the type system, the debug time necessary to get a correctly-working program in a statically, strongly typed language tends to be significantly shorter than with a dynamic language (or with a statically typed language with a more primitive type system). The size of the code is often comparable as well.


Productivity is a different metric from safety. It is more reasonable to argue that Haskell allows you to write correct code faster than it is to argue that Haskell allows you to write "more correct/safe" code.


That's a good way to think about it. Any code can be made correct (for most use cases) with the right amount of man-hours. It's more a question of how many man-hours that is (and how many of them are needed to ensure that correctness is not lost when the code evolves).


Agreed. But note that "faults at run-time", even with recovery procedures, can be very costly in some critical systems.


Run-time faults would happen even in a static language however, as those aren't just about tag-match errors or NPEs. A really heavy-duty (dependent) type system is needed to get rid of resource faults (out of memory) and arithmetic faults (divide by zero), so those safety-critical systems are more likely to be formally verified instead.

Haskell is also completely unsafe when it comes to resource usage, which is why it is not used often in, say, avionics.


Yes, runtime faults can happen in a static language as well.

However, I don't buy the usual argument that follows up (not saying you are saying this now, but I'm sure you've heard it): "therefore, you're not better off using a statically typed language, therefore just use a dynamically typed language".

The fact that a statically typed language cannot rule out every kind of runtime fault is not a good reason to stop using one. Statically typed languages rule out some errors that dynamically typed languages don't. In both cases you also need to perform additional checks, and in both cases, safety-critical systems should be formally verified. Not "instead", but "in addition to".

A statically typed language is an additional, automatic safety net on top of everything else. Every bit of safety counts. You don't need to formally verify your generic list algorithm isn't going to try adding two elements (and throw an exception when the elements cannot be meaningfully added) because it can't by definition. That's one less formal verification you need to do.
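
To make that concrete (a sketch; `everyOther` is just an illustrative generic list function):

    -- The type [a] -> [a] says nothing about `a`, so this function can only
    -- rearrange, drop or duplicate elements; "adding two elements together"
    -- is impossible by construction, so there is nothing to verify there.
    everyOther :: [a] -> [a]
    everyOther (x:_:rest) = x : everyOther rest
    everyOther xs         = xs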


I never claimed dynamic languages were better, just that Haskell is a dynamic language with more static features than, say, Python. There are no truly static languages, except perhaps static macro languages like C++'s STL.

> In both cases you also need to perform additional checks, and in both cases, safety critical systems should be formally verified. Not "instead", but "in addition to".

In practice, there are a lot of other things going on in Haskell that work against formal verification in safety critical real time systems. When you need to know for sure, non-determinism (say in the form of lazy evaluation) is your enemy, and the type system itself becomes less useful because those paths have to be rigorously explored anyhow. Heck, at that point, you might even want to use assembly since even the compiler is suspect.

Thankfully, most of us don't write that kind of code.

Pedantic: runtime is a thing that runs your program while run-time is a phase.


> I never claimed dynamic languages were better, just that Haskell is a dynamic language with more static features than, say, Python. There are no truly static languages, except perhaps static macro languages like C++'s STL.

That's a pretty non-standard view. It's as unhelpful as static typing proponents claiming dynamically typed languages such as Python are "unityped". As unhelpful as Simon Peyton Jones jokingly claiming that Haskell is the world's best imperative language :) Any of those claims may be technically true, but they don't help us understand, choose or use those languages. Other similarly unhelpful claims include the old "these languages are all Turing-complete anyway, so it doesn't matter which one you use."

So I disagree with you, philosophically, but since I acknowledge you're a very knowledgeable guy, let me ask you about your opinion:

- Would you say there is no advantage to using a statically typed language like Haskell/ML over a dynamically typed language?

- Or if you think there is an advantage, do you think the costs outweigh the benefits?

- If you were asked to develop a safety critical system, and given the choice of using Haskell (or ML if you dislike lazy evaluation by default, or OCaml if you prefer a more hybrid language) and Python (or a similar dynamic language of your choice), and every other analysis tool you can think of, static or run-time, which language would you pick? Assume you cannot pick anything else, you're given time to become proficient on the language you pick, and you cannot refuse the assignment. I know, this is a fantasy scenario, but indulge me.


Static typing is definitely useful, but like humans and chimps, the so-called static languages share much of their DNA with their dynamic counterparts; and static typing doesn't necessarily account for more than a few percentage points of productivity increase (though those points are definitely noticeable). More to the point, you still need to test non-trivial Haskell code, and the lack of a decent debugger is a tragedy.

Safety-critical systems are generally real time, so Haskell is off the table. I'll also spend a lot of time manually verifying the code, and use a lot of external analysis and verification tools, so restricted C++ is fine in that case. Now, if you told me that the system was safety critical and not real time, and the system wasn't important enough to merit lots of manual verification, then Haskell would be a great choice because its static type system is better than nothing.


Thanks for your answers.

Why do you think Haskell is fine as long as the system "isn't important enough to merit lots of manual verification"? That sounds puzzling to me. I'd say one thing complements the other: automatic and manual verification seems the best option.

Is lazy evaluation your biggest issue with Haskell? How about languages like ML or OCaml, which are arguably safer than "restricted C++" and do not default to lazy evaluation?

Or is GC your main problem with these languages? This would rule out most dynamically typed languages as well.


Most languages focus on productivity, because that is where the market is. Safety-critical systems are not dominated at all by productivity concerns, and a lot of resources are thrown at them to make sure they are perfect. In this case, the abstractions provided by any HLL just get in the way! It's like putting a cattle guard on the front of a tank.

Anyway, you might want to read the Stack Overflow discussion on this subject:

http://stackoverflow.com/questions/243387/which-languages-ar...


You can get rid of divide by zero without anything sophisticated. The key is that you cannot expose normal division. There are two reasonable solutions:

    Perform test in the operation:
    (/) :: i -> i -> Maybe i

    Require test before the operation:
    nonzero :: i -> Maybe (NonZero i)
    (/) :: i -> NonZero i -> i

Both of these come at a cost, of course, of making some arithmetic expressions look less natural - which dependent types could probably help with.
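
For the record, a sketch of both (the names `safeDiv`, `NonZero`, `nonzero` and `divNZ` are illustrative, not from any particular library):

    -- 1. Test inside the operation:
    safeDiv :: Integral i => i -> i -> Maybe i
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- 2. Test once up front, and record the result in the type:
    newtype NonZero i = NonZero i

    nonzero :: (Eq i, Num i) => i -> Maybe (NonZero i)
    nonzero 0 = Nothing
    nonzero x = Just (NonZero x)

    divNZ :: Integral i => i -> NonZero i -> i
    divNZ x (NonZero y) = x `div` y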


Why do you use the phrasing "heavy-duty (dependent) type system"? Are you of the opinion that dependent typing is overkill for general programming?

What is your opinion of Idris?


DT is definitely heavy-duty. There are massive unsolved problems around how to make, say, proof management even possible.

This is part of what makes HoTT exciting, I believe, as it provides a formalized way of passing proofs and operations around over different types so long as there's a suitable homotopy.

So while Idris is a very exciting research direction, it's completely fair to say that it has a hefty set of barriers to overcome before it truly demonstrates the value of DT in a larger variety of use cases beyond theorem proving.



