Examples of Incorrect Abstractions in Other Languages (reddit.com)
115 points by bshanks on May 19, 2020 | 133 comments



Some of these are arguably "incorrect abstractions", but others are just design decisions that are being treated as "incorrect" because they don't follow Haskell's design sensibilities. A perfect example is the complaint about sum in Python:

   >>> sum(["a", "b"], start="")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: sum() can't sum strings [use ''.join(seq) instead]
I don't think this is incorrect at all! There's a very good reason to use join instead of sum for adding strings together: join is a lot more efficient, because it precomputes the amount of space needed for the resulting string, allocates it once, and then copies all the strings into that buffer, whereas using sum in the naïve way suggested would perform a separate append for each element. These are just different tools for different purposes!
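
To make the asymptotics concrete, here's a minimal sketch (a hypothetical micro-benchmark; exact numbers will vary by machine and Python version, but the shape won't):

    import functools
    import timeit

    parts = ["x"] * 10_000

    def with_plus():
        # reduce with + typically builds a fresh string per element: ~O(n^2)
        return functools.reduce(lambda a, b: a + b, parts, "")

    def with_join():
        # join computes the total length first, then copies each part once: O(n)
        return "".join(parts)

    print(timeit.timeit(with_plus, number=10))
    print(timeit.timeit(with_join, number=10))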

Could you design a sum that automatically looked at its argument type and deferred to join for strings? Sure! And that'd be a valid design decision as well. But the specific choice Python made here is motivated and defensible: it's only "incorrect" if your personal definition of "correct" is "built with the same algebraic sensibility that Haskell's designers had," but at that point, all other languages are "incorrect" because they're not Haskell!


> I don't think this is incorrect at all! There's a very good reason to use join instead of sum for adding strings together: join is a lot more efficient, because it precomputes the amount of space needed for the resulting string, allocates it once, and then copies all the strings into that buffer, whereas using sum in the naïve way suggested would perform a separate append for each element.

Differing performance characteristics should not break the semantics of an abstraction. That's an abstraction leak which prevents writing generalized abstractions.


Special cases like this make generic code very hard to write. If I'm writing a library that needs to handle user-provided data agnostic of type and I want to do a sum operation, now I have to add checks all over the place for whatever special cases python has.


I don't think I've ever written generic code for summing up numbers AND concatenating strings. I think even the Haskell people would agree. So the design problem is in overloading `+` for string and list concatenation in Python in the first place.

But while we're on the subject of strings, let's take a moment to remember the one truly good design decision that the Python devs made about them (which isn't usual for an imperative language): strings are immutable.


> I don't think I've ever written generic code for summing up numbers AND concatenating strings. I think even the Haskell people would agree.

Erm, Haskell people write code that can do this all the time - you write things in terms of a generic Monoid and then it just works. I can even think of a concrete case where I used those two examples: I had essentially a report that looped over a bunch of rows and accumulated something as a secondary effect; in one client's case that was a string explaining what had happened, and in another client's case that was a count of how many things had been processed.
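
Roughly, the shape of that code as a Python sketch (the real version was written against Haskell's Monoid class; all names here are invented):

    # A generic report loop over any "monoid-like" accumulator: an identity
    # element plus an associative combine function.
    def process_rows(rows, empty, combine, summarize):
        acc = empty
        for row in rows:
            acc = combine(acc, summarize(row))  # the same loop for every client
        return acc

    rows = ["alpha", "beta", "gamma"]
    # Client A: accumulate a string explaining what happened.
    print(process_rows(rows, "", lambda a, b: a + b, lambda r: f"processed {r}\n"))
    # Client B: accumulate a count of rows processed.
    print(process_rows(rows, 0, lambda a, b: a + b, lambda r: 1))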


So you've written (of course type safe) generic code that was in principle able to achieve the same effect as described in https://stackoverflow.com/questions/9032856/what-is-the-expl... ? ;-)

Batman!


No. He was talking about monoids. The things in the link you're talking about aren't monoids.


Do the implementation details matter if you could achieve the same result?


It's not about the implementation details. It's that the things mentioned in the link make no logical sense. The addition operator allows values of different types to be added together, and even worse, the resulting value's type isn't required to be the type of either operand.

It's not possible to write the generic code since there are basically no constraints. Generic code is all about constructs with certain invariants, and using those invariants to create something new. JavaScript addition basically has no invariants.


>It's not about the implementation details. It's that the things mentioned in the link make no logical sense. The addition operator allows values of different types to be added together, and even worse, the resulting value's type isn't required to be the type of either operand.

I would argue that that's not true.

JS is a "mono-typed" language and there is therefore only one type of things you can operate on and return as a result.

>It's not possible to write the generic code since there are basically no constraints. Generic code is all about constructs with certain invariants, and using those invariants to create something new. JavaScript addition basically has no invariants.

That's backwards from my point. I tried to point out that having generic code which can operate on (almost) arbitrary data structures, mixing for example natural numbers or lists of arbitrary things, will in the end lead to the same puzzling results as the "classical JS-Batman" example.

Don't get me wrong, generics are a great tool! But imho it's easy to "overstretch" their usage to the point where things become "so generic" that your functions can operate on "almost everything" and the results become as surprising as the alleged "JS insanities".

Something like the Monoid abstraction is incredibly powerful (most languages can't even express such an abstraction in a sensible way). Using such a powerful tool to abstract over very concrete and banal things (which would be better handled separately, as they're "not the same", no matter that there may be a deep mathematical connection) is not good architecture, imo. If your code handling "concrete things" is written in a very generic and abstract way, you won't have any "normal" tools left on your belt when you need to abstract over the stuff you've just written (at that point you could reach for type-level programming or something similar, but then you end up with laughably complex code to handle what is, in the end, banal business logic).

https://en.wikipedia.org/wiki/Rule_of_least_power


> JS is a "mono-typed" language, and there is therefore only one type of thing you can operate on and return as a result.

Well, in that case programmers should not be surprised when they subtract an integer from a string and get NaN. Apparently they are surprised though. I'd prefer my language not let me subtract an integer from a string, since if I ever attempt to subtract an integer from a string that's almost certainly an error.

> That's backwards from my point. I tried to point out that having generic code which can operate on (almost) arbitrary data structures, mixing for example natural numbers or lists of arbitrary things, will in the end lead to the same puzzling results as the "classical JS-Batman" example.

But that's not true. Nothing you do with a monoid will create such a puzzling result, because summing a list via a monoid is - by definition - equivalent to summing it by calling + pairwise. (Of course, if you have a type for which + behaves in a confusing way, then this might be confusing - but it's not the monoid that's being confusing there).

If you want to claim monoid-based code can be confusing, try using some actual examples of monoid-based code, rather than code in a different language that isn't using monoids and wouldn't compile if it did.

> If your code handling "concrete things" is written in a very generic and abstract way, you won't have any "normal" tools left on your belt when you need to abstract over the stuff you've just written (at that point you could reach for type-level programming or something similar, but then you end up with laughably complex code to handle what is, in the end, banal business logic).

This is completely wrong. If you write your code generically it becomes easier to abstract over, not harder. For example, if you write a sort function that operates generically over lists and separates it from your comparison function, instead of writing separate functions for sorting lists of integers and lists of strings, it's much easier to then make further abstractions over your sorting.
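
A minimal Python sketch of the point - one generic sort that takes the comparison (here, a key function) as a parameter, rather than one sort per element type:

    def sort_by(items, key):
        # One generic sort for any element type; the comparison is a parameter.
        return sorted(items, key=key)

    print(sort_by([3, 1, 2], key=lambda x: x))   # list of integers
    print(sort_by(["b", "A"], key=str.lower))    # list of strings, case-insensitive
    # Because the sort is generic, further abstraction is cheap:
    def sort_desc(items, key):
        return sort_by(items, key)[::-1]

    print(sort_desc([3, 1, 2], key=lambda x: x))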

> https://en.wikipedia.org/wiki/Rule_of_least_power

Again you're getting it backwards. Code written in terms of monoids is less powerful than code written in terms of a specific datatype, which means there are far fewer ways to get it wrong.


> JS is a "mono-typed" language, and there is therefore only one type of thing you can operate on and return as a result.

Javascript has a type system that is enforced at run-time.

> That's backwards from my point. I tried to point out that having generic code which can operate on (almost) arbitrary data structures, mixing for example natural numbers or lists of arbitrary things, will in the end lead to the same puzzling results as the "classical JS-Batman" example.

I don't agree. The entire point of the link you posted was the insanity of the semantics of the addition operation in Javascript. Monoids are very simple and well defined. The Javascript insanity would need pages upon pages of explanation, because basically everything is a special case. In contrast, monoids can be fully explained in two sentences.


>> JS is a "mono-typed" language, and there is therefore only one type of thing you can operate on and return as a result.

>Javascript has a type system that is enforced at run-time.

Nobody questions that JS has "runtime type-checks". But that's irrelevant to my point. What I was of course referencing is this:

https://existentialtype.wordpress.com/2011/03/19/dynamic-lan...

>Monoids are very simple and well defined.

Sure. Just define an instance for a type that models a JS runtime-object, let's see what happens… ;-)

But I see, my point is still not explained well: What I'm trying to say is that using abstractions so powerful that the resulting code can operate on (almost) arbitrary "unrelated" things will lead to the same kind of puzzling outcomes as "JS Watman".

(And like I said, a deep mathematical relation doesn't automatically make things "related" in a practical sense. For example, when it comes to "correct" domain modeling of business cases, you often explicitly want to treat different parts of your system separately, as this results in a less "entangled" architecture. Gluing together unrelated parts through clever code, on the other hand, creates maintenance hell where everything depends on everything, but you can't even see what depends on what because everything is generic; that's the opposite of good software engineering, imho. To throw out a few phrases: "Divide and conquer!" "Use the right tool for the job!" "Use the least powerful tool that gets the job done well!")


We don't just write generics over string types ([Char], Text, etc.), we write over Semigroups and Monoids. Concatenation with <> is standard practice.


It's very common in data processing to sum up stuff (e.g. unique visitors, total visitors, total revenue, etc.). To handle this you typically want to write a generic library that makes summing up stuff easy. I can't think of an example where you'd want to add up strings (more likely lists of strings), but it is definitely the case for other data structures (sets, vectors, lists, numbers, etc.)


Python sum() can handle summing up other data structures, although, as with foldl, you may have to give it an initial accumulator argument:

    >>> import numpy as np
    >>> xs = np.arange(9).reshape((3,3))
    >>> xs
    array([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]])
    >>> sum(xs)  # builtin sum, not numpy sum
    array([ 9, 12, 15])
    >>> sum([[3, 11], [5]], [])
    [3, 11, 5]
    >>> sum([(3, 11), (5,)], ())
    (3, 11, 5)
However, that doesn't work for strings; there's a special-case check for that.


There's no language in which you would be able to write a function for arbitrary data structures using sum(), and get a reasonable outcome.

The problem with Python is that it doesn't have typing that can express "I can sum".


Actually, I believe this is expressible in Python's type system:

    # Python 3.8
    from typing import Protocol, TypeVar

    T = TypeVar('T')

    class Summable(Protocol[T]):
        def __add__(self, other: T) -> T: ...

In Python 3.4-3.7 you'd need to install the typing-extensions library:

    from typing_extensions import Protocol
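
One way such a protocol might be consumed - a hedged sketch using a variant with a bound TypeVar so the accumulator keeps the element's type (the names here are hypothetical):

    from functools import reduce
    from typing import Iterable, Protocol, TypeVar

    S = TypeVar('S', bound='SelfSummable')

    class SelfSummable(Protocol):
        def __add__(self: S, other: S) -> S: ...

    def generic_sum(items: Iterable[S], start: S) -> S:
        # A type checker accepts any type whose __add__ fits the protocol.
        return reduce(lambda acc, x: acc + x, items, start)

    generic_sum([1, 2, 3], 0)      # int structurally satisfies the protocol
    generic_sum(["a", "b"], "")    # so does str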


Interesting. It seems that "every second language" gets "type-classes" these days…

C++ added such a feature, the above looks like a Python version, Rust has it, and Scala had them as one of the first "OOP" languages.

It's nice to see that Haskell pioneered a very useful and now spreading language feature!


Ada had them before Haskell, and I'm not even sure it was the first language with them.


> There's no language in which you would be able to write a function for arbitrary data structures using sum(), and get a reasonable outcome.

Computing the number of bytes used by a data type seems a perfectly reasonable use of sum in that context. This is also achievable in many languages. All 'sum' needs is a map from a value to a number, which can have many sensible meanings even for arbitrary data types.


And that works in Python too, even if the source type is strings:

  sum(map(mappingFunction, iterable))
works fine as long as mappingFunction maps the elements of an iterable to numbers.


I recently got bitten by trying to do exactly this, concat strings with sum, and spent a while trying to find out if I was being stupid (in this case, no). It damn well should work.

This works

  from functools import reduce
  nums = [1,2,3]
  def add(p, q): return p + q 
  print(reduce(add, nums))
  strs = ["how", "are", "you"]
  print(reduce(add, strs))
so why not that?


Python is a language that seems to make a lot of sense at first but gets a bit weird when you dig into things. The + operator particularly is a bit funny; I recently came across a case where x+=y was not equivalent to x=x+y... Though I can't figure out what it was now.


> a case where x+=y was not equivalent to x=x+y

This is true for lists in Python. `a = a + b` creates a new list, but `a += b` works in-place:

    >>> a = [1, 2]
    >>> b = a
    >>> a = a + [3]
    >>> print(a, b)
    [1, 2, 3] [1, 2]

    >>> a = [1, 2]
    >>> b = a
    >>> a += [3]
    >>> print(a, b)
    [1, 2, 3] [1, 2, 3]
There are other issues with the `+=`-style operators as well - IMO it was a mistake to have `a += b` be anything other than a shorthand for `a = a + b`.
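
The mechanism behind this, as a small sketch: `+=` prefers the in-place `__iadd__` hook when a type defines it, while `+` always goes through `__add__`, which builds a new object:

    a = [1, 2]
    print(a.__iadd__([3]) is a)   # True: __iadd__ mutates and returns the same list
    print(a.__add__([4]) is a)    # False: __add__ returns a brand-new list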


Agreed, but it seems Haskell also fails to keep commutativity here tho?

Concatenation is not addition in my opinion

I've often wanted something like bits++bits concatenation to work in the same way for numbers as for strings


> Concatenation is not addition in my opinion

For python, syntactically it is. In a general sense, an operation is what you define it to be. Just my opinion.


You were likely doing something with threads and failed to lock `x`, which I would describe as an issue that should be expected. It doesn't really have to do with `+=` so much as it has to do with being aware of race conditions on variables unprotected by a mutex.


Python's lists are funny like that.

x += y is equivalent to x.extend(y)

x = x + y creates a new list, then assigns it to x (highly inefficient in a loop)


> I recently got bitten by trying to do exactly this, concat strings with sum, and spent a while trying to find out if I was being stupid (

Perhaps not stupid, but ignorant, and not just of the special casing in sum, which exists because of a footgun.

sum is basically a readability alternative to functools.reduce for adding numbers, and it's special-cased for strings because, though it would superficially work for strings otherwise, the performance of repeated string concatenation is pathological in Python (as in many garbage-collected languages where it forces many allocations), so a special case was added which points people to the method that produces exactly the same result without the pathological performance issue.

If you are doing something more complex than numbers - reducing heterogeneous data that includes strings with the addition operator, so that the join you'd use with homogeneous collections of strings isn't what you want (I'm not sure that hypothetical will ever happen) - use functools.reduce, which for readability you probably should be using in preference to sum() anyway if you aren't doing a mathematical sum.


Like you, I know the O(n^2) behaviour that would occur with naive repeated concatenation, and like you I would know how to deal with it. You'd use "".join(...) for the special case of strings, and therefore so would I. This needn't be a footgun of any sort, and don't assume I (or others) are ignorant.


> If I'm writing a library that needs to handle user-provided data agnostic of type and I want to do a sum operation, now I have to add checks all over the place for whatever special cases python has.

No, you don't, because: (1) if your idea of a sum is just reducing with “+”, you use functools.reduce; and (2) if your idea of sum is a mathematical sum, you use the very slight shortcut of the sum() built-in function, which is (barring things like the runtime type checking) equivalent in result to:

  def sum(iterable, start=0): 
    return functools.reduce(lambda x,y: x+y, iterable, start)


What's a sum operation though? Is the sum of a, b and c the same thing as the sum of b, c and a?


You needed those checks anyway. Python is just making sure you don't forget.


There is a rather silly "workaround":

  In [17]: class Start(object):
      ...:     def __add__(self, other):
      ...:         return other
      ...:
  In [18]: sum(["a", "b", "c"], start=Start())
  Out[18]: 'abc'


I agree with your overall point, but regarding your “could you design....” rhetorical: in that example, isn’t it already inspecting the type of the list elements and then doing a specific different thing (printing that message) if it happens to be a list of strings? Why not just go one step further and just do the thing instead of nagging the user to do the thing?


Because Python's design philosophy is that, for a given operation exposed by the language, there should ideally be a single clear way of doing it, and that philosophy is optimized around reading code rather than writing it. You might need to do a few more edits while you're writing the code, sure, but now a reader of your code will always find sum for numbers and join for strings without even having to examine the context of the code in question.

I'm not saying that's necessarily the correct design approach, but it's the one that Python has explicitly established as their guiding principle! A different language will take a different approach: Ruby, for example, likes to build multiple alternative ways of accomplishing the same thing, which leads to multiple aliases for built-in methods or various alternative flags that allow a method to behave in a number of ways. It shouldn't be a surprise that Ruby is fine with the equivalent operation:

    irb(main):001:0> ["a", "b"].sum(identity="")
    => "ab"
But that's because Ruby's design sensibility is different from Python's. You might like one better than another, but they both come from a consistent vision of how to design programming languages.


> Because Python's design philosophy is that, for a given operation exposed by the language, there should ideally be a single clear way of doing it

This philosophy falls apart immediately once you start using Python though. I have yet to see it actually be used for anything other than justifying not implementing features that someone wants.


What other languages are you comparing against? I came to Python from Perl and the one-way-to-do-it-ness was noticeable and refreshing.


I don't think many other languages make that claim, and I think it's generally an anti-goal in diverse multiparadigm languages. I don't even understand why Python tries to have it: I can think of three fairly idiomatic ways to sum a list of numbers right off the top of my head; it's immediately clear that such a proposition is not feasible for the language. The only time I have heard it come up is when there is an unsaid accusation that the language is getting "too complex" and someone wants to block the addition of a feature…


But in almost all of the language, TOOWTDI is not strictly enforced, it is merely a highly encouraged convention. For example, why am I allowed to do

    mylist = [x*2 for x in range(1,10)]
but also do

    mylist = []
    for x in range(1,10):
      mylist.append(x*2)
That's two ways to do the same thing, and I bet you would find a lot of the latter in beginner's Python code. I think a better choice would be to just have a linter which enforced the correct usage of sum() and join(), rather than the implementation itself.


There are probably a lot more than two ways too, e.g.

    mylist = map(lambda x: x * 2, range(1, 10))


map will return an iterator, not a list


Then call list(map(...

The point stands.


Only in Python 3, not Python 1 or Python 2.


Let's concatenate strings.

  a + b
  f"{a}{b}"
  "".join([a, b])
  "{}{}".format(a, b)
If a == b, then you can also, which itself is a bit ridiculous:

  a * 2
Is mentioning StringIO considered cheating?

I would feel the Zen if Python would have a separate concatenation operator, it could be even the same for strings and lists. Lack of the operator in popular languages is my pet peeve.


You forgot:

  "%s%s" % (a, b)
(Still works at least as of python 3.6)


Unless something has changed recently in Ruby, won't identity="" as a parameter just assign the empty string to a variable named identity and then pass the empty string as an unnamed positional parameter?


You're right, it should be:

    ["a", "b"].sum("")


One reason is that using "sum" versus "join" helps the reader understand which operation is being performed. You can tell whether the author was expecting a list of strings or a list of numbers. This hint about the author's intent also helps the compiler or runtime generate better error messages.

As a reader, you don't know what an abstract operation does, just that it should obey certain rules. Writing code abstractly when abstraction is unnecessary removes information, which has a cost in readability.

Now if you want to write abstract code then it will be frustrating, but some languages are optimized for writing concrete code.

At least, in this particular case. I'm not sure how consistent Python is about this.


This reason doesn't make sense to me, because of `+`. If I do `a + b`, you don't know if my intent is to add numbers, join strings, or concatenate lists. If `sum` and `join` are supposed to look different to convey intent, then the binary operators should look different as well.


The argument against operator overloading doesn't hold water with me at all.

If 3 + 2 is allowed but you don't think "a" + "b" should work, why should 8.1 + 1.9 work? If your argument is that it's because they're numbers, what about a Complex library, or numeric vectors, or GMP? If I can do 3 + 2 and 8.1 + 1.9, I should be allowed to make Complex{0, 1} + Complex{4, 9} work, or Vector{1, 0} + Vector{2, 2}, or BigInt{99999...} + BigInt{9999...}.

I can sort of see the argument in dynamically-typed languages when you might not know what the operands are ahead of time, but for statically-typed languages like Rust and Go (Rust gets this very, very right FWIW)? There's no defensible argument against it. Why is `+` magic and holy, but a `fn add(...)` isn't?
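
For what it's worth, that's exactly how Python treats it too: `+` on a user-defined type is just a method, no more magical than a named add(). A sketch (this Vector class is hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Vector:
        x: float
        y: float

        def __add__(self, other: "Vector") -> "Vector":
            # `+` simply dispatches to this ordinary method.
            return Vector(self.x + other.x, self.y + other.y)

    print(Vector(1, 0) + Vector(2, 2))   # Vector(x=3, y=2)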


That would greatly increase the complexity of sum. Consider that Python is dynamically typed, has heterogeneous lists, and that sum takes an iterable, not just a list.

> isn’t it already inspecting the type of the list elements

Actually it only checks if the type of start (the initial accumulator value) is a string. If the accumulator becomes a string later on, it's happy to sum strings.


If that check is there anyway, and the preferred path is known, I would think that the great increase in complexity would look something like this:

     if trying_to_sum_strings(args):
    -    complain_bitterly()
    +    separator.join(args)


Like I said, the current check for trying_to_sum_strings is just that start is a string. So if you did this you would actually make sum less consistent, since sum([x], ''), which should logically be '' + x, would become ''.join([x]), which is not the same (think of an example with __radd__).


I don't follow. In your example, if x is a string, there is no problem since '' + x is just x. If x is not a string, the new path would not trigger, so the behavior would be as before.

EDIT: OK, I think if the example is sum([some_string, some_not_string], '') then there is a possibility that some_string + some_not_string might be a valid expression computing a value while ''.join([some_string, some_not_string]) would complain that some_not_string is not a string.


> Why not just go one step further and just do the thing instead of nagging the user to do the thing?

At least, readability suffers.

"".join(["a", "b"]) is a lot more intuitive than sum(["a", "b"], start="")

edit: A beautiful abstraction is the one that feels as good when it's read as when it's written.


> "".join(["a", "b"]) is a lot more intuitive than sum(["a", "b"], start="")

No it isn't. The join method being on the separator string is a Pythonism that looks crazy coming from any other language.


The point they are making is that

    join(xs, "")
    xs.join("")
or whatever else you want to come up with is more readable than building more string stuff into sum, not that having join be a str method is the best way.


Lots of math-based languages do [1,2] + [3,4] = [4,6]; then you really need a different join operator.


Sum and join mean different things. If sum didn't fail on chars I'd expect it instead to sum their binary values, which is a thing one occasionally actually does want to do, not to join/concatenate them. If it can't do that (or you have strings, as in this case, for which it makes less sense—the most sensible thing there would probably be to treat it as a byte array and do something with that, though any conceivable use of that probably breaks down with UTFs 8 or 16 instead of ascii) I'd want it to fail.


Sum is the iteration of +. + on strings is concatenation. If you expected otherwise from sum, it's your expectations that might be wrong. What even is a Python string's "binary value"? Also, Python has no chars.


so you'd expect this behavior?

  sum("abc") =
  sum(ord(c) for c in "abc") =
  sum([97, 98, 99]) =
  294
because wow, i... can't think of why you'd ever actually want that. (though it'd be perfectly natural for byte arrays)


Byte arrays sure. Not strings. This is why having clear types is important! Strings aren't just byte arrays. Characters aren't just bytes.


Magic has the unfortunate downside of preventing people comprehending what's happening. This isn't an issue exclusive to str, and having it silently autocorrect for only that one case will teach people the wrong thing.



Besides being kinda flippant, I don't see how this aids the discussion in any way.


The meme summarizes the point very well: "haskell people will say anything that isn't haskell isn't haskell, and is therefore bad".

It's ok for a meme to be relevant sometimes. Don't have to do a bunch of intellectual eye rolling just because a Simpsons character is making the point.


I agree with the person I responded to - this isn't an honest view of incorrect abstractions in other languages. This is haskellers complaining that other languages are different. The meme I think accurately conveys that, and why not have a meme every now and then?


What you just described is a leaky abstraction: you need to understand implementation details in order for it to make sense. Your overall point may be valid but the example seems in that sense to be poorly designed.


The problem here is that using "sum" or "+" for string concatenation is an idiotic choice of words in the base language (because concatenation is not commutative).


Right! Isn’t concatenation “++” rather than “+” in Haskell for exactly that reason?

So if “sum” is a shortcut for “fold with +”, summing a list of strings wouldn’t make sense.

Python still isn’t as regular as it could be, of course, as it does use “+” for concatenating strings.

I think this can all be justified for ergonomic reasons -- using the simple “+” for string concatenation is convenient, but “sum” is discouraged for strings because it’s slow.

It would be interesting to see a parallel-universe Python that didn’t allow “+” on strings. I wonder if it would have caught on.


Literally any symbol other than "+" would work better for string concatenation. For example, the product, the comma, a dot, a space (but this might require other changes in the language), or heck, even the minus sign!


It's incorrect because of consistency issues. Sum works with every overloaded + operator. This is not a Haskell thing, nor is it a "personal definition" of correct. People expect a definition to be consistent; that notion is prevalent enough to transcend the boundaries of the "personal".

Due to this, under a general definition of "correct", without the context of Haskell, such a thing is still classified as generally incorrect. It is more accurate to phrase it as "this is Guido's personal definition of correct." Not that there's anything wrong with that, but this is generally unusual and opinionated behavior encoded into Python, and I wish people would acknowledge that.


In Haskell, `sum` adds a collection of numbers, i.e. instances of the Num typeclass:

    Prelude> sum ["a", "b"]

    <interactive>:4:1: error:
    • No instance for (Num [Char]) arising from a use of ‘sum’
    • In the expression: sum ["a", "b"]
      In an equation for ‘it’: it = sum ["a", "b"]
You appear to be refuting a straw man.


Fun trivia, the promise absorption thing was dismissed when it was brought up, with the argument that the "ship had already sailed on that" and that the arguments for supporting promises of promises were too academic (in the detractors' words, it was "fantasy land").

Which then resulted in the creation of a specification called fantasy-land[1] as a form of mockery towards the "incorrect abstractions are fine" attitude. The goal of this spec is to standardize on functional algebra nomenclature in JS.

[1] https://github.com/fantasyland/fantasy-land


lol, I knew fantasy-land, but I didn't know the story, thanks!

I have to admit, when I read stuff by hardcore FP proponents like in that Reddit thread, I also feel the urge to call it fantasy land.


This is a thread from r/haskell. I think a fair summary of the contents is "things Haskell people dislike about other languages."


To me it looks like someone who wants to compile a list of arguments they can refer to when they are having an argument about programming languages


> And yet, for a long time, it lacked a function analogous to <>, which is strange when considered from a "folds are Applicative" perspective.

Man. I enjoyed hacking around with Haskell back in the day, but I found it to be a write-only language, with forms like <> making me stop and stare too often.

Also, after maybe 15 years of working in statically typed languages, I also decided that ultimately, I prefer dynamically typed languages. Still have yet to use my favorite (Clojure) in production, though.


> Also, after maybe 15 years of working in statically typed languages, I also decided that ultimately, I prefer dynamically typed languages. Still have yet to use my favorite (Clojure) in production, though.

I like to have types when I need types. If I want to write a numeric value in a json output I'm creating that will be consumed by some other system, I want that thing to represent exactly that and spit an error if I ever try to assign something else, such as a string. But more than that, it should be easy to define my own types. I need to be able to say, "this number is not just any number, this is actually a temperature measurement in Celsius" or "this is talking about screen coordinates, not world coordinates". And then only be able to send that to the correct functions – or treat as a normal number if I'm doing basic math.

But most of the time, most people don't care about types. We want to say "do this operation across all elements of this array". If I trust whatever lower level function that added the properly typed elements in the first place, now I can focus on manipulating them. Glue code is very amenable to dynamic types.

Haskell has a very rich type system (maybe too rich?), but it also allows you to say that a function receives an A and returns [A]. What's A? If you don't care, you don't care. That's beautiful. And the compiler can even infer some of it for you.

Now, if the only thing I have is a poor type system(like Java), then just give me primitives and a struct.


You want F#.

> If I want to write a numeric value in a json output I'm creating that will be consumed by some other system, I want that thing to represent exactly that and spit an error if I ever try to assign something else, such as a string

https://fsharpforfunandprofit.com/posts/serializating-your-d...

> I need to be able to say, "this number is not just any number, this is actually a temperature measurement in Celsius"

https://docs.microsoft.com/en-us/dotnet/fsharp/language-refe...


<> is not write-only

Just because you used + in grade school doesn't mean it's more intuitive than <>. Your child brain could learn + so your adult one can learn <>

The code I work with professionally that makes heavy use of <>, <*>, <$>, and >>= is usually the easiest parts of the codebase.

I'm not mathematician. I didn't go to a fancy private school. I didn't learn FP or category theory formally.


OP meant <*> but HN ate the star as an italic character


Ah ok. Well then I'd use multiplication as the analogy there. It was slightly abstract for my child brain but opened up so many possibilities. Applicatives feel like the adult version of that.


Have a look at Kotlin with the Arrow library, it's a great compromise


I occasionally find myself interested in learning Haskell, wondering why I haven't done so yet. So I go out and find an introductory tutorial, which is without fail written with exactly the sort of smug self-satisfaction with which this thread is written. And I immediately lose interest.

Maybe the Haskell community is genuinely peopled by the most brilliant minds ever to grace the planet, and the rest of us are out here farming shit. I'll never know, because I'd rather struggle for a lifetime through poorly-implemented languages than have to turn every time I have a problem to a community completely unable to process the idea that they might be wrong.


> So I go out and find an introductory tutorial, which is without fail written with exactly the sort of smug self-satisfaction with which this thread is written. And I immediately lose interest.

I think reading that tone into a tutorial, or from this thread, says more about you than the community.

And even if they were smug, consider the possibility that maybe they have good reason. Would you never buy a Tesla because Elon Musk smugly claims his cars are the best on the market?

> I'll never know, because I'd rather struggle for a lifetime through poorly-implemented languages than have to turn every time I have a problem to a community completely unable to process the idea that they might be wrong.

Cut off your hand to spite your face? Your choice of course, but from my experience as a non-Haskeller, the community is perfectly welcoming and willing to explain details to beginners.

Also, if the Haskell "community" never thought they were wrong, they would never disagree. That's clearly not the case if you perused any mailing list or thread.


TFA:

> which really should have implemented a well-known concept (Monoid, Monad, Alternative, whatever), but they fell short because they (probably) didn't know the concept

If that's not smug self-satisfaction, I don't know what is. No consideration that there were reasons they implemented their system in a different way. Suggestion that it was done due to ill-education on their part. I reread it carefully after your accusation. I don't want to too readily assume the worst in people. I am extremely comfortable with the accusation I laid on that thread.

But to your other point, I don't consider the possibility that they have good reason for their smugness. I don't find it implausible (as I said, albeit sarcastically), I just find smugness entirely orthogonal to correctness. There are brilliant smug people, but flat-earthers are also among the more smug people I've encountered.

If I lived in a world with only ten languages, I might agree I was punishing myself more than them. But I will never learn every language, tool, technique or skill I want to, so why would I ever start here?


> If that's not smug self-satisfaction, I don't know what is. No consideration that there were reasons they implemented their system in a different way.

Of course there was consideration. The history surrounding this is well documented. People familiar with the relevant concepts raised the problems with the JS Promise API early on, but were dismissed.

Which is also beside the point, because generalizing from N=1 is silly. If I'm being generous, you pointed out one smug Haskeller, where you claimed that the entire community was smug.

> If I lived in a world with only ten languages, I might agree I was punishing myself more than them. But I will never learn every language, tool, technique or skill I want to, so why would I ever start here?

Allow me to rephrase with an analogy to illustrate: I can easily solve 90% of my problems perfectly well with just integers, reals and complex numbers, so why would I ever learn general vector or matrix math?

Certainly that's your prerogative, but I hope you'll agree that you're ignoring a powerful tool that would subsume all of your previous understanding of these abstractions, and which would let you solve those same problems, and more, better.


It sure is easier to bash people than learn a new language!


While you're not necessarily unfair in your jab, I have trouble mustering any sense of guilt when I'm commenting on a thread purporting that every alternative choice in other languages is in fact an error on the part of its creators.


> I'm commenting on a thread purporting that every alternative choice in other languages is in fact an error on the part of its creators.

That's because the thread is called "Examples of Incorrect Abstractions in Other Languages". Naturally you will read about poor choices on the part of other languages' creators. If you want to read about poor choices on the part of Haskell's creators then read threads discussing those. You'll find the Haskell community can provide plenty of examples for discussion there too.

Haskell: The Bad Parts

https://www.reddit.com/r/haskell/comments/4sihcv/haskell_the...

What Template Haskell gets wrong and Racket gets right

https://www.reddit.com/r/haskell/comments/4tfzah/what_templa...

If you had the ultimate power and could change any single thing in Haskell language or Haskell ecosystem/infrastructure, what would you change?

https://www.reddit.com/r/haskell/comments/9fefoe/if_you_had_...

examples of bad warnings/errors?

https://www.reddit.com/r/haskell/comments/4yxog6/examples_of...

What's wrong with GHC Haskell's current constraint system?

https://www.reddit.com/r/haskell/comments/117r1p/whats_wrong...

Could you give me an example where Haskell is not good ?

https://www.reddit.com/r/haskell/comments/3b94w0/could_you_g...


I think that classes (in the school of OOP coming from C++ and Java) are incorrect abstractions. Here incorrect is in the eye of the beholder, but I would argue that tying the three fundamental principles of OOP (encapsulation, inheritance, polymorphism) into one abstraction leads to more problems than it solves.

In particular in Haskell, these things are separated into modules (encapsulation mechanism), abstract data types (inheritance mechanism) and type classes (polymorphism mechanism). But other modern languages have this separation as well, and it is cleaner.

Consider for example a common Java exam question, should you use interface or abstract class? This is a completely artificial problem that doesn't occur in languages that separate these abstractions orthogonally. Also another common exam question, access control (private, public, protected) is more complicated due to lack of the separation of encapsulation and inheritance principles.

Another example: Java is arguably much less readable than it could be, because organizing all the code as classes makes it more difficult to follow (especially without an IDE, the maze of directories is insanity) - you don't know which classes are just utility types or simple functions and which actually contain the meat of the code. A properly organized module is much more pleasant to work with.


In Java, it is perfectly acceptable to have either a file containing the primary public class and any number of private utility and implementation classes, or just a non-instantiatable enclosing class that works as a module and a bunch of interfaces, classes, and enums within it. Consumers can then refer to them either by their Class.SubClass name or the shorter SubClass name.


OK, thanks! I didn't realize it is possible, I haven't seen code that uses this technique.


Typically, nested classes in Java make it harder to follow what each class does, and where it ends and starts. The files then tend to become too big, and at worst end up cross-referencing each other's privates too much. At least, that is my experience with code that uses nested classes all the time - I am slowly refactoring it to split the classes into separate files.


The underlying misconception is that inheritance and polymorphism are OOP fundamentals. c.f. http://www.purl.org/stefan_ram/pub/doc_kay_oop_en and both C++ & Java certainly do thereby walk themselves into algebraic incontinence.


I deliberately didn't want to get into the Kay argument and the debate, because it is irrelevant to my point; if you don't want to call whatever Java/C++ is doing OOP, that is fine.

But if you have to ask, I think Kay is wrong too. I appreciate his vision, which is inspired by biology, but I think it's not the right way to go about building software systems, either.


The definition of OOP is super relevant when the definition encodes principles and abstractions in a discussion of ... principles and abstractions. It’s not a matter of what you or I want. Inheritance and polymorphism are incoherently designed in C++ and Java, but this isn’t a consequence of OOP. It’s a consequence of language designers experimenting and compromising.

Re. second point, I didn’t ask, so I won’t be stating my own opinion, but I will state my favourite OO language: Erlang.


I'm reminded of Mike Acton's 3 Lies of Object Oriented Programming. Lie 2: Code should be designed around a model of the world. He makes the point that object-oriented models can never really model real-world objects, and shouldn't try to. A game engine may have several ways of representing a chair, for instance (a physics chair, a breakable chair, a static chair, a usable chair). These probably shouldn't share an interface, despite all relating to real-world chairs.

https://youtu.be/rX0ItVEVjHc?t=1154


OK, but that is an argument against an idea that died 20 years ago. I can't think of code written in recent years that tries to model the real world, or a blog post promoting the idea of modeling the real world in classes.


A big problem is that university teachers are sometimes stuck 20+ years ago and teach that to students every year - one teacher can hurt a hundred+ students every year. And then we have to undo the damage when they join the workforce.

e.g. take this example (sorry guys if you see that, I googled for PassagerStandard) : https://github.com/sbrouard/public-transport-manager/tree/ma... - during a whole semester you model a freakin bus with different "passengers" - "standardpassenger", "stressedpassenger", and the bus stops also have behaviour modeled by objects like "CarefulStop", "PoliteStop", etc...

The whole thing is basically worst-case OOP where everything is modeled by an interconnected mess of objects and every exercise becomes just worse and worse than the previous. And students have to learn to do that every year and are judged on how "well" they did apply OOP techniques in order to graduate.

(note to the reader - the authors of this repo are students who had to implement this mess, not the authors of this exercise)


> "passengers" - "standardpassenger", "stressedpassenger"

That sounds pretty terrible. I presume passengers are unable to change state? Or is the instance replaced with an instance of another class?


Don't forget about Go:

    func Open(name string) (file *File, err error)
Your basic Optional/Maybe monad would have worked better here.

There is a lot of evidence that Rob Pike and company were not aware of sum types when inventing Go, indicating that this special tuple return type (which can never be reused anywhere and must be pattern matched immediately after it returns) was created out of a lack of knowledge and not as a design choice.

See this thread as evidence: https://news.ycombinator.com/item?id=6821389


Limbo, one of Pike's (et al) predecessor languages to Go, did have sum types, called there "pick adts".

Some people are ridiculously daring at dismissing others' reputations without the most basic of background checks.


Hold on. Instead of saying "some people" you should be more direct in what you mean. You should say YOU, refer to me directly, and have your words be in line with your intentions.

Now that being said, Go looks to me to indeed be made by someone who doesn't know that sum types exist. In my link Rob Pike himself admits that he didn't know about structural typing.

Additionally Rob seems to be in general not up to date on type theory. Please follow and read the link above to see what I'm talking about.

Now, that being said, never EVER did I dismiss Rob Pike's background. He's done great things. Now I will say something exactly in line with my intentions: you have told an utter lie where you accused me of dismissing Rob Pike's reputation. I have done NO such thing.

Additionally, one question: what site are you using to run a background check? doesPikeKnowWhatASumTypeIs.com? Sometimes people make mistakes. I admit the possibility that I'm wrong, but Limbo is kind of an obscure programming language. It's not easy to check, given that they use the term "discriminated union type" rather than "sum type."

I would argue that although this type is included in Limbo, Rob likely didn't use it much, or didn't know about its use for bringing errors into the type system, as he could easily have used it in Go. I could be completely wrong here, but one thing is not wrong:

>Some people are ridiculously daring at dismissing others' reputations without the most basic of background checks.

Accusatory language like this, even disguised with "some people", and which is additionally an utter lie, is not an acceptable way to start a discussion.


Where did you learn (or, more generally, where do people learn) about those Sum types?

The Wikipedia page for "Sum type" redirects to "Tagged union", implying those are synonyms; do you agree with that?

Sadly, that Wikipedia page only seems to refer to pages specific to some programming languages.

Could you maybe recommend an introductory book about type theory, preferably with an orientation to how it can be applied to language design? EDIT: or maybe a programming language theory book? To be honest, I am mainly interested in imperative languages, basically how could a better Go have been made.


Types and Programming Languages[0] by Benjamin C. Pierce is a good introduction to type theory and programming languages.

[0] https://www.cis.upenn.edu/~bcpierce/tapl/


Languages with Algebraic Data Types will help you get practical knowledge of how to use these types. Basically, types share an isomorphism with algebra, which lets certain type constructions mirror algebraic operations; hence the names sum types and product types.

Bool has a cardinality of two: True and False. A product type (essentially a tuple) of two Bools represents a Bool and another Bool, and has a cardinality of 4: (True, True), (True, False), (False, False), (False, True). The cardinality of a product type is the product (here 2 * 2) of the cardinalities of the individual types. Product types represent the concept of AND: a Bool AND another Bool. The product type is itself a type built from two (or more) types. In most languages this concept is represented as a struct or a tuple.

To remove confusion for sum types let us create a type with a cardinality of 4. FuzzyBool. It has 4 instances: VeryTrue, SortaTrue, SortaFalse, VeryFalse. A sum type represents the concept of OR. Let's see a sum type of a Bool OR a FuzzyBool (Bool + FuzzyBool). The new type can either be one of: (True | False | SortaTrue | SortaFalse | VeryTrue | VeryFalse). The cardinality is 4 + 2. Hence the term sum type.
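
A tiny Python check of the arithmetic (hypothetical enumerations, just counting inhabitants):

    from itertools import product

    Bool = [True, False]                                              # cardinality 2
    FuzzyBool = ["VeryTrue", "SortaTrue", "SortaFalse", "VeryFalse"]  # cardinality 4

    print(len(list(product(Bool, Bool))))   # product type: 2 * 2 = 4
    print(len(Bool + FuzzyBool))            # sum type: 2 + 4 = 6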

Languages are typically asymmetric here: they offer product types but lack an explicit way for the user to combine types to form a sum type.

The Golang return value for functions is actually the opposite of a sum type: it is a product type. Rob Pike was likely unaware that you can create a type that is either (File* or Error), and instead used ((File* | null) && (Error | null)) to attempt to keep error handling within the type system. It results in IO functions having a return type with four possible shapes - (File, nil), (File, err), (nil, nil), (nil, err) - and they just expect you to ignore (File, err) and (nil, nil) and assume they will never occur, even though the type system does not prevent them from occurring.
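
For contrast, a hedged sketch of the sum-type version in Python (3.10+ for match; the Ok/Err names are invented), where the impossible states simply can't be constructed:

    from dataclasses import dataclass

    @dataclass
    class Ok:
        value: str       # stands in for *File

    @dataclass
    class Err:
        message: str     # stands in for the error

    Result = Ok | Err    # cardinality |Ok| + |Err|: no (nil, nil), no (File, err)

    def open_file(name: str) -> Result:
        if name:
            return Ok(f"handle:{name}")
        return Err("empty file name")

    match open_file("data.txt"):
        case Ok(value):
            print("got", value)
        case Err(message):
            print("failed:", message)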

If you want to learn this stuff imperatively I would recommend Rust, though you could miss the big picture with Rust due to the borrow checker and the option to do typical C-style programming that avoids using the ADTs to the full extent of their power.

I would recommend diving into something like Haskell and getting to know algebraic data types and exhaustive pattern matching. You can avoid fully understanding Monads if all you want to understand is Sum types. Haskell will force you to utilize sum types very early on via exhaustive pattern matching.


My favourite example, because I've been working with it recently, is the nunjucks templating language. It has three or four different ways of almost-but-not-quite implementing functions. It has dynamic scoping (hello, 1960s!) But my favourite feature is its recursive for-loops. These are truly majestic. I would not believe someone could implement a language so broken if I had not witnessed it.



This reminds me of stdout's fine tune "My language is better than your language". It captures a lot of what this thread is about.


I never understood the complaint about generators. Languages can have both lightweight laziness (like generators) and heavier abstractions (like Scheme's SRFI-41). The overhead of generators in, say, JavaScript is minimal. If you want odd lazy lists, generators are a great thing to build them with. Generators and streams (as in Scheme's SRFI 41) are not mutually exclusive.
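
For instance, a minimal sketch of a lazy stream built on generators (Python syntax here; the same shape works with JS generators):

    from itertools import islice

    def naturals():
        n = 0
        while True:      # nothing runs until a consumer asks for a value
            yield n
            n += 1

    evens = (n for n in naturals() if n % 2 == 0)   # still lazy: nothing computed yet
    print(list(islice(evens, 5)))                   # [0, 2, 4, 6, 8]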


If Unicode were a monad, Haskell would have gotten String right.


Null and exceptions. (in Haskell, Maybe and Try/Either)


Yes, null is, as it turns out, an incorrect abstraction. It's like saying that all your types are Maybe a, and you cannot create a type that is not (with a few exceptions for base types, which confuses it even more).

I would also add Enums to the list, sum types are superior in all respects.


Nothing beats pattern matching on Maybe for exception or null handling, or variations of such (Rust's Option). It's just so clean syntactically.


I'd argue that a nullable type with refinements, as in Flow for example, beats Maybe most of the time. You can't make it simpler: one character of overhead (?), no special unwrapping constructs (just refinements), and otherwise normal code with the same guarantees.


Refinement typing is too complicated IME. Haskell guarantees that you can always pull out an expression into a function without changing anything, and vice versa, whereas with refinement types you can extract a function and find your code no longer compiles.


I was thinking just yesterday, while casting an int32 to a uint64, that there's no guarantee the int32 wasn't negative beyond my knowledge that the code was only ever incrementing it, and how most languages (that I have worked with at least) allow dangerous casting without enforcing any sort of error handling on the given operations.

I think this would fall into this category.


You should definitely have a go at Rust, where this conversion would have to be done explicitly via `try_from` [0], which may fail.

[0]: https://doc.rust-lang.org/beta/std/convert/trait.TryFrom.htm...


This classic contains a lot of them: https://www.destroyallsoftware.com/talks/wat


It's threads like this that remind me how little I know...


A lot of Haskell jargon is just simple everyday concepts dressed up with complicated neologisms.


You're not wrong that the terminology is needlessly confusing, but that also doesn't mean the ideas themselves are obvious or aren't powerful. The same happens in many academic spheres; good ideas are created, and then they're also dressed up in obscure language to make the creators feel like the concepts are as hard to understand as they were to invent.


Sure, if you consider precise mathematical terminology to be complicated neologisms.


Could you give an example?


Monad. It's just a very generic container. You can put things into it (unit/pure/return) and you can do things within it (bind/flatmap). Sometimes it's an effect or control flow container (Try, Future, Result, Either, Maybe, Option), sometimes it's just a List.

And then there's cata (catamorphism), which is the generic reduce/fold/accumulate/aggregate.

And then there's bifunctor, which is the general Either, Result, or List[(A,B)] thingie.

These concepts are very useful, but typed FP code nowadays is usually full of compromises in the name of typing. Typed FP code is constrained by what the type system allows.

This eventually boils down to everything becoming for-comprehensions / do-notations. Which is fine after you get the hang of it. But it's very easy to run into real-world problems that require "hacks" or inelegant solutions, or force a correct solution that is very time consuming.


Monad has a specific set of mathematical rules, though, which have important properties which make them well-suited for some very specific tasks, including defining the semantics of programming languages.

I'm not aware of any "simple everyday concept", or anything in any mainstream programming language, that comes close to being equivalent to what monads are. You can find examples of specific monads being used (and misused) in many ordinary programs, because of their deep connection to programming language design, but they're never explicitly identified as such, partly because most languages can't conveniently make use of an abstraction like that.

Similarly, there's more to catamorphisms than what you describe. What you described is essentially what the generic mathematical concept of a catamorphism looks like in a specific context, but that's like saying that the term "vehicle" is just the simple everyday concept of a bicycle dressed up with a complicated neologism.

Something similar goes for bifunctor. `Either` is an example of a bifunctor, but it's also an example of a functor. Conflating the generic concept with specific instances will mislead you.

The point about this terminology, and the concepts it represents, is that it classifies these abstractions with precise rules, allowing the commonality across many different abstractions to be identified and exploited in well-defined, rigorous ways.

You don't find that in mainstream programming languages or the literature about them. The closest you'll come is things like object-oriented collection class hierarchies - Collection / List / Set / OrderedSet etc. That kind of classification is analogous to the sort of terms we're discussing, but is nowhere near as well-factored or mathematically rigorous.

> This eventually boils down to everything becoming for-comprehensions / do-notations.

What language are you thinking of? Sounds like Scala maybe. Unfortunately I think Scala was a mistake from the point of view of introducing functional programming to a wider audience. The object/functional mixture results in a kind of worst of both worlds that doesn't show off the benefits of either to best effect.

> typed FP code nowadays is usually full of compromises made in the name of typing.

> But it's very easy to run into real-world problems that require "hacks" or inelegant solutions, or that force the correct solution, which is very time-consuming.

Perhaps you've had this experience from experimenting with functional languages, but it's not my experience from actually using languages like Haskell and OCaml in real projects for many years. I'm also very experienced with many mainstream languages - Java/Javascript/C/C++/Python etc. - and I find functional languages to be models of elegance by comparison. Statically debugging code by designing the types well ("make illegal states unrepresentable") is a huge timesaver when it comes to avoiding bugs and making significant changes to large codebases easier.
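(A tiny example of "make illegal states unrepresentable": instead of a Bool plus an optional name, where the compiler can't rule out the nonsensical combination, encode only the valid states.)

    -- illegal state possible: loggedIn = True with name = Nothing
    -- data Session = Session { loggedIn :: Bool, name :: Maybe String }

    -- illegal states impossible by construction:
    data Session = Anonymous | LoggedIn String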

There's a lot of research on extending the power of type systems, which it seems like you may be thinking of, but for most ordinary purposes, you don't need to worry about that.


Of course, you're right: the jargon is precise. That doesn't mean the underlying concepts are not what everyday programmers use. The jargon gave proper names to the general concepts; it reified them, made them first-class citizens.

I'm not saying they are not useful. (On the contrary, they are invaluable for high level discussions about certain aspects of programming, including control flow, data structures, and programming techniques.)

Every async/await implementation is basically a monad with fancy syntax.
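(A rough sketch of the analogy, with two stand-in IO actions - purely hypothetical, for illustration - where each bind in the do-block plays the role of an await:)

    import Control.Concurrent (threadDelay)

    -- stand-in "async" actions
    fetchUser :: Int -> IO String
    fetchUser uid = threadDelay 1000 >> pure ("user" ++ show uid)

    fetchPosts :: String -> IO [String]
    fetchPosts u = threadDelay 1000 >> pure [u ++ "/post1"]

    userPosts :: Int -> IO [String]
    userPosts uid = do
      user <- fetchUser uid   -- ~ await fetchUser(uid)
      fetchPosts user         -- ~ await fetchPosts(user)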

> Conflating the generic concept with specific instances will mislead you.

Yeah, agreed, no questions there. I'm just saying that the vast majority of programmers are nowhere near prudent enough to appreciate the strength of a correct/lawful system, plus they usually have other powerful concepts in their heads (after all there are many C programs that are sort of correct but not easily written with standard, total, and sound types, plus the required assumptions are not expressed in the source code). Another angle that might help convey what I'm trying to say: look at how much jargon is needed to explain that for-comprehensions execute the monad steps in sequence, but if you use mapN and an applicative then you can execute independent steps in parallel. Whereas it's just "race the futures" (or Promise.all, or whatever) in common programming parlance.
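(For reference, the Haskell spelling of that Promise.all idea, using the Concurrently applicative from the async package - independent steps composed applicatively run in parallel, unlike a sequential do-block:)

    import Control.Concurrent.Async (Concurrently (..))

    -- both actions run concurrently; the monadic version
    -- (a >>= \x -> b >>= ...) would run them one after the other
    bothOf :: IO a -> IO b -> IO (a, b)
    bothOf a b = runConcurrently ((,) <$> Concurrently a <*> Concurrently b)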

> Perhaps you've had this experience from experimenting with functional languages [...] > What language are you thinking of? Sounds like Scala maybe.

Yes, I don't know Haskell; I have a few years of Scala with some Scalaz experience. And maybe you're familiar with mypy and TypeScript, where it's obvious how problematic it is to build a bridge between existing semantics and a correct/sound type system. (And of course it's very much impossible, because you either have soundness or you don't; TS chose unsoundness in order to be useful, and mypy is useless unless you rewrite many things.)

> [...] and I find functional languages to be models of elegance by comparison

I haven't got the hang of any pure functional language (yet?). I like to think it's not for lack of trying. I see where the concepts can be useful (e.g. how TS and Rust would benefit from HKTs, and how many people are waiting for a more expressive type system to create better abstractions, better libraries, better APIs), but to me all Haskell code looks like a dadaist poem with strange symbols sprinkled here and there.

(E.g. I'm looking at the sample app for miso https://github.com/dmjio/miso#sample-application and I have no idea what is what: no idea about the shapes, what <# is, what >> is, why div_ and button_ have underscores after them, why div_ takes an empty list, what {..} is. And sure, most of these can be looked up, but it's still a lot harder than even the most damned FactoryFactoryBean in Java.

And sure, I appreciate that lenses and the Transition interface help with stuff, but that's a very small incremental ergonomic enhancement compared to what I wish for.

Similarly, when I look at servant's documentation I wonder how come such a powerful language, to model 'GET /user/:userid', can't - or doesn't - use /user/:userid, but "user" :> Capture "userid" Integer. The type is nice, and whatever :> is, I'm sure that's nice too, but I'm not finding the elegance that Ts.ED has - https://github.com/TypedProject/tsed#create-your-first-contr... )

So, all in all, I'm just a lazy party-pooper when it comes to Haskell (or FP in general). And mostly because the pain of debugging TypeScript and the myriad "any"s everywhere still seems the lesser one compared to grokking the cryptic symbols.


> I'm just saying that the vast majority of programmers are nowhere near prudent enough to appreciate the strength of a correct/lawful system

That seems like a failure of education. Imagine if we put up with bridges that collapsed because "most engineers are nowhere near prudent enough..."

People use what they were taught to use. Programming languages form part of that teaching. Bad programming languages teach people bad habits.

> after all there are many C programs that are sort of correct but not easily written with standard, total, and sound types

For example? I suspect there's a kind of all-or-nothing mentality behind this. I would bet you can easily write the sort of programs you're thinking of in a good functional language, if you don't get hung up on perfection. It would still be a significant improvement. There are many examples of this - XMonad comes to mind, which you can compare to i3 which is written in C.

It seems to me that most of what you're saying boils down to simple unfamiliarity. I would suggest that you're suffering from a kind of inverse expert blindness. You discount the amount of learning that went into your ability to understand a phrase like "race the futures," and this leads you to reject systems that are different enough that you can't easily apply that existing expertise to them.

Your difficulty in reading miso code without, apparently, consulting any documentation is a little silly, I'm afraid. What programming language that is significantly different from the ones you already know allows you to read its code without any background knowledge?

If you're actually working with the language, there are easy ways to answer all the questions you're asking. Short of *gasp* reading the docs, you can query the functions or operators you're interested in at the ghci prompt. For example:

    > :t div_
    div_ :: [Attribute action] -> [View action] -> View action
This helpfully points out that the first argument to `div_` is a list of Attributes. If there are no attributes, that list would be empty.

That wasn't so hard!

You could also read the docs: https://hackage.haskell.org/package/miso-1.5.2.0/docs/Miso-H...

Or look it up in Hoogle: https://hoogle.haskell.org/?=&hoogle=div_%20package%3Amiso

> Similarly, when I look at servant's documentation I wonder how come such a powerful language, to model 'GET /user/:userid', can't - or doesn't - use /user/:userid, but "user" :> Capture "userid" Integer

It's expressing the route in the language rather than in an untyped string. :> is just the operator servant defines to separate route components. But here (and in some of your other observations) you seem to be conflating a decision that the servant authors made with the language itself.
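(For the curious, the route from the comment above spelled out in full - roughly, assuming the usual servant imports and extensions, plus some User type:)

    -- GET /user/:userid, as a servant API type
    type UserAPI = "user" :> Capture "userid" Integer :> Get '[JSON] User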

> So, all in all, I'm just a lazy ...

That's your choice. You should just know that your critiques are off the mark - they're the stories you're telling yourself to justify not having to learn.


Thanks for responding! I really wasn't expecting you'd think my rambling was worth responding to.

> Imagine if we put up with bridges that collapsed because "most engineers are nowhere near prudent enough..."

Most developers don't develop nuclear reactor control software.

And I'm saying this not because your statement about education is wrong, but because there's a cost-benefit analysis going on, and traditionally correctness was too expensive. This is due to insufficient tools, a very strange mindset ("That seems like a failure of education."), plus the path-dependence of the real world (the whole history of computer programming somehow led to a lot more close-to-the-metal imperative code and programming languages than "here are the correct abstractions, let's see how far we can go with them" ones). Also, as far as I know, we need dependent types to easily model a lot of the invariants that would help with many kinds of programming errors. (Of course, validating safely composable components would go a long way - like that project targeting some subset of Rust - but that's basically what static analysis is. And there's a whole industry that tries to bolt on this kind of correctness verification after the code has been written.)

...

All in all, I'm simply pointing out that the Haskell ecosystem is still not exactly approachable. You say it's because of education, and people's choices, and other reasons. (Which is tautologically true.) But that doesn't help much. In my experience programmers get better when tools get better, and it makes sense to learn tools when they give sufficient value. (Correctness, productivity, ergonomics, performance, hipness and whatnot, and all of that on a reasonable time horizon.)

This whole comment thread started out from talking about jargon, and about the value of having mathematically exact, precise (correct) concepts as definitions for everyday programming. My observation is that programmers, and programming as an occupation, don't have the need/want for such high-level concepts and correctness. On top of that, even if everyone wants better programming tools, somehow the obvious answer isn't "rewrite it in Haskell". Maybe the gap is [still] too big [in the average case]. Maybe the very strong static typing, the pure FP (and the basically necessary immutability), the jargon, and the unfamiliarity are still too constraining, too "expensive" to learn and use.



