The downside of that (which can be provided by union types) is that `Maybe (Maybe X) == Maybe X` which may or may not be what you're after. The particular weakness is the interplay with generics where `forall t. Maybe t` depends upon the `Maybe` and the `t` not interacting. If you've got `forall t. t | null` then this type may secretly be the same as `t`---and if you handle nulls then it might conflate meanings unintentionally.
So you're saying that the one Nothing has a different meaning than the other Nothing, and that this is something that anybody would want? Or that we have a Nothing and a Just Nothing? I'm sorry, but I don't get how that would be considered good design in any context. Do you have any example where this would be considered sensible?
Having a situation where one handles both `Nothing` and `Just Nothing` in the same context should be rare.
But you might be writing functions on some abstract data structure with a type parameter `a` (say it’s a graph and users can tag the vertices with values of type `a`). And (maybe just internally) there are situations in your algorithms where a value might be absent, so you use `Maybe a`. If `Nothing == Just Nothing`, your users couldn’t use a `Maybe` type for `a` anymore, because your algorithm wouldn’t be able to distinguish its own `Nothing`s from the user’s.
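A minimal sketch of how that plays out, using `Data.Map` as the vertex-tag store (`Tags` and `tagOf` are invented names for illustration):

```haskell
import qualified Data.Map as Map

-- Hypothetical vertex-tag store: users tag vertices (Ints here)
-- with values of type a.
type Tags a = Map.Map Int a

-- The algorithm's own lookup: Nothing means "vertex has no tag".
tagOf :: Int -> Tags a -> Maybe a
tagOf = Map.lookup

-- If the user picks a = Maybe String, the result is Maybe (Maybe String):
--   Nothing       -> the algorithm's failure (no such vertex)
--   Just Nothing  -> the user stored Nothing at that vertex
--   Just (Just s) -> the user stored s
```

Here `tagOf 1` on a map containing `(1, Nothing)` yields `Just Nothing` (the vertex exists, the user tagged it with `Nothing`), while `tagOf 3` yields `Nothing` (no such vertex). Conflate the two and the distinction is gone.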
In any concrete context, `Maybe (Maybe A)` could _probably_ be simplified to just `Maybe A` as we expect. Alternatively, we could be in a situation where there are two notions of failure (represented here as `Nothing` and `Just Nothing`), in which case we'd be better off simplifying to `Either Error A`, where `Error` covers your multiplicity of error cases.
But while these are all obvious in a concrete context, what we're often doing is instead compositional. If I want to write code which works over `Maybe a` for some unknown, library-user-declared type `a`, then I may very well end up with a `Maybe (Maybe Int)` that I can't (and mustn't) collapse.
As a concrete example, consider the type of, say, a JSON parser
ParserOf a = Json -> Maybe a
We hold the notion of failure internal to this type so that we can write
fallback : ParserOf a -> ParserOf a -> ParserOf a
which tries the first parser and falls back to the second if the first fails. We might also want to capture these failures "in user land" with a combinator like
catch : ParserOf a -> ParserOf (Maybe a)
If we unwrap the return type of `catch` we get a Maybe (Maybe a) out of a compositional context (we can't collapse it). Additionally, the two forms of failure are distinct: one is a "handled" failure and the other is an unhandled failure.
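A minimal sketch of that parser type, assuming a toy `Json` with just nulls and ints (the `int` base parser is an invented example):

```haskell
import Control.Applicative ((<|>))

-- Toy JSON; the real shape doesn't matter for the point being made.
data Json = JNull | JInt Int deriving (Eq, Show)

-- Failure is held internal to the parser type.
type ParserOf a = Json -> Maybe a

-- Try the first parser; fall back to the second if it fails.
fallback :: ParserOf a -> ParserOf a -> ParserOf a
fallback p q json = p json <|> q json

-- Surface failure "in user land": catch itself never fails; a failed
-- inner parse becomes Just Nothing, a "handled" failure.
catch :: ParserOf a -> ParserOf (Maybe a)
catch p json = Just (p json)

-- An invented base parser: succeeds only on integer nodes.
int :: ParserOf Int
int (JInt n) = Just n
int _        = Nothing
```

Running `catch int` on `JNull` yields `Just Nothing` (handled failure) where plain `int` yields `Nothing` (unhandled failure), and both inhabit `Maybe (Maybe Int)` territory that composition forbids us from collapsing.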
Similarly, a polymorphic function may want to put values of an unknown type in a Map or MVar. From the outside it may be completely reasonable to call that function on Maybe Foo, which would mean a Maybe (Maybe Foo) somewhere internally and I would struggle to call that bad design.
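For instance, a write-once cell built on `MVar (Maybe a)` (a sketch; `newCell`, `setCell`, and `readCell` are hypothetical names): instantiating `a` at some `Maybe Foo` means a `Maybe (Maybe Foo)` lives inside, and both layers carry distinct information.

```haskell
import Control.Concurrent.MVar

-- A write-once cell: the MVar holds Maybe a, where the *outer*
-- Nothing means "nothing stored yet".
newCell :: IO (MVar (Maybe a))
newCell = newMVar Nothing

setCell :: MVar (Maybe a) -> a -> IO ()
setCell cell x = modifyMVar_ cell (const (return (Just x)))

readCell :: MVar (Maybe a) -> IO (Maybe a)
readCell = readMVar
```

If the caller stores a user-level `Nothing` in a cell of `Maybe Int`, reading back gives `Just Nothing` ("something was stored, and it was `Nothing`"), distinct from the `Nothing` of an empty cell.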
I think the obvious implementation is `maybe (Left False) (maybe (Left True) Right)`
If we have a value then we clearly have both layers. If we don't have a value then we need to distinguish `Nothing` from `Just Nothing` by way of the `Bool`.
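Spelled out as a full definition (the name `distinguish` is mine):

```haskell
-- Collapse Maybe (Maybe a) into Either Bool a without losing
-- information: the Bool remembers which layer was absent.
distinguish :: Maybe (Maybe a) -> Either Bool a
distinguish = maybe (Left False) (maybe (Left True) Right)
```

So `Nothing` maps to `Left False`, `Just Nothing` to `Left True`, and `Just (Just x)` to `Right x`; all three inhabitants survive the translation.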
Imagine you're working on a compiler. You need to represent a compile-time computed value of type Maybe Int (e.g. you are precomputing nullable integers).
You see 1 + null. So you have add: Maybe Int -> Maybe Int -> Maybe Int, which takes two precomputed values and returns the new precomputed value for the operation's result.
However, you can't precompute Console.readInt().
For any given expression, you may either be able to compute its value at compile time, or not.
What is the output type of compileTimeCompute: Expr -> ???
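One plausible answer, sketched in Haskell: the outer `Maybe` records whether the expression was computable at compile time at all, and the inner `Maybe` is the nullable value itself. (`Expr` and its constructors are invented for illustration; the thread only names `add` and `compileTimeCompute`.)

```haskell
-- A tiny expression language over nullable integers.
data Expr
  = Lit Int          -- integer literal: known at compile time
  | Null             -- the null literal: known at compile time
  | Add Expr Expr    -- e.g. 1 + null
  | ReadInt          -- Console.readInt(): unknowable at compile time
  deriving Show

-- Adding two precomputed nullable ints: null is contagious.
add :: Maybe Int -> Maybe Int -> Maybe Int
add x y = (+) <$> x <*> y

-- Output type: Maybe (Maybe Int).
--   Nothing       -> couldn't compute at compile time (ReadInt)
--   Just Nothing  -> computed: the value is null
--   Just (Just n) -> computed: the value is n
compileTimeCompute :: Expr -> Maybe (Maybe Int)
compileTimeCompute (Lit n)   = Just (Just n)
compileTimeCompute Null      = Just Nothing
compileTimeCompute ReadInt   = Nothing
compileTimeCompute (Add a b) =
  add <$> compileTimeCompute a <*> compileTimeCompute b
```

Under this reading, `1 + null` precomputes to `Just Nothing` while `1 + Console.readInt()` is `Nothing`, and the two failures mean genuinely different things.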
I don't understand your example. What does compile-time computed stuff have to do with readInt()?
I get that it might be possible to do that, use a Maybe Maybe T. But it's like an optional<bool> in C++. It can be done, it's just not a good idea. So if you design your system not to allow that in the first place, nothing of value was lost.
If you have specific error cases that you want to communicate, like "what was read from the console didn't parse as an int" as opposed to "the computation didn't find a result", then using the two values "Nothing" and "Just Nothing" as the two distinct values to encode that is not a sound design. Either you have meaning, or you have Nothing. Nothing shouldn't have any meaning attached to it.