Hacker News

Sort of like monads: I imagine that as soon as you understand them you lose the ability to explain them. I have given up on "understanding" them and just try to understand when and where to use them.



That's OK! Do you think you really understand the natural numbers?


That's a good question, actually. I'm not OP, and I'm sure OP "understands" them as much as anyone does. But considering that the most common definition is an incomplete extrinsic definition by example, N = {1, 2, 3, ...}, I'd argue that in principle no complete extrinsic definition can be given :) That's more than a solipsism, because any number has inherent properties that make it different from all the others, even if these properties might be equal up to isomorphism with "the n'th successor to zero" - but then the system of isomorphisms is itself in question, begging the question ...

Whereas, if you know the intrinsic definition by the axioms, you know the definition, not the numbers. Big difference that is.

I'm keen on a distinction between numbers and formulas. If you take binary numbers and succ(), you have 0, 1 and an infinity of formulas. In my book that's only two natural numbers. We commonly change base to represent e.g. 16 as 0x10 - or 1000 as 1k, requiring additional figures. Figure, number - potato, potato.


>I'm keen on a distinction between numbers and formulas.

What's the operational advantage of this approach? I feel like it would wreak havoc with fundamental tools like mathematical induction.


The difference between f(x)=...=0 and x=2, for example, is quite huge for a lot of college students. The distinction between constants, operators and formulas is quite explicit in algebra. It seems a natural distinction to make. One advantage, I guess, is that a formula can be wrong, but a number can't, which is why proof by induction works at all.


How would a proof by induction work with only two natural numbers?


Monads are (abstractions of) container types that let you map a function over the elements of the container, where the function returns not just a new element but an entire new container.

    Just x => f x
    [x0,x1,...] => f x0 ++ f x1 ++ ...
    returnIO x => f x
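The first two lines of the sketch above can be written out as real Haskell; bindMaybe and bindList are hypothetical names for what the Prelude's >>= already does for these two types:

```haskell
-- Bind for Maybe: a Nothing short-circuits, a Just feeds its value to f,
-- and f returns a whole new Maybe container.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) f = f x

-- Bind for lists: apply f to every element and concatenate the
-- resulting lists into one flat list.
bindList :: [a] -> (a -> [b]) -> [b]
bindList xs f = concatMap f xs
```

For example, bindMaybe (Just 3) (\x -> Just (x + 1)) gives Just 4, and bindList [1,2,3] (\x -> [x, x * 10]) gives [1,10,2,20,3,30].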
What makes "magic monads" like IO useful is that you can feed internal state through them.

    thenIO (IO m) k = IO (\ s -> case m s of (# new_s, _ #) -> unIO k new_s)
    thenIO a b => c # forces a to be evaluated first to generate new_s
    # because b will refuse to perform any IO operations until it receives new_s
    # this sequence is packaged up as c, which can be passed to other thenIO's
    # until it reaches an unsafePerformIO or the implicit not-so-unsafePerformIO underneath main
The 'state' (s, new_s, etc.) of an IO object is just a placeholder to force in-order evaluation, but you could also store e.g. a Map of key-value pairs for a State monad.
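A minimal sketch of that State monad idea, with hypothetical names (bindState, get, put); the real versions live in Control.Monad.State, but the state-threading shape is the same as the thenIO definition above:

```haskell
-- A State computation is just a function from an incoming state
-- to a (result, outgoing state) pair.
newtype State s a = State { runState :: s -> (a, s) }

-- Bind runs the first computation, then threads its outgoing state
-- into the continuation - exactly the new_s plumbing in thenIO.
bindState :: State s a -> (a -> State s b) -> State s b
bindState (State m) k = State $ \s ->
  let (a, s') = m s
  in runState (k a) s'

-- Read the current state as the result.
get :: State s s
get = State $ \s -> (s, s)

-- Replace the state, returning unit.
put :: s -> State s ()
put s = State $ \_ -> ((), s)
```

For instance, runState (bindState get (\n -> put (n + 1))) 10 evaluates to ((), 11): get reads 10, the continuation writes 11 back.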

(Note that this comes from someone who considers it a compiler bug if `*(char *)0 = -1` doesn't produce `mov byte [0], 0xFF`, so take the implementation details with a grain of salt.)


You shouldn't lose the ability to explain something when you understand it. There are things which require a significant amount of background knowledge to understand thoroughly, but that's separate from retaining the ability to explain what they mean as you ascend the ladder of abstraction.

For example: I can say that a monad is an endofunctor which has been equipped with two structure-preserving transformations. This is correct, succinct and utterly tone-deaf unless I know that my audience is familiar with the mathematical concepts of algebraic structures, categories of structures, the meaning of a functor, the meaning of an endofunctor, the meaning of a transformation and the meaning of "structure-preserving."

By being a little less terse I can provide a useful definition which is vastly more accessible. Start with the basics. A set A = {a, b, c} is a collection of objects. A function f: A --> B is a rule that sends one element a in set A to one element b in set B. Then we write f(a) = b, which is familiar from high school algebra. A set can consist of any kind of mathematical object - integers, rationals, reals, other sets, polynomials or even functions. Whatever you'd like.[1] Now we'll say a category is a collection of sets alongside all the functions that we can define on each pair of sets.

So let's take a category consisting of all possible sets of integers and all the possible functions that can be defined on any two of those sets of integers. Naively we can think of this category as a set of all sets of integers. Then the analogue of functions defined on sets is functors defined between categories. When dealing with functions, we can define a function on a single set so that f: A --> A means every element a of A is associated with another element of A. We can also do this with categories; this is called an endofunctor, and it essentially means it will send each object of our category to another object in the category. One way I could do this is by defining a relation that takes each set of positive integers in the category and sends it to the corresponding set of negative integers. Each set {1, 2, 3, ...} will be sent to a set {-1, -2, -3, ...}; moreover, a function relating the set {a, b, c} and the set {x, y, z} will have a corresponding function relating the set {-a, -b, -c} to the set {-x, -y, -z} on the other end of our functor.

So to recap, we have a category which consists of all possible sets of integers, and we've defined an endofunctor which maps any single object (set) in the category to another set within the same category - a functor defined on the category itself. Moreover, this endofunctor preserves relationships between respective sets at the function level. Now a natural transformation is a relationship between two different functors so that all the relationships that exist under the first functor also exist under the second functor. So with our current endofunctor, we might augment it with a natural transformation that transitions this functor from sets of integers to groups of integers, or rings of integers, or vector spaces of integers, etc. The respective definitions of those algebraic structures aren't germane to this explanation; basically what we're after here is the idea that we can take a functor between two like sets of elements in the same category of sets of elements and manipulate it so that if a in {a, b, c} relates to -a in {-a, -b, -c} in the same category, a will still relate to -a (and other relations on either side of the functor) after we've changed the functor to mean something slightly different.
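In Haskell clothing, a natural transformation shows up as a polymorphic function between two functors that works uniformly for every element type; maybeToList' below is a hand-rolled version of the standard Data.Maybe.maybeToList, going from the Maybe functor to the list functor:

```haskell
-- A natural transformation from Maybe to []: one function that works
-- the same way no matter what the element type a is.
maybeToList' :: Maybe a -> [a]
maybeToList' Nothing  = []
maybeToList' (Just x) = [x]

-- The naturality condition says mapping then transforming equals
-- transforming then mapping:
--   map f . maybeToList'  ==  maybeToList' . fmap f
```

So map (+1) (maybeToList' (Just 3)) and maybeToList' (fmap (+1) (Just 3)) both give [4] - the "relationships are preserved" claim, made concrete.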

Putting this all together: a monad is a functor that sends every set in a category to another set within the same category and which has two natural transformations that "preserve" the relationships of the functor. What I've just explained is the mathematical sense of the word; the reason why this terminology is used in a language like Haskell is because it describes a very neat, abstract way of transforming one class of objects (e.g. strings) into another class of objects (e.g. integers). If we define a function for transforming any string into an integer and then we define two more functions for manipulating that function without invalidating its output, we have a functional monad. In functional programming this can be useful when we want to abstract away and compose functionality together.
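For the list monad, the two natural transformations are unit (Haskell's return) and join (flattening); unitL, joinL, and bindL below are hypothetical names sketching how >>= falls out of them:

```haskell
-- unit: wrap a value in a one-element list.
unitL :: a -> [a]
unitL x = [x]

-- join: flatten a list of lists into one list.
joinL :: [[a]] -> [a]
joinL = concat

-- bind is then just "map, and flatten the result".
bindL :: [a] -> (a -> [b]) -> [b]
bindL xs f = joinL (map f xs)
```

For example, bindL [1,2] (\x -> [x, x]) is joinL [[1,1],[2,2]], i.e. [1,1,2,2].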

Note that I'm not saying my explanation is perfect. It's greatly simplified. But it explains all the core concepts involved in the definition of a monad and it mostly does so from scratch.

_______________________

1. ...for this exercise. Please resist the temptation to bring up Russell's Paradox and ZFC.


As far as I understand it, quaternions are just matrices for a 4-dimensional space. The 4-dimensional space is really just x/y/z/scale.


Since other people are giving their monad tutorials, I'll give my take here:

Haskell has a feature called type classes which allows you to overload functions. The definition of a type class gives a list of functions with their type signatures and names. When you write an instance of a type class, you must give implementations for the functions defined in the type class. All the functions that you define in the instance must match the type signature given in the type class definition.
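A toy example of the mechanism described above - Describable is a made-up class, not a standard one, but the class/instance shape is exactly what Monad uses:

```haskell
-- The class declares the overloaded function and its signature.
class Describable a where
  describe :: a -> String

-- Each instance supplies an implementation matching that signature.
instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n
```

Calling describe True picks the Bool implementation; describe (3 :: Int) picks the Int one. That per-type dispatch is the overloading the parent comment describes.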

Understanding kinds is another critical prerequisite. Have you ever wondered what the type of a type is? Kinds are a simple construction that proves to be useful. Just as expressions have a type signature, a type expression has a kind signature. In its most basic form, a kind tells us how we can construct a type. We represent kinds by using asterisks * and kind functions ->. The asterisk is pronounced as "type".

The easiest way to understand kinds is by looking at a bunch of examples of types and type constructors. Monomorphic types such as Int and Bool have kind * . Type constructors are handled differently. An example of a type constructor is [] (list), which has kind * -> * . So list is a type constructor that takes in a type (which we represent with an asterisk), and returns another type. Therefore [Int] has kind * , since we applied the type Int to the list type constructor [], resulting in the type [Int]. Type constructors can also in some situations be partially applied, just like value constructors. Kinds are right associative, so the kind * -> * -> * is the same as * -> ( * -> * ).

Here is a table of some types in Haskell with their kinds:

  +---------------------------------------+-----------------------+
  | Int                                   | *                     |
  +---------------------------------------+-----------------------+
  | Bool                                  | *                     |
  +---------------------------------------+-----------------------+
  | Maybe                                 | * -> *                |
  +---------------------------------------+-----------------------+
  | Either                                | * -> * -> *           |
  +---------------------------------------+-----------------------+
  | [] (list type)                        | * -> *                |
  +---------------------------------------+-----------------------+
  | -> (function type)                    | * -> * -> *           |
  +---------------------------------------+-----------------------+
  | a -> b                                | *                     |
  +---------------------------------------+-----------------------+
  | [Either Int Bool]                     | *                     |
  +---------------------------------------+-----------------------+
  | Either Int                            | * -> *                |
  +---------------------------------------+-----------------------+
  | (->) a                                | * -> *                |
  +---------------------------------------+-----------------------+
  | (,,,) (four element tuple)            | * -> * -> * -> * -> * |
  +---------------------------------------+-----------------------+
  | Mu where                              | ( * -> * ) -> *       |
  | newtype Mu f = In { out :: f (Mu f) } |                       |
  +---------------------------------------+-----------------------+

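A few of the kinds in the table can even be checked by the compiler; Wrap below is a made-up wrapper whose explicit kind annotations GHC would reject if the table's kinds were wrong:

```haskell
{-# LANGUAGE KindSignatures #-}

-- Wrap demands a type constructor of kind * -> * as its first argument
-- and a plain type of kind * as its second.
data Wrap (f :: * -> *) (a :: *) = Wrap { unwrap :: f a }

-- Maybe :: * -> *  (row 3 of the table), so this compiles.
ex1 :: Wrap Maybe Int
ex1 = Wrap (Just 1)

-- Either String :: * -> *  (partial application, as in the "Either Int" row).
ex2 :: Wrap (Either String) Bool
ex2 = Wrap (Right True)
```

Trying something like Wrap Int Bool would be a kind error, because Int has kind * rather than * -> * .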
Now we are ready to look at the type class definition for monad:

  class Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
    (>>)   :: m a -> m b -> m b
    return :: a -> m a
On the first line, we give the name of the type class: Monad. This type class will be parameterized by a parameter named m. Based on how m is used in the function signatures, we deduce that m must have kind * -> * . This means that when we define an instance of the Monad type class, we tell what m should be. And m can be anything, as long as it has the kind * -> * ! In fact if we look at the table above, we can see that Maybe and the List type constructors have the required kind. If you've read any monad tutorials, these are often used as simple examples of monad instances.

The most important function in the monad type class is >>=, so let's focus on that. >>= is an infix function with two parameters, one of type "m a" and the other of type "a -> m b". The return type of >>= is "m b". So if we were to define a monad instance for m=Maybe, the type of that particular overloaded version of >>= has to be "Maybe a -> (a -> Maybe b) -> Maybe b". What should the definition of >>= be? Well it can be anything as long as the type signatures match up!
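Here is that Maybe-shaped instance written out by hand; Opt, None, and Some are made-up names to avoid clashing with the Prelude, and modern GHC also requires the Functor and Applicative instances as superclasses:

```haskell
data Opt a = None | Some a deriving (Show, Eq)

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

-- The overloaded >>= for m = Opt has type
--   Opt a -> (a -> Opt b) -> Opt b,
-- matching the class signature with m filled in.
instance Monad Opt where
  None   >>= _ = None
  Some x >>= f = f x
```

So (Some 3 >>= \x -> Some (x + 1)) is Some 4, while None short-circuits: (None >>= \x -> Some (x + 1)) stays None.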

Take a look at the second parameter that has type "a -> m b". You can think of this as a sort of callback or continuation function. Typically the >>= function takes the first parameter (which has type "m a"), does some processing to unwrap the value, passes this value to the callback function and then takes this result and does some more processing before returning the result (which must have type "m b"). The >>= can call the callback function as many times as it wants (or maybe not at all). It could do anything, as long as the type signature matches up. There's one more detail that I've so far been ignoring: the overloaded functions need to follow some rules beyond the type signature constraint. These are called the monad laws, and you can find out more about them elsewhere by searching for "monad laws".
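For reference, the three monad laws can be spot-checked by hand; here they are for Maybe, with f and g as arbitrary example functions:

```haskell
f, g :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)

-- Left identity: return a >>= f  ==  f a
leftId :: Bool
leftId = (return 3 >>= f) == f 3

-- Right identity: m >>= return  ==  m
rightId :: Bool
rightId = (Just 3 >>= return) == Just 3

-- Associativity: (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assoc :: Bool
assoc = ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))
```

A few sample points are not a proof, of course - the laws must hold for all values - but checks like these catch most broken instances.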

Why even bother with the Monad type class at all? It turns out that programmers have discovered many different programming patterns that match up with these signatures. In fact, this particular pattern has been observed to be so common and useful that the authors of Haskell decided to provide some built-in language support to make it easier to use (do notation in Haskell). You can think of >>= as a sort of programmable semicolon, where the overloaded bind operator is parameterized by a specific continuation function, but the overall program flow is dictated by the specific overloaded version of >>=.
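A small illustration of that sugar: the two definitions below are equivalent, the first written with do notation and the second as the >>= chain it desugars to.

```haskell
-- Sugared: each line's "semicolon" is really a >>=.
sugared :: Maybe Int
sugared = do
  x <- Just 3
  y <- Just 4
  return (x + y)

-- Desugared: the same computation as explicit binds.
desugared :: Maybe Int
desugared = Just 3 >>= \x -> Just 4 >>= \y -> return (x + y)
```

Both evaluate to Just 7; swap in a Nothing anywhere and Maybe's >>= short-circuits the whole chain to Nothing.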



