Hacker News
Programming with Categories (brendanfong.com)
342 points by kercker on Sept 2, 2020 | 108 comments



As someone who respects functional programming (because it removes geniuses from competing in my space), here's a nice video: https://www.youtube.com/watch?v=ADqLBc1vFwI

What is the beautiful monospace font in the pdf here http://brendanfong.com/programmingcats_files/cats4progs-DRAF...?


> here's a nice video https://www.youtube.com/watch?v=ADqLBc1vFwI

OMG. That might just be the funniest god-damned thing I've ever seen in my life. Thanks for sharing that!

On a separate note: does anyone know where this footage is originally from? Some WWII movie, I would guess?


https://en.wikipedia.org/wiki/Downfall_(2004_film)

Probably one of the best "WWII movies" (it's really only about the final days in the bunker) ever.


Just seconding that Downfall is brilliant. Totally off-topic now, but also extremely good is Das Boot.


Thanks. I will check that out.


I laughed a lot with the video too.

Jokes aside, a book like CTM shows both paradigms have their place, and can also co-exist in the same system.

Besides, this paper cleared all my doubts about high-order FP having a lot of (unexploited) potential:

http://conal.net/papers/compiling-to-categories/


> here's a nice video https://www.youtube.com/watch?v=ADqLBc1vFwI

This made me laugh way more than it should have. "Why don't you pattern match my fist all over your faces!"


"Don't worry, Haskell transpiles to Java now."

And the last line....



> The number of elements of a set X is called its cardinality because cardinals were the first birds to recognize the importance of this concept.

Alright, maybe I will keep reading this book.


> What is the beautiful monospace font in the pdf

t1xtt, from the txfonts package, freely available


Thank you! No ttf/otf though :)


It looks like a fork of Luxi Mono with slashed zero, which is available in ttf. And there is a more modern version of it with better character coverage -- Go Mono.


The last line. Oh god, the last line. I haven't cackled that loudly in months.


    Hitler: What's a monad anyway? No one who understands monads can explain what they are
    Underling (hurriedly): A monad is just a monoid in the category of endofunctors
This caused me to choke on my coffee.


> A monad is just a monoid in the category of endofunctors

A helpful analogy can be drawn by comparing two facts: a composition of something with its inverse produces the identity (a.k.a. unity, to use the Latin root), while a composition of something (e.g. a functor) with its "adjoint" (not quite the inverse) produces something similar that is better said in Greek.


And thus, another cycle is completed.


You just made those words up right now!


Oh, another Haskell can't do IO joke.


Yes, but this time it was a funny one. I laughed, and not just at AbstractSingletonProxyFactoryBean.

Gotta be able to laugh at yourself sometimes.


But the important takeaway is that if you don't like Haskell, you might as well be Hitler.


Thanks for replying. I gave up on the beginning because those jokes tend to always be the same.

(And to be fair, half of them are the same overused ones. But the others are good.)


"Thank god you haven't found Prolog..."


It's true, and it starts off with the very worst one. I almost closed the tab like 30 seconds into the video.


The beginning wasn't great, but it had some really hilarious lines later.


I think the joke was that Haskell can do IO but haskellers can’t


you are my hero for this comment.


When this was taught in January at MIT, a highlight was something I'd not seen elsewhere: someone called it the "aftermath" (a triple pun). After the one-hour traditional-ish lecture (on video), the room was reserved for an additional hour.

When previously taught, people would remain afterwards to ask questions, discuss math, and chat. So this was an iterative-improvement formalization of that.

People would gather in front of the blackboards in fluid discussion clusters, catalyzed by the three instructors and wizzy others, not all having to stay for the entire hour, but drifting off as discussion died away. They could show material they had pruned from the lecture for want of time, or that got dropped as they ran over. Or alternate presentation approaches they had considered before selecting another. They could be much more interactive. One commented roughly "If I was tutoring someone, I'd never present the material this way". It was a delightful mix of catching the speaker after a talk to ask questions, a professor's office hours, a math major's lounge, an after-talk social, tutoring, an active-learning inverted classroom, hanging out with neighbors in front of the hallway blackboard, chalk clattering and cellphones clicking to snag key insights... It was very, very nifty.

So, the book is nice. And lecture notes. And videos. But... the best part isn't there. Perhaps the next iterative improvement is to capture the aftermath on video, and share that too.

And as we look ahead, planning distance-learning and XR tools... maybe something like this is a vision to aspire to. The insane ratio of expertise to people learning is not something one can plausibly replicate in meatspace. But as conversations in front of a virtual blackboard gradually become technically feasible, something like this might pay for the cost of it, with transformative impact.


This is the most valuable part of the in person learning experience. Free form discussion and building intellectual context around a subject. I experienced this in a community college environment. This should be encouraged in any learning environment.


This is what I sort of dreamed university would be like. But alas it wasn’t. Just lectures and students doing the minimum to get through the week and hit the bars.


wait, you are saying that at MIT you had to sit for a video?

Why are you paying for that nonsense?


[flagged]


Both your comments are really bad hot takes.


If you're interested in how Category theory and Algebra can inform the design of software, have a look at ZIO Prelude.

https://github.com/zio/zio-prelude

It's a brand new library for Scala that contains reusable mathematical structures. Still based on algebra and category theory, but it expresses them somewhat differently from how they've been expressed in Haskell (and similar languages). For example, unlike Haskell (and Scala's own cats and ScalaZ), it doesn't present the "traditional" Functor -> Applicative -> Monad hierarchy. Instead, it presents the mathematical concepts in a more orthogonal and composable way.

One example out of many: you don't have a Monad. You have two distinct structures:

* Covariant functor, with typical map operation `map[A, B](f: A => B): F[A] => F[B]`

* IdentityFlatten which has a flatten operation `flatten[A](ffa: F[F[A]]): F[A]` and an identity element `any: F[Any]`

When combined together (Scala has intersection types), you get something equivalent to the traditional Monad.
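This isn't ZIO code, but the decomposition can be sketched in Python using the built-in list as the functor `F`: monadic bind falls out of combining the two smaller pieces.

```python
# A sketch (list as the functor F) of splitting Monad into two structures:
# a covariant map, and a flatten with an identity element.

def fmap(f, fa):
    # Covariant functor: map[A, B](f: A => B): F[A] => F[B]
    return [f(a) for a in fa]

def flatten(ffa):
    # IdentityFlatten: flatten[A](ffa: F[F[A]]): F[A]
    return [a for fa in ffa for a in fa]

def bind(fa, f):
    # The traditional monadic bind is recovered: flatten after map.
    return flatten(fmap(f, fa))

print(bind([1, 2, 3], lambda x: [x, x * 10]))  # [1, 10, 2, 20, 3, 30]
```

In ZIO Prelude the two pieces are combined via Scala's intersection types rather than by hand-writing `bind`, but the algebra is the same.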

The project is in its infancy, so it may still change significantly. Look here for a more detailed explanation:

https://www.slideshare.net/jdegoes/refactoring-functional-ty...

https://www.youtube.com/watch?v=OwmHgL9F_9Q


Are there any useful types that have `flatten` but not `map` or `contraMap`?


If I understood John's motivation from the video, it's that this enables the Flatten structure to be inspectable, because the innards are not hidden behind an opaque closure (A => F[B]), as is the case in regular Monad.

Btw, this is a problem that the "Selective applicative functor" also aims to alleviate.

You can read more about the inspectability problem (and Selective) at http://eed3si9n.com/selective-functor-in-sbt

The context there is sbt, which is a build tool, but I'm sure inspectability plays a role in many other areas. Note how he contrasts "Applicative composition" and "Monadic composition".
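The inspectability contrast can be made concrete with a toy sketch (hypothetical task names, not sbt's actual API): in the monadic style the next step hides inside a closure `A => F[B]`, while in the applicative style the whole structure is plain data.

```python
# Monadic style: the second task is computed by an opaque function;
# the only way to learn what it is, is to run stage one.
def monadic(desc, cont):
    return ("bind", desc, cont)  # cont: value -> task, uninspectable

# Applicative style: all stages are listed as plain data up front,
# so a tool can traverse the structure without executing anything.
def applicative(*descs):
    return ("all", list(descs))

m = monadic("download", lambda path: ("compile", path))
a = applicative("download", "compile")

def static_tasks(t):
    # Static analysis: which task names are visible without running?
    if t[0] == "all":
        return t[1]
    return [t[1]]  # the continuation's contents are invisible

print(static_tasks(a))  # ['download', 'compile']
print(static_tasks(m))  # ['download'] -- "compile" is hidden in the closure
```

This is the same reason a build tool can draw the full task graph for applicative composition but not for monadic composition.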


I've read a lot about Category Theory, and I'm amazed at the abstraction level that lets you compose with different mathematical domains (geometry, topology, arithmetic, sets, ...).

And yet, current mathematics relies heavily on ZFC set theory. Why is that? (Is that assumption even correct?)

From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?). That's what motivated the formalization of Type Theory and the invention of Type Systems in programming languages.

According to wikipedia[1], some type theories can serve as an alternative to set theory as a foundation of mathematics.

It seems to me that Category Theory fits the description. So why don't we see a huge "adoption" in math fields?

Thank you in advance for your clarifications :)

[0] - https://en.wikipedia.org/wiki/Russell%27s_paradox

[1] - https://en.wikipedia.org/wiki/Type_theory


ZFC does not suffer from Russell's paradox, since it doesn't allow a "set of all sets". If it did, the search for new foundations would be much more widespread. New foundations are usually only considered seriously once they can be shown to be relatively consistent with ZFC or the slightly stronger but still uncontroversial TG set theory.

There are people working on categorical foundations, but the main reason for the lack of broader popularity or drive is that most mathematicians don't do work that is "foundational" in that sense. For example, if you're an analyst, you generally don't care exactly how your real numbers are built (Dedekind cuts, Cauchy sequences, etc.). You only care that they satisfy a certain set of properties, that you can define functions between them, and that's about it. Most mathematical reasoning at this level is insensitive to differences in proposed foundations, except "constructive" foundations that don't have the law of excluded middle (which are unpopular since they make many proofs harder and some impossible).

What is very popular in mathematics is using category theory at a high level; basically every field uses its concepts and notation to some degree at this point. New foundations are only really relevant to this program in that they may make automated proof checking easier, which is also not a widespread practice in mathematics.


It is not true that "basically every field uses its concepts and notation to some degree at this point." In particular this is false for mainstream combinatorics, PDE, and probability theory, to give a few examples.

In fact, I would suggest that most mathematicians don't care about category theory at all.


This all hinges on "mainstream". For example, in combinatorics, combinatorial species are a vast organization of the all-important concept of generating function. They were developed by category theorists and are most tidily organized along categorical lines. If you don't think this is close enough to mainstream, I can't dispute that. It's a value judgment.

There is often an undercurrent of category theory within a subject that maybe most people are not privy to. Anything to do with sheaves or cohomology (which I know factor into some approaches to PDEs) is using categorical ideas.

Every generation, it seems, has some contingent of serious mathematicians who consider category theory marginal in their field of interest. But every generation, that contingent grows smaller as more mathematics as practiced is brought into the fold. Maybe they're coming for you next :)


Respectfully, I disagree. The question of what's mainstream and valued by the community is empirical and can be answered by looking at what's published in the leading combinatorics journals. And anyone can check those out and see that categories are basically absent. So as a sociological fact, I maintain it's far from the mainstream.

Whether combinatorialists ought to elevate certain work is of course of a question of value, but it's also a different question.

Also, in no way are sheaves or (co)homology essentially category-theoretic ideas. It's possible to develop and use these ideas without mentioning categories at all (and e.g. Hatcher's introductory textbook does just this, although he mentions in an appendix the categorical perspective later). In general I think it's good to remember that homological algebra and category theory are not the same subject. Sure, I can develop a theory of chain complexes over an arbitrary abelian category, but most of the time you just need Hom and Tor over a ring. (Again, see Hatcher.)

Finally, I'm not sure there has been a serious uptake in category theory in the mainstream of some field of mathematics since, I don't know, at least 50 years ago? We've understood for a while now what it's good and not good for. This hasn't stopped people from trying to inject it in fields where it doesn't do any good (e.g. probability), but for that reason those attempts are mostly ignored.


> Respectfully, I disagree. The question of what's mainstream and valued by the community is empirical and can be answered by looking at what's published in the leading combinatorics journals. And anyone can check those out and see that categories are basically absent. So as a sociological fact, I maintain it's far from the mainstream.

I think it is also a reasonable interpretation to take "mainstream" as "pertaining to the main subject matter of the field". Anyway, I think it is the case that the mainstream of combinatorics or probability is yet so big that a particular researcher or even group of researchers can be comfortably in the mainstream and yet have never cared for or even heard of some other line of research that is also mainstream.

The founding paper of combinatorial species [1] has hundreds of citations including many in what I gather are top journals in combinatorics, and even some in the Annals of Probability. So, what are we to make of that? Some people who are serious enough about combinatorics or probability to get published in serious journals have read, perhaps understood, and maybe even taken seriously some of these categorical ideas?

In any case, I respect your viewpoint. In my youth I was a bit category-crazy, trying to use it to organize all of my mathematical knowledge. I'm much more prudent about it these days but I'm still an optimist that we will find more unifying ideas in mathematics through it.

[1] https://www.sciencedirect.com/science/article/pii/0001870881...


It may be the case that combinatorics is large, with mathematicians focused on their own particular sub-specialities, but I still don't believe category theory can properly be construed as "mainstream combinatorics" in any real sense.

Regarding your claim about that paper being cited by papers in top journals, I checked the first five pages of citations in Google Scholar for combinatorics and probability journal papers.

It's cited once in an offhand way in the concluding discussion of "The Cycle Structure of Random Permutations." No categorical concepts are used there. Ditto for the "Independent process approximations" paper (except now cited in the introduction). Another one-sentence mention in the context of background literature appears in "Tree-valued Markov chains derived from Galton-Watson processes." Same for "A Combinatorial Proof of the Multivariable Lagrange Inversion Formula" and "Bijections for Cayley Trees, Spanning Trees, and Their g-Analogues."

There's a one-sentence mention with actual (slight) mathematical content in "Limit Distributions and Random Trees Derived from the Birthday Problem with Unequal Probabilities." But you don't need categories to prove the bijection they're referring to, from what I understand.

In none of these instances is the work used in a substantive fashion, and unless I missed something no paper features the word "category." It's getting cited because authors have a duty to survey any potentially related background literature.

(On page 6 I found "Commutative combinatorial Hopf algebras," which does use a functor from that paper. It was published in a journal that is decent, but very far from the top.)

So, I think we can reject the notion that Joyal's paper has seriously influenced the fields of combinatorics or probability.

I'm sorry to harp on this, but I see claims like yours about the importance of category theory thrown around a lot on here, and often I feel that they're clearly wrong. So I thought it would be good to provide some details this time around.


I should have explained my picks. I didn't just look for a citation, but also for at least some nontrivial review of the paper's results.^

I pick these two papers. The first is literally about adjunctions pertaining to combinatorial species whereas the second devotes an entire section to reviewing the theory and stating results that it uses in a way that I don't really understand. I'm going to read the first paper though because it's relevant to my interests :)

Rajan. The adjoints to the derivative functor on species

https://www.sciencedirect.com/science/article/pii/0097316593...

Panagiotou, Konstantinos; Stufler, Benedikt; Weller, Kerstin. Scaling limits of random graphs from subcritical classes.

https://projecteuclid.org/euclid.aop/1474462098

The fact that I have to go rummaging around for these examples kind of proves your point, doesn't it? I don't think category theory will lead to any fantastic new results in the fields we're discussing, but the bar is quite low for it to be useful as an organizational tool.

^ I searched for the word functor of course! :)


OK. The first paper is indeed literally about species and in a good but not top journal. But it's not connected to the mainstream of combinatorics in any way, in the sense that if you aren't a priori interested in species, there's no reason to be interested in the paper.

The second paper is interesting because it's in an excellent journal and about a problem not obviously connected with species. So I agree that it counts as a good example for your case. I also agree with your conclusion – it's an exception that proves the rule, so to speak. Most probabilists have no need for category theory, and most AoP papers don't use categories. (I feel that may literally be the only one?)

I also agree that, to the extent it's useful, it's useful as an organizing principle and not "substantively." I suppose this explains why I feel it's overhyped: mathematicians care about solving problems, and the insights that solve the problems ultimately have to come from some problem-specific observations.


> most mathematicians

Agreed, but only at the level where they do not care about abstract math (algebra, topology, etc.) in general. As soon as you step into the territory of abstract math, especially where different disciplines blend, such as homology and cohomology, category theory (and its diagram language) helps a lot to clarify things. (Incidentally, a lot of this stuff is now part of "applied math" as well, having found its way into theoretical physics, for example.)


I'm not sure I'm comfortable characterizing the fields where category theory is useful as "abstract math." Modern PDE is plenty abstract, for example. Probably it's better to say that the usefulness of category theory is proportional to the problem's distance from algebraic topology and algebraic geometry.

I also am reluctant to characterize theoretical physics as "applied math." I haven't seen anyone who calls themselves an applied mathematician use category theory in a substantive way (where here I am thinking about numerical computing, mathematical biology, and so on).


I have only an MA in math, but I have the impression mathematicians can be arranged on a spectrum between ad-hoc theorem provers and giant machinery builders, between those like Paul Erdős and those like Alexander Grothendieck.

The thing to keep in mind is that a mathematical structure can be expressed as an instance of any number of more general mathematical structures ("mathematical machinery"). The structures that are useful, however, are those that are "illuminating", a somewhat vague criterion, but one which generally requires that a structure facilitates and unifies proofs of important theorems. Wiles' proof of Fermat's Last Theorem showed the value of many forms of this mathematical machinery [1], including category theory.

At the same time, I think things like Gödel's work on cutting down proofs and Chaitin's Omega constant suggest that some number of "important" theorems within a given axiomatic system will have long proofs that don't necessarily benefit from the application of a given piece of mathematical machinery, whatever that machinery is.

And I think that's related to efforts to apply category theory to programming via functional programming. I feel like it's an effective method for certain kinds of problems, but you get a certain problematic "it's the best for everything" evangelism that doesn't ultimately do the approach any favors.

[1] https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_...


> From what I've learned so far, set theory suffers from Russell's Paradox

So-called "naive set theory" does, but my understanding is that this paradox (and others like it) sparked a crisis in mathematics that led to the creation of ZFC and others. ZFC is not known to be inconsistent (due to deep and technical reasons, I can't claim positively that it is consistent), and it was designed to avoid the paradoxes that plagued naive set theory.

Russell's type theory was another early contender for a formalism. For whatever reason (I'm not aware of all the details), Zermelo-Fraenkel set theory (later extended with the Axiom of Choice) won the popular mindshare. Personally, I'm more a fan of the Elementary Theory of the Category of Sets (ETCS) [1], which is indeed drawn from category theory. But it's equivalent in deductive power to ZFC, so what you pick mostly only matters if you're doing reverse mathematics. (This is what really answers your question, I think.)

Russell's type theory and modern type theories are distinct (and there is no single "type theory"). I'm led to believe the commonality is primarily the usage of a hierarchy of universes, so that entities in one universe can only refer to entities in a lower universe.

[1] "Rethinking set theory", Tom Leinster: https://arxiv.org/abs/1212.6543


> but my understanding is that this paradox (and others like it) sparked a crisis in mathematics that led to the creation of ZFC and others. ZFC is not known to be inconsistent (due to deep and technical reasons, I can't claim positively that it is consistent), and it was designed to avoid the paradoxes that plagued naive set theory.

That was my understanding as well. Axiomatic Set Theory, with ZFC, doesn't suffer from the same problems as Naive Set Theory, which is why it was developed. Or so I've read.


> And yet, current mathematics relies heavily on ZFC set theory. Why is that? (Is that assumption even correct?)

Set theory is the de facto modern standard because it has straightforward notation and sufficient power to prove the things most mathematicians care about, both algebraically and analytically. Other foundations exist, it's just that most mathematicians work at a level where set theory is more of a communication and notation tool than a foundation for which the nuances matter.

> From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?).

No, naive set theory suffers from Russell's Paradox, but ZFC does not. ZFC was defined in order to avoid Russell's Paradox.

> That's what motivated the formalization of Type Theory and the invention of Type Systems in programming languages.

Russell's type theory, yes, which predates ZFC. But this has nothing to do with programming languages, which weren't a thing when this problem was being considered.

As for why category theory isn't used as a foundation - that's not really what category theory's "charter" is about. As user johncolanduoni explained, category theory is more of a framework for describing things with convenient algebraic abstractions than it is a foundation for mathematics. Category theory as practiced today is mostly done at a higher level than the foundational set theory.


IANAM, but...

1. I thought mathematics was moving more towards a category theory based foundation, as opposed to the set theoretical stuff.

2. Isn't it in part because we have over 120 years of results and proofs that are based on set theory, and people aren't just going to throw that out for something newer? Granted, Category Theory has been around in some form since about the 1940s, but it's still newer than set theory.

3. Russell's Paradox hasn't stopped people from using Set Theory to achieve useful results, and the axiomatization of Set Theory and the advent of ZFC is why (or at least part of why) that was possible, no?


1. It's not. The work on "category-theoretic foundations," say HoTT, is (sociologically speaking) a highly niche topic. Most professional mathematicians who work on the foundations of mathematics do so in the framework of set theory.

2. ZFC is fully adequate for all the mathematics 99% of professional mathematicians do (and probably more than adequate in terms of strength). Also, all the mathematics 99% of mathematicians do is insensitive to foundational issues, so if you swapped ZFC for another axiomatization of similar strength, no one would notice.

3. I don't believe ZFC was the first to avoid the paradox, but yes, it does not suffer from Russell's Paradox.


> So why don't we see a huge "adoption" in math fields?

Because most math fields work at a higher level, and whether they consider set theory or category theory foundational doesn't matter. For instance, the defining property of ordered pairs is: `(a, b) = (c, d) iff a = c and b = d`. {a, {a,b}} is a (set-theoretic) model for ordered pairs, but it's not the only model. As long as a model exists, the question of "which model you're using" isn't relevant.
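The "model vs. defining property" point can be sketched in Python (hedged: frozensets stand in for sets, and this "short pair" encoding only behaves for simple values like numbers):

```python
# {a, {a, b}} is one set-theoretic model of ordered pairs; a user of
# pairs only relies on the defining property, not this encoding.

def pair(a, b):
    # the "short" set-theoretic pair: {a, {a, b}}
    return frozenset([a, frozenset([a, b])])

# Defining property: (a, b) == (c, d) iff a == c and b == d
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)  # order matters, unlike a plain set
```

Any other encoding satisfying the same property (e.g. Python's built-in tuple) would serve equally well, which is why "which model you're using" isn't a relevant question at the working level.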


> And yet, current mathematics relies heavily on ZFC set theory. Why is that? (Is that assumption even correct?)

Not really. Most working mathematics is done without any particular foundational context (in fact, it's probably done in naive set theory). If you asked a mathematician, they'd tell you they were using ZFC, but only because everyone does; in practice they don't know or care what foundations they're working with.

> From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?). That's what motivated the formalization of Type Theory and the invention of Type Systems in programming languages.

Naive set theory suffers from that paradox. ZFC is a minimal "patch" that avoids it (through the rather ugly hack of an axiom schema).

It shouldn't surprise any programmer that most people working in the field would rather use a bodge that lets them use all their existing theorems and avoid having to learn anything new, rather than a new foundational paradigm.

People who care deeply about foundations are working on things like HoTT. But it's just not that important to day-to-day working mathematicians.


ZFC Set Theory avoids Russell's paradox by having a few fundamental axioms that preclude it. It's actually mentioned in your wiki link in the `Set-theoretic responses` section.

ZFC is a bit messy sometimes, but it's actually kind of nice in its simplicity. I am a big fan of TLA+, for example, and it's really kind of beautiful how easily you can define incredibly precise invariants by using the set theory predicate logic.


I once asked a logic professor I'm friends with and he spent fifteen minutes ranting about large ordinals and reverse mathematics and non-absolute logics. Apparently there is some incredibly weird stuff at the fringes of logic that we only really know how to handle in ZFC.

This is purely secondhand, of course, and I forget almost all of the details. Contemporary set theory is scary scary stuff.


I think the current fundamental structures are influenced by the Bourbaki group; however, it has gaps. Category Theory is likely able to fill those gaps, and I guess it will end up as the foundational layer of mathematics, computer science, and pretty much everything else. David Aubin wrote about this in https://press.princeton.edu/books/hardcover/9780691118802/th... (VI.96). Some discussion too here: https://math.stackexchange.com/questions/25761/introduction-....


It may be worth explicitly noting that category theory and set theory are not mutually exclusive. You can use set theory as the foundation, and build category theory on that, and then use category theory to do all that interconnection of fields and whatnot. To me it seems like the most natural way to do it, because set theory is simple and intuitive (kind of), whereas category theory is complicated and extremely abstract.

(Of course there is a lot of stuff in set theory that is counter-intuitive when you get to infinite sets, but I still think that's better than what you have with categories where they're just abstract algebraic objects and there isn't any intuition in the first place.)


Category theory isn't a type theory (in the sense that you can have the category of categories, which correspond to different type theories).

It's true that you can get away from Russell's paradox in category theory, but that's because categories are not sets and have different properties (there's less you can say about them because they're more general).

Type theories, particularly homotopy type theory, are becoming popular among some mathematicians, but not because they escape Russell's paradox (I'm not sure that they can say anything more about sets than set theory), but because they are more useful when constructing automated proof systems.

Edit: also, category theory is used a lot in very cutting edge math, like algebraic topology.


> That's what motivated the formalization of Type Theory and the invention of Type Systems in programming languages.

I don't think that's right. The original invention of type systems in programming had nothing to do with Type Theory. The motivation wasn't from any theory. The point was to tell Fortran whether to treat a value as an integer or a floating-point number, which affected storage format and a bunch of other things. (Before Fortran, there was floating-point arithmetic, but the type was still just a computer word. The programmer had to keep track of what format it was in.)


"We will assume no background knowledge on behalf of the student, starting from scratch on both the programming and mathematics."

This is a fantastic "side effect" of the fact that category theory isn't built on any other mathematical knowledge. You don't even need any arithmetic for that.


At a meta level, Category Theory requires some comfort with abstraction, which really only comes with a mathematical education. So while it may stand apart from much of math, it relies on you having strong mathematical foundations.


As so many undergraduate math textbooks say, "No background is assumed beyond sufficient mathematical maturity."


Tautological sufficiency/necessity relativities for an absolute measure are a pet peeve:

“How much salt should I add?”

“Oh, not too much.”

Practically guarantees a withering glare from me.


That's a really great analogy, actually. You can't tell a brand-new cook "salt to taste," but you can tell a cook with an intermediate level of experience "salt to taste" even if it's a recipe they've never made before. You can't impart the experience in words, pictures, or symbols. You can't add a chapter zero that gets them there. But the right kind of experience will get you there pretty quickly.

I admit it's kind of frustrating that it can't be boiled down to a list of discrete things you need to know, but if I had to explain the difference between me before I had "sufficient mathematical maturity" and me after, I would explain it in terms of habit and confidence and other squishy things that aren't mathematical at all.


Giving a precise value for this doesn't work in any case. "Salt to taste" is inherently qualitative.

Most pizza dough recipes quote around 5g of salt for 250g flour. Personally I prefer double that amount, and this is the case with many recipes.

Perhaps specifically with salt there is an insane amount of paranoia about blood pressure. In many ways it feels as irrational as Korean worries about fan death.

On the other hand, giving a "5g of salt" value is a good starting point, to avoid undersalting due to paranoia or oversalting due to inexperience.


What they really need is someone to tell them to add less than they probably need, then taste, add more, taste, add more, and expect to over-season and under-season some dishes as they get experience. Salt quantities in recipes have to be short, sometimes way short, so you're stuck learning the process with or without the quantities. I've never seen this explained in a cookbook, at least not in a way that made an impression on me.

Edit: A piece of advice that has stuck with me for a long time, long after I forgot where it came from, is it's a mistake to look for the state in between "not salty enough" and "too salty." That state doesn't really exist, especially when you're feeling nervous! Instead you should look for the overlap where you can perceive the dish as alternately "not salty enough" and "too salty," like that ambiguous drawing that your brain can resolve to either an old woman or a young woman[0]. That advice really works for me, but my wife, who seasons like a pro, thinks it's nonsense, which I think illustrates how subjective the process is and how it isn't information you can impart but rather experience you have to guide someone toward.

[0] https://en.wikipedia.org/wiki/My_Wife_and_My_Mother-in-Law#/...


Which is kind of ironic, because students take classes exactly because they feel 'immature' with respect to that subject. Honestly, most of my smoothest educational experiences with hard topics assumed some immaturity on my part, and that that was OK.


People don’t generally take category theory because they don’t really understand how proofs work or how to read definitions. The maturity required is about being able to cope with proving things and following proofs based on definitions which will probably seem somewhat bizarre and unmotivated at first. The immaturity you seem to talk about is people taking category theory because they don’t know category theory, but that’s different and not what is meant by mathematical maturity.

That said, a background in mathematics helps with category theory. Things like group theory, topology (particularly algebraic topology), Galois theory and set theory can be useful in motivating a lot of category theory. I’m yet to see much of a strong motivation from programming (where is there a functor that isn’t an endofunctor?)


When I first went from CT as an implementation detail of typed FP to studying CT on its own, I was very aggravated by this point: "where is there a functor that isn’t an endofunctor?"

Over time it has fallen away in two directions.

In one direction: though you are always stuck in a category with types as objects and functions as arrows, that doesn't stop you from subdividing. This became more intuitive to me as I encountered more and more categories that are basically just Set with extra structure (just as the one we program in is basically Set with bottom). If you start putting extra structure on your arrows (something that carries around a function along with some extra proof-relevant structure, say a proof that the zero element of the domain maps to the zero element of the codomain and that multiplication is preserved; now you have monoid homomorphisms), then you end up, in the general case, with something that looks a lot like a binatural transformation of profunctors, which pretty cleanly encodes "exo"-functors between different subcategories of the category with all types as objects and functions as arrows.

In the other direction is realizing that even imprecisely, there's quantifiable value in observing functors to and from the category of types and functions, say one between whiteboard diagrams and code, or between a specific problem domain and code, etc etc. If you have some other space with composable relationships that you want to preserve when you write code to correspond to it, or you have some other representation you want to produce based off some given codebase while preserving the structure of the code taken as input, you can gain a ton of conceptual leverage out of identifying a functor.


Isn’t an endofunctor a morphism from/to the same category? So a functor would be any morphism from/to different categories, and that wouldn’t be an endofunctor?

Endofunctor: A -> A, for category A

Functor: A -> B, for categories A and B


More or less. "Morphism" is defined by the category it lives in, so a functor is a morphism between categories in the category of (small) categories. (Insert technicalities about size concerns and Russell's paradox.)

In particular, a map between categories that does not preserve composition is not a functor. It is important that F(f;g) = F(f);F(g).
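To make the composition law concrete in a mainstream language (my own illustration, treating Java's Optional as a functor via its map method):

```java
import java.util.Optional;
import java.util.function.Function;

public class FunctorLaw {
    public static void main(String[] args) {
        Function<Integer, Integer> f = x -> x + 1;
        Function<Integer, String> g = x -> "n=" + x;

        Optional<Integer> v = Optional.of(41);

        // Mapping the composite g ∘ f in one step ...
        Optional<String> composed = v.map(f.andThen(g));
        // ... equals mapping f, then mapping g: F(f;g) = F(f);F(g)
        Optional<String> sequenced = v.map(f).map(g);

        System.out.println(composed.equals(sequenced)); // prints true
    }
}
```

A map over Optional that broke this law (say, one that discarded the value on every second call) would not be a functor, no matter how functor-like its signature looked.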


In the typical functional programmer's view of category theory, there is only one category of interest, so all functors are endofunctors. My main issue is that category theory is the study of categories, plural, and so I feel like the language of category theory is being used to only ever talk about a single category.

It feels like doing “group theory” entirely with the symmetric group of the integers. While it’s true that the group is very general and has interesting properties, the focus of group theory isn’t single groups but rather the relationships between groups.

Compare this to something like algebraic topology or Galois theory where you have a Galois correspondence (a functor) between objects you’re interested in and groups.



I think I got more appreciation of abstraction from physics and programming than what mathematical education I had; I'm weak at maths -- experimental physics doesn't need much :-/ -- but I do understand that weakness, and am quite comfortable with abstraction.


This is almost every upper division undergrad math class. It was always fun watching people squirm when they pulled out some useful fact from their past 14 years of math education and then got told they had to prove it before they could use it.


If only it were limited to facts learned from math education. For example, there’s the Jordan curve theorem (https://en.wikipedia.org/wiki/Jordan_curve_theorem), which I guess most four-year olds ‘know to be true’ from their experience with coloring books.


Yeah, but from that same Wikipedia article:

> It is easy to establish this result for polygons, but the problem came in generalizing it to all kinds of badly behaved curves, which include nowhere differentiable curves, such as the Koch snowflake and other fractal curves, or even a Jordan curve of positive area constructed by Osgood (1903).

So to some extent, the reason why such an "obvious" statement requires a complicated proof is because our everyday notions of what a "closed curve" is are much more restricted than what we consider in mathematics. This is kind of common in maths, especially in fields with a lot of visual intuition.


to get super meta, one could say that the opposite is true: arithmetic requires categories


Except this isn’t really true in the common sense meaning (schoolchildren do arithmetic fine without knowing about category theory) or in the formal sense you’re trying to get at (there are formal axiomatic foundations which arithmetic can be based on which do not need category theory. A simple proof is by causality: arithmetic was successfully formalised before category theory was invented)


You should read the famous book by Linderholm, Mathematics Made Difficult


Videos of the lectures from last year: https://www.youtube.com/watch?v=NUBEB9QlNCM



These guys wrote "An Invitation to Applied Category Theory." It's an awesome book, and I'm super excited for these lectures.


For those who might not realize: IAP is the period between semesters at MIT, roughly most of January. So this is a quick, accessible introduction, not a heavy semester-long slog.


In my experience, Monad, Applicative, and Monoid are probably the only ones I'd use in Haskell, and maybe none of them in languages without good inference and general support.

Pretty wild ideas, though. Fair chance they'd be more confusing than using more specifically named instances, but solid ideas where the class instance documents that you're using the pattern, instead of describing the preferred interface.


You've definitely used Functors or Semigroups as well, you just didn't realize it.
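For instance (examples of my own choosing, in Java): string concatenation is an associative operation, i.e. a semigroup, and mapping a function over a list is the List functor acting on arrows, even if nobody calls them that:

```java
import java.util.List;
import java.util.stream.Collectors;

public class EverydayStructures {
    public static void main(String[] args) {
        // String concatenation is associative, so (String, +) is a semigroup.
        String a = "ab", b = "cd", c = "ef";
        System.out.println(((a + b) + c).equals(a + (b + c))); // prints true

        // Mapping over a list is the List functor acting on the function.
        List<Integer> lengths = List.of("foo", "quux").stream()
                .map(String::length)
                .collect(Collectors.toList());
        System.out.println(lengths); // prints [3, 4]
    }
}
```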


Here is my problem with these ideas in programming: if you recognize that some common construct is in fact a semigroup or functor, does knowing this actually buy you anything?

I suppose it might help sometimes when designing an abstraction, to guide you to some nice properties, such as easy composition.


While playing around with a problem at work involving Markov chains and graph connectivity, I found it useful to know that I could write (in Java) a generic method that performed "exponentiation by squaring" on semigroups.

So, in addition to being able to raise numbers to powers I could also use it on matrices whose elements belonged to a semiring.

That is, I could use the same code on a matrix of doubles with "times" and "plus" (for the Markov chains), as well as a matrix of booleans with "and" and "or" (for the graph connectivity).

(Of course, I could have used special purpose libraries, but these were small problems and it was fun. :)
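A minimal sketch of the idea the parent describes, with my own method names (not the commenter's actual code): generic exponentiation by squaring needs nothing from its element type except an associative binary operation, i.e. a semigroup:

```java
import java.util.function.BinaryOperator;

public class SemigroupPower {
    // Combine x with itself n times (n >= 1) under any associative operation.
    // Associativity is exactly what makes the squaring trick valid; the same
    // method works for numbers under *, booleans under &&, matrices, etc.
    static <T> T power(T x, long n, BinaryOperator<T> op) {
        T result = null;
        T base = x;
        while (n > 0) {
            if ((n & 1) == 1) {
                result = (result == null) ? base : op.apply(result, base);
            }
            base = op.apply(base, base);
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // Ordinary numeric exponentiation: 3^10 = 59049
        System.out.println(power(3L, 10, (a, b) -> a * b));
        // Boolean "and" as a semigroup
        System.out.println(power(true, 5, (a, b) -> a && b));
    }
}
```

Passing a matrix type and matrix multiplication as `op` gives the Markov-chain and graph-connectivity uses from the comment above, with no change to `power` itself.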


> if you recognize that some common construct is in fact a semigroup or functor, does knowing this actually buy you anything?

At least in Haskell one thing it means is you can use lots of new helper functions.

> guide you to some nice properties, such as easy composition.

Yep, which gets you code re-use for one.


I think it isn’t often said that one reason a lot of these classes work in Haskell is lazy evaluation (or, arguably, the pure functions allowing the compiler to inline things and skip evaluation).

You can write Monoid and Foldable interfaces in Java, but in Haskell finding the head of a list with a monoid and foldMap is constant time (and the first or last element of a tree is logarithmic), whereas a Java implementation would be linear.


There is sort of a "jump" from those to the ones less oriented around (->) but truer to the math, but I can assure you I have in fact used https://hackage.haskell.org/package/categories (a prime example of the latter sort) in production and it was genuinely useful.


I wish semilattices got more play. They're so ubiquitous when talking about distributed systems. I remember a keynote on eventual consistency in databases that could have been replaced with "make your merge operation the join of a semilattice."
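To illustrate (a minimal sketch of my own, not from the keynote): a grow-only set CRDT uses set union as its merge, and union is exactly the join of the semilattice of sets ordered by inclusion: idempotent, commutative, and associative, so replicas converge no matter what order merges arrive in.

```java
import java.util.HashSet;
import java.util.Set;

public class GrowOnlySet {
    // Merge is set union: idempotent, commutative, associative --
    // the join operation of the semilattice of sets under inclusion.
    static <T> Set<T> merge(Set<T> a, Set<T> b) {
        Set<T> out = new HashSet<>(a);
        out.addAll(b);
        return out;
    }

    public static void main(String[] args) {
        Set<String> replica1 = Set.of("x", "y");
        Set<String> replica2 = Set.of("y", "z");

        // Merge order doesn't matter (commutativity) ...
        System.out.println(merge(replica1, replica2)
                .equals(merge(replica2, replica1))); // prints true
        // ... and re-delivering the same state is harmless (idempotence).
        System.out.println(merge(replica1, replica1)
                .equals(replica1)); // prints true
    }
}
```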


Any resources for semilattices that you do like? I'm also finding them mentioned around CRDT & distributed systems threads.


My all-time favorite is the original LVars paper: https://users.soe.ucsc.edu/~lkuper/papers/lvars-fhpc13.pdf


> Conclusion ... monotonicity serves as the foundation of deterministic parallelism

That's what the recent CALM Theorem [1] paper uses as the first theorem:

> Monotonicity is the key property underlying the need for coordination to establish consistency, as captured in the CALM Theorem:

> THEOREM 1. Consistency As Logical Monotonicity (CALM). A problem has a consistent, coordination-free distributed implementation if and only if it is monotonic.

Unsurprisingly, this paper is reference 32 in CALM. Maybe I should check out some other papers referenced here... Thanks for the recommendation!

[1]: https://cacm.acm.org/magazines/2020/9/246941-keeping-calm/fu...


I read Birkhoff's "Lattice Theory" many years ago, then wandered down some idiosyncratic path, so, no, I'm sorry. I really don't have anything.


For a humorous use case of types, nothing has beat Aphyr's (of Jepsen fame) "Typing the Technical Interview" here: https://aphyr.com/posts/342-typing-the-technical-interview

If you haven't read it, take a minute...


Wow, what a combo! I'd be interested in anything from each of the lecturers, but all 3 at the same course, is amazing.

Category Theory could be the rosetta stone of many things(this is relevant https://arxiv.org/pdf/0903.0340.pdf).


What an odd coincidence that this was posted and on my YT subscribe page: https://www.youtube.com/watch?v=NUBEB9QlNCM - Just posting the video for others to see.


David Spivak and other folks at Azimuth Forum[0] have been great at providing high quality discussions on ideas in this course and others. Many thanks.

[0] https://forum.azimuthproject.org


Good point and strongly agree. John Baez [0] also merits mention as the originator of Azimuth and the creator of the Applied Category Theory course [1] on the back of Fong & Spivak's paper [2].

[0]https://www.azimuthproject.org/azimuth/show/John+Baez [1]https://forum.azimuthproject.org/discussion/1717/welcome-to-... [2]http://math.mit.edu/~dspivak/teaching/sp18/7Sketches.pdf


Is this Spivak related to the author of the famous Calculus book?


David and Michael Spivak are not related


This doesn't work at all with HTTPS Everywhere. I just get a DreamHost "site not found" page.


Will the course be taught live again and if so will those geographically elsewhere be able to attend?


Does it compile to WebAssembly VM? Any compilers on Android or iOS? Will it work with Docker? Is it deployable on AWS or Azure?


The second I saw Haskell I started saying, eternally, "ABORT ABORT!"

You are going to have to revive Jesus to come up with a functional language that simplifies coding over a procedural one, especially over an object-oriented one.



