
After learning PGMs, I find that I've almost completely eschewed first-order logic in my own everyday reasoning. Arguments built on logical formulations require that the propositions not be leaky abstractions, and in most problem domains (i.e., not physics) there are so many exceptions that I find very few cases where I can rely on first-order logic. The softness of PGMs, and ideas like "explaining away" [1], come in quite handy. And after learning some of Pearl's (and others') formulation of causality as graphical models, I understand much better why counterfactual reasoning is so error-prone.
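
To make "explaining away" concrete, here's a minimal Python sketch with made-up numbers for the classic burglary/earthquake/alarm network: seeing the alarm raises my belief in a burglary, but additionally learning there was an earthquake pulls that belief back down, because the quake already accounts for the alarm.

  from itertools import product

  # Made-up numbers for the burglary/earthquake/alarm network.
  P_B, P_E = 0.01, 0.02                                          # priors
  P_A = {(0, 0): 0.001, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.95}  # P(alarm=1 | b, e)

  def joint(b, e, a):
      pb = P_B if b else 1 - P_B
      pe = P_E if e else 1 - P_E
      pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
      return pb * pe * pa

  def posterior(query, evidence):
      # P(query | evidence) by brute-force enumeration of the joint.
      worlds = [dict(zip("bea", w)) for w in product((0, 1), repeat=3)]
      match = [w for w in worlds if all(w[k] == v for k, v in evidence.items())]
      den = sum(joint(w["b"], w["e"], w["a"]) for w in match)
      num = sum(joint(w["b"], w["e"], w["a"]) for w in match
                if all(w[k] == v for k, v in query.items()))
      return num / den

  print(posterior({"b": 1}, {"a": 1}))          # ~0.57: alarm suggests burglary
  print(posterior({"b": 1}, {"a": 1, "e": 1}))  # ~0.03: the quake explains it away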

Further, PGMs have the advantage over deep networks in that they are highly explainable, and you can go back and look at the chain of reasoning. For some problem domains, this part is more important than prediction accuracy.

[1] http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html#explainaway




The power of logic is that a few well-chosen, domain-specific clauses can reduce the problem's dimensionality dramatically.
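
A toy illustration of that pruning (the predicates are hypothetical; the numbers just show the multiplicative effect of each clause):

  from itertools import product

  states = list(product((0, 1), repeat=10))     # 1024 raw configurations
  # Domain clauses: gripper_open -> !holding (x0 -> !x1); moving -> !docked (x2 -> !x3)
  feasible = [s for s in states
              if not (s[0] and s[1]) and not (s[2] and s[3])]
  print(len(states), len(feasible))             # 1024 -> 576; each clause cuts 25%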

If you are building a robot, even if the mechanics are not really Newtonian, modeling the system mechanically can get a model much closer to the underlying manifold, reduce the training set size, and improve generalizability. So I don't think the old way of doing things should be thrown out: it got pretty near the right answers, and we should use newer methods just to fill in the gap between theory and practice.

E.g., pre-train a DBN on data generated from an analytical model and later fine-tune it on real data.
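
A hedged sketch of that idea in Python, using a plain scikit-learn MLP as a stand-in for the DBN and an ideal pendulum as the analytical model (the model and all numbers are illustrative assumptions, not a recipe):

  import numpy as np
  from sklearn.neural_network import MLPRegressor

  rng = np.random.default_rng(0)

  # "Analytical model": ideal pendulum period T = 2*pi*sqrt(L/g).
  L_syn = rng.uniform(0.1, 2.0, size=(5000, 1))
  T_syn = 2 * np.pi * np.sqrt(L_syn / 9.81)

  # Pre-train on cheap synthetic data from the physics model.
  net = MLPRegressor(hidden_layer_sizes=(32, 32), warm_start=True, max_iter=500)
  net.fit(L_syn, T_syn.ravel())

  # A handful of (hypothetical) real measurements: friction, large swings, noise.
  L_real = rng.uniform(0.1, 2.0, size=(50, 1))
  T_real = 2 * np.pi * np.sqrt(L_real / 9.81) * 1.03 + rng.normal(0, 0.01, (50, 1))

  # warm_start=True makes this second fit continue from the pre-trained weights.
  net.set_params(max_iter=200)
  net.fit(L_real, T_real.ravel())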


Satisfying a complicated constraint is not much easier than sampling from a "thin" manifold.
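
For instance (illustrative numbers), rejection sampling from an eps-thick spherical shell accepts at a rate proportional to eps, degrading just the way a tightening constraint does:

  import numpy as np

  rng = np.random.default_rng(0)
  x = rng.uniform(-1, 1, size=(100_000, 3))
  r = np.linalg.norm(x, axis=1)
  for eps in (0.1, 0.01, 0.001):                  # shell "thickness"
      print(eps, np.mean(np.abs(r - 0.8) < eps))  # acceptance rate shrinks with eps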


Very curious to know:

  1. Where you learnt PGMs
  2. How you made it part of your 'personal everyday' toolkit
Always interested in improving my thought processes...


I learned mostly by doing, through undergraduate research, and then in graduate school. This was far more effective than lectures or books, though those are very helpful for getting started. I had one class that covered an overview of lots of the technology, and was lucky enough to have access to preprints of Koller & Friedman's book as a reference to fill in any gaps. I also read Judea Pearl's book on Bayesian Networks fairly early on.

As far as everyday reasoning, it made me somewhat more skeptical of long chains of "A --> B, !B, therefore !A" reasoning. It's easy enough to model this type of logic as a special case of PGMs. And the causal stuff is extremely useful for making me skeptical of arguments of the sort "if we did X, then Y would happen," and for thinking about how and when correlation implies causation. I don't have any pat examples, though; it's just something that infuses my thinking, much as learning about biological evolution did.
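
As a two-node sketch with made-up numbers: setting P(B=1|A=1) = 1 recovers modus tollens exactly, while softening it to 0.95 turns "!B, therefore !A" into "!B makes A unlikely":

  def p_a_given_not_b(p_a=0.5, q=1.0, leak=0.1):
      # q = P(B=1 | A=1); leak = P(B=1 | A=0), so B can occur without A
      num = p_a * (1 - q)                 # worlds with A=1, B=0
      den = num + (1 - p_a) * (1 - leak)  # all worlds with B=0
      return num / den

  print(p_a_given_not_b(q=1.0))   # 0.0  : hard logic, !B forces !A
  print(p_a_given_not_b(q=0.95))  # ~0.05: soft rule, !B merely discounts A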


I don't know how he learned, but I studied it through Coursera's very demanding (and worth every second) course:

https://www.coursera.org/course/pgm


I read somewhere that this is one of the hardest Coursera courses.


It's definitely the hardest one I've taken there. Most of the difficulty comes from the density of the lectures. She moves fast and takes it for granted that you're piecing everything together as you go. You're probably not, but at least you can go back and watch it again if necessary!

Hinton's Neural Network class was very challenging for me too, mostly because many of the concepts were unfamiliar to me. But again, I could re-watch whatever I needed to in order to get it.


Indeed, it is demanding, but fascinating and very well taught.

Too bad they haven't offered it since 2013. I didn't finish it back then for personal reasons. :/



15-20 hours a week is pretty demanding!



