Starting with PGMs would kill 99.9% of aspiring ML practitioners. Classes related to PGMs at Stanford and MIT are considered some of the most difficult on offer. I'd rather recommend starting with something they're enthusiastic about and, once they're sufficiently advanced, letting them learn (H)PGMs naturally.
Exactly. I'm talking about how to orient yourself. PGMs are hard, sure, but they don't have to be outrageously hard. I would argue that if you understand naive Bayes (and its naivety) and you understand priors, then PGMs are just rules to a game called "make a diagram of your posterior". That's not all there is, obviously, and that's kind of my point: you can do a lot with a little knowledge, and then you can climb that ladder for a long time; the more you learn, the more you can apply. Starting with an ad hoc approach (here are all of these classes in scikit-learn with .fit() methods, let's just memorize their docstrings) isn't "bad" per se, that knowledge is important, but it won't take you very deep, and you won't be able to stray far from those methods without being out of your depth and running into trouble.
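To make that concrete, here's a minimal sketch (the toy data, the Gaussian likelihood choice, and the `naive_bayes_posterior` helper are all mine, just for illustration). The by-hand version spells out the factorization that a naive Bayes diagram encodes, a prior p(y) times per-feature likelihoods p(x_i | y), and the last two lines show the same model through scikit-learn's `.fit()` interface:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Toy data: two classes, two features, each feature conditionally
# independent given the class -- exactly the naive Bayes assumption.
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=2.0, scale=1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def naive_bayes_posterior(x, X, y):
    """Unnormalized log-posterior per class, written out by hand."""
    log_post = []
    for c in np.unique(y):
        Xc = X[y == c]
        log_prior = np.log(len(Xc) / len(X))       # p(y = c)
        mu, var = Xc.mean(axis=0), Xc.var(axis=0)
        # Sum of per-feature Gaussian log-likelihoods: the "naive" part,
        # since the x_i are treated as independent given the class.
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        log_post.append(log_prior + log_lik)
    return np.array(log_post)

x_new = np.array([1.0, 1.0])
print(np.argmax(naive_bayes_posterior(x_new, X, y)))  # by-hand prediction

clf = GaussianNB().fit(X, y)                          # same model via .fit()
print(clf.predict([x_new])[0])
```

The point isn't that you should hand-roll classifiers; it's that once you can read the by-hand version as "prior times likelihoods", the jump to drawing that structure as a graph, which is all a PGM is at first, stops being scary.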