You can say RL is not hard once you've managed to teach every aspect of a SOTA RL algorithm like distributional RL (https://arxiv.org/abs/1707.06887) to your class, so that they can answer any question about it, not just implement it. Good luck teaching metric spaces to code monkeys.
You guys are doing good work teaching TensorFlow and the algorithms/models researchers come up with, but you're slapping those same researchers in the face by disrespecting what they're working on now. Some humility would be wise.
Not sure why the exact details of state-of-the-art research are relevant here. Obviously that definition of RL is dumbing it down, as I'm sure Rachel knows, but the point is simple: teaching a computer to do something specific that a human can do, without explicitly telling it whether each action is good or bad, but rather having it learn on its own from a reward signal.
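That "learn on its own from a reward signal" idea can be sketched in a few lines. Below is a minimal tabular Q-learning example on a toy 5-state corridor; the environment, hyperparameters, and reward scheme are all invented for illustration (none of this comes from the thread or from the distributional RL paper linked above). The agent is never told which action is correct; it only receives a reward of 1 on reaching the goal and works out a policy by trial and error.

```python
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Deterministic toy environment: returns (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    random.seed(seed)
    # Q[s][a] estimates the discounted return of taking action a in state s
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max(range(2), key=lambda i: Q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # Q-learning update: nudge Q(s,a) toward reward + discounted best future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# greedy policy per non-terminal state (1 = move right)
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

No labeled examples of "right" moves anywhere: the learned greedy policy heads toward the goal purely because that's what maximized reward during exploration. That's the dumbed-down definition in action; the metric-space machinery in the paper above is about learning the full *distribution* of that return rather than its expectation.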
The latest research in RL isn't getting us that much closer to AGI. We can't plop a robot into the real world and tell it to use RL to learn everything.
Before 2012, you also couldn't train a system to classify images into 1000 classes. Just because RL isn't there yet doesn't mean it's not worth doing.
The reason we're discussing the hardness of RL is fast.ai's narrative of "you don't need math for AI" and "AI is easy". Sure, implementing and applying AI is easy, and for that you just need to learn TensorFlow, but doing even a modicum of novel research in RL requires a tremendous background in all kinds of math. I appreciate what fast.ai is doing to democratize as much of AI as possible, but that doesn't need to be at odds with other people prioritizing RL research.
fast.ai's narrative is "AI is easy for what you probably want to use it for". There are a ton of awesome applications that are enabled by the level of AI taught in the course. However, Rachel's article is about how AGI is actually really, really hard. So much so that we have no idea how to get there and can't predict when it will happen. So instead of fearmongering about AI, we should instead be encouraging everyone to do awesome new stuff with AI.