
OK, I admit I hadn't looked up the actual numbers before I posted that comment, but the Snell paper wasn't the last word on that line of work; it was followed by https://arxiv.org/abs/1703.05175 (which doesn't have miniImageNet results). And there may have been further work in that direction that I'm not aware of.

You're right that Reptile is the simplest recent algorithm in the meta-learning literature, but I would argue that's basically my point: they started from somewhere pretty ambitious (let's learn a learner, or at least an SGD update rule) and ended up with learning an initialization that can be updated well with a few steps of SGD.
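For anyone who hasn't seen it, the whole outer loop really is that simple. A minimal NumPy sketch of the Reptile meta-update (task_grad and the toy task are stand-ins of mine, not the paper's code):

    import numpy as np

    def reptile_step(theta, task_grad, inner_lr=0.01, outer_lr=0.1, k=5):
        # task_grad(phi) -> gradient of one sampled task's loss at phi.
        phi = theta.copy()
        for _ in range(k):  # inner loop: ordinary SGD on that task
            phi = phi - inner_lr * task_grad(phi)
        # Outer update: move the initialization toward the adapted weights.
        return theta + outer_lr * (phi - theta)

    # Toy usage: each "task" is a quadratic bowl at a random center,
    # so its gradient at phi is just (phi - center).
    rng = np.random.default_rng(0)
    theta = np.zeros(2)
    for _ in range(1000):
        center = rng.normal(size=2)
        theta = reptile_step(theta, lambda phi: phi - center)

No second derivatives, no learned optimizer, just SGD and an interpolation toward the adapted weights.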

[EDIT]: I also prefer Matching/ProtoNets-style work as being simpler to deploy, since you don't need to retrain to add new classes. Maybe one day meta-learning will be SoTA, but there are a lot of world-class researchers on it, and the approaches keep tending away from actual meta-learning IMO, so my money is on the matching approach. Then again, my money is also on integrating with data stores in general rather than squishing everything into weights, so I'm a bit biased here.
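To make the deploy story concrete: at test time, ProtoNets is just nearest-prototype classification in embedding space, so adding a class means appending one mean vector, no retraining. Rough sketch, assuming you already have a trained embedding network (support_emb etc. are made-up names):

    import numpy as np

    def prototypes(support_emb, support_labels):
        # Class prototype = mean embedding of that class's support set.
        classes = np.unique(support_labels)
        protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in classes])
        return classes, protos

    def classify(query_emb, classes, protos):
        # Nearest prototype under squared Euclidean distance.
        dists = ((protos - query_emb) ** 2).sum(axis=1)
        return classes[np.argmin(dists)]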

I think it's more about learning a variety of tasks. And I like the emphasis on getting at higher-order derivatives with only first-order methods, which, as an abstract idea, has a variety of applications.
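The generic version of that trick is worth knowing on its own: a difference of two first-order gradients buys you a Hessian-vector product without ever forming second derivatives. Toy illustration of mine (not the paper's exact derivation):

    import numpy as np

    def hvp_first_order(grad_f, theta, v, eps=1e-4):
        # Forward-difference approximation of H(theta) @ v
        # using only two gradient evaluations.
        return (grad_f(theta + eps * v) - grad_f(theta)) / eps

    # Sanity check on f(x) = 0.5 * x^T A x, whose Hessian is exactly A.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    grad_f = lambda x: A @ x
    theta, v = np.ones(2), np.array([1.0, -1.0])
    print(hvp_first_order(grad_f, theta, v))  # ~= A @ v = [1.5, -0.5]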
