Not to be a pedant, but I think the DeepMind paper is actually an example of one-shot generalization, not one-shot learning. From the paper:
> Another important consideration is that, while our models can perform one-shot generalization, they do not perform one-shot learning. One-shot learning requires that a model is updated after the presentation of each new input, e.g., like the non-parametric models used by Lake et al. (2015) or Salakhutdinov et al. (2013). Parametric models such as ours require a gradient update of the parameters, which we do not do. Instead, our model performs a type of one-shot inference that during test time can perform inferential tasks on new data points, such as missing data completion, new exemplar generation, or analogical sampling, but does not learn from these points. This distinction between one-shot learning and inference is important and affects how such models can be used.
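To make the distinction concrete, here is a minimal sketch in PyTorch. A toy linear model stands in for the paper's generative model, and the loss and data are purely illustrative: one-shot *inference* conditions on a new point without touching the parameters, while one-shot *learning* takes a gradient step on it.

```python
import torch
import torch.nn as nn

# Toy model standing in for the paper's generative model; the
# architecture, loss, and data here are illustrative, not DeepMind's.
model = nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
new_example = torch.randn(1, 4)  # a single novel data point

# One-shot *inference* (what the paper's model does): condition on the
# new example at test time, but leave the parameters untouched.
with torch.no_grad():
    completion = model(new_example)  # e.g. missing-data completion

# One-shot *learning* (what the paper explicitly does NOT do): take a
# gradient step on the new example, so the parameters themselves change.
loss = nn.functional.mse_loss(model(new_example), new_example)
loss.backward()
optimizer.step()  # parameters are now updated by the single example
```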
Absolutely. One-shot learning is cutting-edge research toward building more human-like AI, but it's still in its early phases. We are trying to make transfer learning, which is proven today, directly usable for people solving real problems. Hopefully we'll be able to do the same with one-shot learning.
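For anyone unfamiliar, the transfer-learning recipe being referred to typically looks like the sketch below: reuse a pretrained backbone and retrain only a small task-specific head. This is a generic illustration, not the commenter's actual pipeline; the ResNet-18 backbone and 10-class head are assumptions.

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (torchvision >= 0.13 weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features so they are reused, not relearned.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classifier with a new head for the target task
# (10 classes here is an arbitrary example). During fine-tuning,
# only this head's parameters receive gradients.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
```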