I don't think so. The neural network in Google+ was trained on labeled images and now finds similar objects in unlabeled images.
The technology discussed in that article is about deducing the existence of a common feature, in this instance a cat, from a large collection of unlabelled images.
It may be roughly the same tech: use the unsupervised approach for all but the last layer, then use traditional backprop to learn the last layer and fine-tune the connections in the lower layers.
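A toy sketch of that recipe (not Google's actual code; data, sizes, and the autoencoder choice are all made up for illustration): pretrain the lower layer on unlabeled data by reconstruction, then train a supervised logistic layer with backprop on a much smaller labeled subset.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, n_hidden, epochs=300, lr=0.5):
    """Unsupervised step: learn encoder weights by reconstructing X."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden)); b = np.zeros(n_hidden)
    V = rng.normal(0, 0.1, (n_hidden, n_in)); c = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        Xhat = H @ V + c              # linear decode
        err = (Xhat - X) / len(X)     # squared-error gradient
        dH = err @ V.T * H * (1 - H)  # backprop through the encoder
        V -= lr * (H.T @ err); c -= lr * err.sum(0)
        W -= lr * (X.T @ dH);  b -= lr * dH.sum(0)
    return W, b

def train_logistic(H, y, epochs=500, lr=0.5):
    """Supervised last layer: plain logistic regression on features H."""
    w = np.zeros(H.shape[1]); b = 0.0
    for _ in range(epochs):
        p = sigmoid(H @ w + b)
        g = (p - y) / len(y)          # cross-entropy gradient
        w -= lr * (H.T @ g); b -= lr * g.sum()
    return w, b

# Lots of "unlabeled" data: two well-separated blobs in 10-D.
X = np.vstack([rng.normal(-2, 1, (200, 10)), rng.normal(2, 1, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

W, b = pretrain_autoencoder(X, n_hidden=5)    # uses all 400, no labels
H = sigmoid(X @ W + b)
idx = rng.choice(400, 40, replace=False)      # labels for only 40 of them
w, bias = train_logistic(H[idx], y[idx])

acc = ((sigmoid(H @ w + bias) > 0.5) == y).mean()
print(f"accuracy: {acc:.2f}")
```

The point of the split is that the expensive generalisation happens without labels; the labeled set only has to pin names onto features the net already learned.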
Mostly unlabelled then, which means you can learn to generalise over a huge number of images but learn labels on a smaller set.
Yep. I don't know if that's what is actually being used here, but that is pretty much how they did it with the same system:
"We applied the feature learning method to the task of recognizing objects in the ImageNet dataset (Deng et al., 2009). After unsupervised training on YouTube and ImageNet images, we added one-versus-all logistic classifiers on top of the highest layer. We first trained the logistic classifiers and then fine-tuned the network. Regularization was not employed in the logistic classifiers. The entire training was carried out on 2,000 machines for one week."[1]
Basically you learn features from unlabeled data, then use labeled data to identify which features your trained net is recognizing. When you run over G+ images, you then only tag with features you're sure of past some certainty threshold.
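The thresholding step is trivial but worth making concrete (a sketch with made-up labels and scores, not the real pipeline): given per-class probabilities from the one-vs-all classifiers, only emit tags that clear a cutoff.

```python
import numpy as np

def confident_tags(probs, labels, threshold=0.9):
    """Keep only labels whose predicted probability meets the threshold."""
    return [lab for p, lab in zip(probs, labels) if p >= threshold]

labels = ["cat", "dog", "car"]
probs = np.array([0.97, 0.55, 0.02])   # one-vs-all scores for one image
print(confident_tags(probs, labels))   # only "cat" clears 0.9
```

A high threshold trades recall for precision, which is what you want for auto-tagging: a missing tag is invisible, a wrong one is embarrassing.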
http://www.nytimes.com/2012/06/26/technology/in-a-big-networ...