> when it came to working with real networks. Compare
My understanding is that no one knows what that SNARC thing was: he built something on the grant, abandoned it shortly afterwards, and only many years later did he and his fanboys start using it as the foundation of bold claims about his role in the field.
> “Multiple simultaneous optimizers” search for a (local) maximum value of some function E(λ1, …, λn) of several parameters. Each unit Ui independently “jitters” its parameter λi, perhaps randomly, by adding a variation δi(t) to a current mean value μi. The changes in the quantities λi and E are correlated, and the result is used to slowly change μi. The filters are to remove DC components. This technique, a form of coherent detection, usually has an advantage over methods dealing separately and sequentially with each parameter.
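For what it's worth, the scheme the quote describes can be sketched in a few lines: each unit jitters its parameter, the change in E (with the DC component filtered out) is correlated with the jitter, and the mean is nudged accordingly. This is only my toy reconstruction, not Minsky's circuit; the objective E, the parameter names, and all constants are invented for illustration.

```python
import random

def E(lam):
    # Toy objective with a single maximum at (1, -2); stands in for
    # whatever performance measure the optimizing units would observe.
    return -((lam[0] - 1.0) ** 2 + (lam[1] + 2.0) ** 2)

def jitter_optimize(n=2, steps=3000, amp=0.3, rate=0.5, seed=0):
    rng = random.Random(seed)
    mu = [0.0] * n            # current mean parameter values (the μi)
    baseline = E(mu)          # slowly tracked DC level (the "filter")
    for _ in range(steps):
        # Each unit independently jitters its parameter around its mean.
        delta = [rng.uniform(-amp, amp) for _ in range(n)]
        e = E([m + d for m, d in zip(mu, delta)])
        # Remove the DC component, leaving the jitter-induced change in E.
        ac = e - baseline
        baseline += 0.1 * ac  # low-pass update of the baseline
        # Coherent detection: correlate the change in E with each jitter
        # and move each mean slowly up-gradient.
        for i in range(n):
            mu[i] += rate * ac * delta[i]
    return mu

mu = jitter_optimize()  # mu drifts toward the maximizer near (1, -2)
```

The correlation step is, in expectation, a stochastic estimate of the gradient of E, which is why the passage is relevant to the gradient question: the circuit climbs the hill without ever computing a derivative explicitly.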
The link has already been provided above (op. cit.); it is directly connected to the very question of gradients, providing a specific implementation (it even comes with a circuit diagram). As you were claiming a lack of detail (while apparently not honoring the provided citation)…
(The further back you go in the papers, the more specifics you will find.)
That claim was never made, except by you. The claim was that Minsky had practical experience and wrote about his experiences with gradient descent (aka "hill climbing") and the problems of locality in a paper published in Jan. 1961.
On the other hand: who invented "hill climbing"? You've contributed nothing to the question you've posed (which was never mine, nor even an implicit part of any claims made).
Well, who wrote about learning networks before 1952? I'm not aware that this was already mainstream then. (Rosenblatt's first publication on the Perceptron is from 1957.)
It would be nice if you contributed anything to the questions you are posing, like: who invented gradient descent / hill climbing, or who can be credited with it? What substantial work precedes Minsky's writings on their respective subject matter? Why was this already mainstream, or how and where were these experiments already conducted elsewhere (as in "not pioneering")? Where is the prior art to SNARC?
This is ridiculous. Please reread the threads; you'll find the answers.
(I really don't care about the supposedly substantial corpus of 1940s research on reinforcement learning networks you seem to be alluding to, which of course does not exist, since you don't care to share any of your thoughts. This is really just trolling at this point.)
I think you understand perfectly well that we are in disagreement about this; my point of view is that your "answers" are just fantasies about your idol, without grounding in actual evidence.
Minsky is not my idol. It's just part of reality that Minsky's writings exist, that these writings contain certain things, that they were published at certain dates, and that, BTW, Minsky happens to have built the earliest known learning network.