
So you may like this better:

> “Multiple simultaneous optimizers” search for a (local) maximum value of some function E(λ1, …, λn) of several parameters. Each unit Ui independently “jitters” its parameter λi, perhaps randomly, by adding a variation δi(t) to a current mean value μi. The changes in the quantities λi and E are correlated, and the result is used to slowly change μi. The filters are to remove DC components. This technique, a form of coherent detection, usually has an advantage over methods dealing separately and sequentially with each parameter.

(In “Steps”)

:-)
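
(For the curious, here's a minimal numerical sketch of that jitter-and-correlate idea. It's my own illustration, not Minsky's circuit: the objective E, the constants, and all the names are made up.)

    # A minimal sketch (Python/NumPy) of the jitter-and-correlate scheme
    # quoted above -- an illustration, not Minsky's actual circuit. The
    # objective E and all constants here are invented for the demo.
    import numpy as np

    rng = np.random.default_rng(0)

    def E(lam):
        # Toy objective to maximize; stands in for Minsky's E(lambda_1, ..., lambda_n).
        return -np.sum((lam - 1.0) ** 2)

    n = 5
    mu = np.zeros(n)   # current mean values mu_i
    sigma = 0.1        # jitter amplitude of delta_i(t)
    eta = 0.5          # slow adaptation rate for the means

    for t in range(2000):
        delta = sigma * rng.standard_normal(n)  # each unit jitters independently
        # Correlating the common change in E with each unit's own jitter
        # yields, on average, a scaled estimate of dE/d(lambda_i) -- the
        # "coherent detection". Subtracting the baseline E(mu) plays the
        # role of the DC-removing filters.
        corr = (E(mu + delta) - E(mu)) * delta
        mu += eta * corr  # slowly move each mean uphill

    print(mu)  # ends up near the maximizer (all ones) of this toy E

Averaged over many jitters, the correlation term points along the gradient, so each unit climbs the hill without any unit ever computing a derivative explicitly.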




Can you provide a link? And what conclusions did you derive from this text, if your interest is meaningful discussion?


The link has already been provided above (op. cit.); it's directly connected to the very question of gradients, providing a specific implementation (it even comes with a circuit diagram). As you were claiming a lack of detail (but apparently not honoring the provided citation)…

(The earlier you go back in the papers, the more specifics you will find.)


You didn't give me any links.

And what are your conclusions from the citation? Are you claiming again that Minsky invented gradient descent?


For the link and claims, see the very comment you initially replied to.


That claim was answered: Minsky didn't invent gradient descent.


That claim was never made, except by you. The claim was that Minsky had practical experience with gradient descent (aka "hill climbing") and wrote about those experiences and the problems of locality in a paper published in Jan. 1961.

On the other hand: who invented "hill climbing"? You've contributed nothing to the question you've posed (which was never mine, nor even an implicit part of any claims made).


Ok, so Minsky's "pioneering" consists of writing about something invented before him. Anything else? :)


Well, who wrote before 1952 about learning networks? I'm not aware that this was already mainstream then. (Rosenblatt's first publication on the Perceptron is from 1957.)

It would be nice if you contributed anything to the questions you are posing: Who invented gradient descent / hill climbing, or who can be credited with it? What substantial work precedes Minsky's writings on their respective subject matter (substantially)? Why was this already mainstream, or how and where were these experiments already conducted elsewhere (as in "not pioneering")? Where is the prior art to SNARC?


> Well, who wrote before 1952 about learning networks?

The "Steps" paper you referred to is not from 1952.

> Where is the prior art to SNARC?

We don't know what SNARC was, so we can't say whether there was prior art.

Any other fantasies? :-)


This is ridiculous. Please reread the threads; you'll find the answers.

(I really don't care which substantial corpus of 1940s research on reinforcement-learning networks, which of course does not exist, you seem to be alluding to without caring to share any of your thoughts. This is really just trolling at this point.)


> you'll find the answers.

I think you perfectly understand that we are in disagreement about this. My point of view is that your "answers" are just fantasies about your idol, without grounding in actual evidence.

What is your goal in this discussion?


Minsky is not my idol. It's just part of reality that Minsky's writings exist, that these contain certain things, that they were published on certain dates, and that, BTW, Minsky happens to have built the earliest known learning network.



