
As a complete outsider, I don't understand what's special about the "optometrist algorithm." As described in the Nature article it's just hill climbing using humans as the evaluation function.

Isn't it basically the same thing they were already doing but more granular?




Basically nobody was using automated gradient descent, etc., because of the proclivity of these algorithms to get stuck on a boundary. The problem is that the boundaries are not well defined. One example might be a catastrophic instability: if it gets triggered, it has the potential to damage the machine, but the exact parameters at which the instability occurs are not well known. So with this algorithm you mix the best of both worlds: the human can guide away from the areas where we think instabilities are, and the machine can do its optimization thing. It's pretty simple overall but enables a big shift in how experiments are run.

Edit to add: these instabilities often look just like better performance on a shot-to-shot basis, which makes the algorithms especially tricky. Using a human, we could say "this parameter change is just feeding the instability" vs. "oh, this is interesting, go here."
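The loop described in that comment, the machine proposing a nearby setting and a human judging the resulting shot, can be sketched roughly like this. This is a toy illustration, not the published algorithm: the `propose` and `human_prefers` functions below are invented stand-ins for the real machine and the real expert.

```python
import random

random.seed(0)  # for reproducibility of this toy run

def optometrist_step(current, propose, human_prefers):
    """One round of human-guided hill climbing: the machine proposes a
    nearby setting, and the human judges which of the two shots is
    better (or "just as good")."""
    candidate = propose(current)           # machine explores nearby settings
    if human_prefers(candidate, current):  # expert's verdict on the pair
        return candidate                   # move to the new operating point
    return current                         # otherwise stay where we are

# Toy stand-ins; the real system uses plasma machine settings and a
# physicist's judgement, not a scalar score.
def propose(x):
    return [xi + random.uniform(-0.1, 0.1) for xi in x]

def human_prefers(a, b):
    score = lambda v: -sum(vi * vi for vi in v)  # pretend "shot quality"
    return score(a) >= score(b)

setting = [1.0, -0.5]
for _ in range(200):
    setting = optometrist_step(setting, propose, human_prefers)
```

The key point the comment makes is that the human verdict can encode things a scalar score cannot, such as vetoing settings that merely feed an instability.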


I am still very skeptical that a human is really that good at avoiding the problem areas, although they might be marginally better. Plus, they don't seem to claim that anywhere in the paper; instead, they just rated shots as either "better" or "just as good", i.e., a local evaluation, which won't let you avoid such areas. That judgement requires more knowledge than just the conditions in the neighborhood of the current reference.

The only thing I think could lead someone to your conclusion is that the human can judge based on a host of criteria, not just a pre-defined set; maybe that's what you meant. Of course, intuitively, changing your criteria midstream would bias your judgement, I'd think, but that may be the real innovation here: something that is hard to do without a human judge in the mix.


> I am still very skeptical that a human is really that good at avoiding the problem areas

Why? Humans have a much richer modeling apparatus than any computer does right now. We can draw on a very large and yet almost fully tuned to reality set of possible models simultaneously. You can estimate the number of available models as whatever number of neurons you have, in combinatorial. We also have machinery for searching that entire model space simultaneously and testing against a continuous stream of megabytes of data in realtime, in order to find good fits.

Existing AIs wouldn't even know where to start. They can apply infinite models, but they have no grounding in reality and no way to choose among them. An AI doesn't even have an intrinsic sense of space, seeing as how it lacks a body. It's a very fast worker that can get things done when you give it very specific instructions, but it has no real ability to understand what it is doing or why it would want to do something different.


Remember that the human won't only be thinking "better" or "just as good". They almost certainly can't avoid thinking "If I say 'better' here, what direction will that drive the algorithm in?" They're not just learning how to drive the plasma; they're learning how to drive the Optometrist as well.


To be clear: there are no gradients here (right?) This is just 0th order hill-climbing with a human assist.


how does one climb a hill with no gradient? [serious question]


You can climb a hill without knowing the gradient, so long as you can compare two points in terms of height. You randomly move in some direction, then compare the new point to the old point, go to whichever of them is higher, and repeat.

This sounds like what the experimenters are doing. Perhaps the GP was alluding to "first order hill climbing" as evaluating the gradient in every direction and climbing the steepest one, but the "0th order" version is also usually considered hill climbing and is better for some classes of problem.
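The "0th order" version described above can be sketched in a few lines. This is a generic illustration with a made-up objective, not anything taken from the paper:

```python
import random

def hill_climb(f, x, step=0.1, iters=500, seed=0):
    """Zeroth-order hill climbing: no gradient information at all, only
    pairwise comparisons of function values at nearby points."""
    rng = random.Random(seed)
    for _ in range(iters):
        candidate = [xi + rng.uniform(-step, step) for xi in x]
        if f(candidate) > f(x):  # the comparison is the only signal used
            x = candidate        # keep whichever point is "higher"
    return x

# A toy concave "hill" peaked at (2, -1), purely illustrative.
peak = lambda v: -((v[0] - 2) ** 2 + (v[1] + 1) ** 2)
top = hill_climb(peak, [0.0, 0.0])
```

Note that `f` is only ever used inside a comparison, which is why a human saying "better" or "just as good" can play the same role as an explicit objective function.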


That is exactly what they're doing. See the section on Exploratory Technique, second-to-last paragraph. As I said above, the possible innovation here is that they can change midstream the criteria used to decide what counts as a "better shot".


is it picking a new configuration at random, or does it still have to be "close" to the last configuration?


It still has to be close by some metric to be considered hill climbing. The article doesn't make it clear, but I suspect a lot of the insight in the algorithm is how the computer chooses two similar sets of inputs that differ in an "interesting" way.


last clarifications, sorry.

Some manifold has a goodness function defined on it, described by a (totally ordered?) relation provided by the observing scientist.

The goodness function is assumed to be (continuous/differentiable/continuously differentiable?) with respect to some metric, and the computer picks a random coordinate within some small distance of the last coordinate in the metric, and then asks the human to order them?

I don't think this is hill climbing, and my simple reasoning for that is that I don't believe the first assertion. The expert is almost certainly behaving non-deterministically. In fact, I believe that each time the expert is presented with the "same" pair of coordinates, he is more likely to yield a different ordering.

That said, I could be reading this wrong.


The naive way to do calculus: use secants to approximate the tangent. I think it's called finite differences.


With finite differences, you estimate the gradient.
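The secant idea mentioned above looks like this in code, a standard central-difference sketch (the function and point are arbitrary examples, not from the thread):

```python
def finite_difference_gradient(f, x, h=1e-5):
    """Approximate the gradient of f at x with central differences:
    the secant through two nearby points stands in for the tangent."""
    grad = []
    for i in range(len(x)):
        forward = list(x)
        backward = list(x)
        forward[i] += h
        backward[i] -= h
        grad.append((f(forward) - f(backward)) / (2 * h))
    return grad

# f(x, y) = x**2 + 3*y has gradient (2x, 3); at (2, 1) that is (4, 3).
f = lambda v: v[0] ** 2 + 3 * v[1]
g = finite_difference_gradient(f, [2.0, 1.0])
```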



