halflings on Oct 30, 2017 | on: The quest to evolve neural networks through evolut...
Gradient descent can also jump out of a local minimum if the learning rate is large enough, so gradient descent and evolutionary methods are on equal footing in that respect.
But it does make sense that a random walk would be more efficient in very high-dimensional problems!
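For what it's worth, here's a rough 1-D sketch of the first point (the function, starting point, and learning rates are all made up for illustration, not taken from the article): with a small learning rate, gradient descent settles into the nearest shallow minimum, while a larger learning rate (decayed over time so the run eventually settles) overshoots the barrier on its first steps and ends up in the deeper basin.

    import math

    def f(x):
        # Toy multimodal objective: shallow local minimum near x ~ 1.54,
        # deeper global minimum near x ~ -0.51.
        return math.sin(3 * x) + 0.1 * x ** 2

    def grad(x):
        return 3 * math.cos(3 * x) + 0.2 * x

    def gradient_descent(x, lr, decay=1.0, steps=60):
        for _ in range(steps):
            x -= lr * grad(x)
            lr *= decay  # optional decay so the large-lr run can still converge
        return x

    x0 = 2.0
    x_small = gradient_descent(x0, lr=0.02)            # stays in the nearby shallow basin
    x_large = gradient_descent(x0, lr=0.5, decay=0.9)  # first big steps hop over the barrier

    print("small lr: x=%.3f f=%.3f" % (x_small, f(x_small)))  # ~  1.54, f ~ -0.76
    print("large lr: x=%.3f f=%.3f" % (x_large, f(x_large)))  # ~ -0.51, f ~ -0.97

The decay is doing real work here: a fixed step size large enough to escape the shallow basin is usually too large to settle anywhere afterwards.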