
The analytical solution wouldn't involve any training: you'd solve a system of equations in which the position of the finger equals the goal position, subject to the constraints of the equations that describe how the arm moves.

The advantage of the RL approach is that it doesn't need to know how the arm moves, but it does require some training.
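For concreteness, here is a minimal sketch of what that analytical solution looks like for a 2-link planar arm. The link lengths and target coordinates are illustrative assumptions, not from the discussion above; a real arm would have more joints and more constraints.

    # Hypothetical sketch: closed-form inverse kinematics for a 2-link planar arm.
    # Link lengths l1, l2 and the target (x, y) are assumed parameters.
    import math

    def two_link_ik(x, y, l1, l2):
        """Return joint angles (theta1, theta2) placing the fingertip at (x, y)."""
        # Law of cosines gives the elbow angle directly.
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        if abs(c2) > 1:
            raise ValueError("target is out of reach")
        theta2 = math.acos(c2)  # elbow-down solution; -theta2 is elbow-up
        # Shoulder angle: direction to the target minus the offset from the elbow.
        theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                               l1 + l2 * math.cos(theta2))
        return theta1, theta2

    print(two_link_ik(1.0, 1.0, 1.0, 1.0))  # e.g. reach for the point (1, 1)

With more joints than task dimensions the system is underdetermined and there is no single closed-form answer, which is part of why numerical or learned approaches get used at all.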




Sorry, I meant compare the RL version as it trains to the analytical version.

It's certainly neat that inverse kinematics can be learned from zero knowledge, but I would have a hard time trusting it to operate a real arm in an industrial setting.


You'd be entirely right not to trust it as-is in an industrial setting. There's been some research around safe exploration that adds extra terms to the reward function to do things like penalize flailing around, but I haven't experimented with those techniques myself.
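For illustration, a minimal sketch of that kind of reward shaping might look like the following: the usual distance-to-goal reward gets an extra penalty on joint velocities to discourage flailing. The function name and the penalty weight are assumptions for the example, not anything from the comment above.

    # Hypothetical sketch of reward shaping with a penalty on violent motion.
    # `smoothness_penalty` is an assumed tuning knob, not from the discussion.
    import numpy as np

    def shaped_reward(fingertip_pos, goal_pos, joint_velocities,
                      smoothness_penalty=0.01):
        distance = np.linalg.norm(np.asarray(fingertip_pos) - np.asarray(goal_pos))
        flailing = np.sum(np.square(joint_velocities))
        return -distance - smoothness_penalty * flailing

    # Example: close to the goal but moving violently still scores poorly.
    print(shaped_reward([0.9, 1.0], [1.0, 1.0], [5.0, -4.0]))

The penalty weight trades off task progress against smooth motion, which is why this is usually treated as a tuning problem rather than a safety guarantee.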



