
Integrating AI with a subsumption architecture sounds interesting, but it could still fail: you would be abstracting away a lot of detail (and a lot of potential actions) from the training process, so the resulting policy might underperform. Training end-to-end on everything might actually yield a better result.

Does anyone know how iRobot (Brooks' company) does it?




The AI can still drive the car normally; the only thing that changes is that it's prevented from running into another car at full speed. That's no worse than having human drivers override the system and force a disengagement.


Yes, as long as you add a huge penalty to the training objective for triggering any underlying safety system, I see no problem with that. (Otherwise you could end up with a completely reckless car that believes everything is safe, because the safety layer means it never actually crashes.)
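
Something like this reward shaping, as a rough sketch (the env interface, penalty value, and "safety_triggered" flag are made-up stand-ins, not anything from a real stack):

    # Penalize the learner whenever the low-level safety system
    # had to override it, so "the guard saved me" never reads as safe.
    SAFETY_PENALTY = 100.0

    def shaped_reward(env_reward, info):
        if info.get("safety_triggered", False):
            return env_reward - SAFETY_PENALTY
        return env_reward

    # In a gym-style training loop:
    #   obs, env_reward, done, info = env.step(action)
    #   reward = shaped_reward(env_reward, info)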


I believe it's subsumption-based with a random walk added on top: a bumper hit means turn around, and a dangling wheel means don't go any farther. I think the more recent models have more intelligence added on top, but the lower-level systems still act as safeguards. Of course it doesn't run into the problems you mentioned, because it doesn't do any learning (at least the earlier model I had didn't).
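
Roughly this kind of layering, as a toy sketch (sensor and action names are made up, not iRobot's actual API):

    import random

    # Prioritized subsumption layers: lower-level safety behaviors
    # subsume (override) the random walk whenever they fire.
    def control_step(sensors):
        if sensors["wheel_drop"]:    # wheel dangling over an edge
            return "stop"
        if sensors["bumper_hit"]:    # bumped into an obstacle
            return "turn_around"
        # Default layer: random walk when no safety layer fires.
        return random.choice(["forward", "turn_left", "turn_right"])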


iRobot builds more than just the Roomba; I was wondering about their more advanced military robots.



