Hacker News | nicrusso7's comments

yeah the current motor model takes the desired position as input and converts it to torque (applying some 'noise' to mimic the real motors).
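The position-to-torque conversion described above is typically done with a PD controller plus injected noise. A minimal sketch of that idea, where the gains, the function name, and the Gaussian noise model are illustrative assumptions rather than the simulator's actual values:

```python
import random

def position_to_torque(desired_pos, current_pos, current_vel,
                       kp=1.2, kd=0.1, noise_std=0.01):
    """Convert a desired joint position into a torque command (PD sketch)."""
    # proportional term pulls toward the target, derivative term damps velocity
    torque = kp * (desired_pos - current_pos) - kd * current_vel
    # Gaussian noise roughly mimics the imperfections of cheap real servos
    torque += random.gauss(0.0, noise_std)
    return torque
```

With `noise_std=0.0` this reduces to a plain PD law, which is useful for checking the sign and scale of the gains before turning the noise on.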


Maybe the cost of the robot? It wouldn't be cheap, I guess... but yeah, definitely a great application of robotics.


Well done! Btw I do agree with you regarding the knowledge transfer to the real robot platform. The open source design is built to work with very cheap servos, and the alignment isn't easy either.

Like you, I was totally new to RL when I started (almost a year ago) - I'm a DevOps engineer during the day :) The goal of this project for me is learning ML more than building a product - that's why the limited focus on the hw implementation. Anyway, there are examples on the web of how to run the policies on a real-world robot (even using ROS) - maybe in the future I'll dig into this topic as well!


This would require very good image recognition software! Anyway, yeah, I do agree that agricultural robots are quite interesting and profitable, at least in the near future.


I would have thought that level of image recognition was already here, even close to being packaged up in Python packages. You think we're not there yet?

With half decent radios, the brains of the bot could be stashed back at HQ, which would really improve the amount of computation available.


Very honestly, I've only started looking at CV since I integrated the robotic arm into my model (a couple of days ago :)). I'm currently searching for papers/projects on it - my next step is the "open a door" task.


Brains are really not the limiting factor, and yeah, the raw materials for doing this kind of recognition are pretty mature. You'd want some kind of server in any case, but I bet the CV could feasibly be done onboard.

The physical manipulation part would take some work for sure, and the robot itself would be fairly expensive to make, but it's all pretty achievable. You need it to do more than a BobCat while costing less.

It's not "Uber for $sector" but it can be done.


Oh well I can definitely add a vac-env on my todo list :D


thank you very much! any feedback is welcome!

Re the maze, I've got just the terrain so far - I'm working on the gym environment. I was thinking of writing two different versions of it: one in which the robot has a map of the maze and navigates it using a pathfinding algo (probably A*), and another version without any map (I'll probably need a lidar for this).
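For the map-based version, the A* planner mentioned above can be sketched generically on a 4-connected occupancy grid. This is a standard textbook A* with a Manhattan-distance heuristic, not the project's actual code; the grid encoding (0 = free, 1 = wall) is an assumption:

```python
import heapq

def astar(grid, start, goal):
    """A* path search on a 4-connected grid; cells are (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path-so-far)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # goal unreachable
```

The returned path is a list of grid cells that the locomotion policy could then follow waypoint by waypoint.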


Some time ago I tried to use rt-rrt[1]. It should be quite well suited for your setup, or at least better than A*, I think. When I get back to your project I'll try to implement it inside your framework.

[1] https://github.com/ptsneves/rt-rrt
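RT-RRT* adds real-time tree maintenance and rewiring on top of the basic RRT grow-toward-sample loop. As a point of comparison with A*, here is only that shared core loop in 2-D, a hedged sketch: the sampling region, step size, goal bias, and `is_free` collision callback are all illustrative assumptions:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal 2-D RRT sketch: grow a tree from start until near the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # bias sampling toward the goal ~10% of the time
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # take a fixed-size step from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # walk parents back to the root to recover the path
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

Unlike grid A*, this works directly in continuous space, which is part of why sampling-based planners can fit a legged-robot setup better.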


Awesome! Any PR is welcome. In the meantime I'll have a look at your rt-rrt implementation.


Congratulations on your multi-million dollar startup.

Are you hiring?


thanks! Well, good question. I think it's a mix of ML (probably not just simulations but also learning directly from real-world experience) and more classic approaches. There are some very interesting examples on the Google AI blog, like this one (https://ai.googleblog.com/2020/04/exploring-nature-inspired-...)


Thanks!


I've already printed and assembled one; the creator posted 2 videos in his Thingiverse repo: https://www.thingiverse.com/thing:3445283


Well, the very first attempt these guys made (https://arxiv.org/pdf/1804.10332.pdf#subsection.6.1) was to let the model learn from scratch (open loop, feedback with large bounds). An agile galloping gait emerged automatically in my simulation too (https://github.com/nicrusso7/rex-gym#galloping-gait---from-s...), even if the gait was 'noisy'.

