Here's my interpretation of the result (posted in the hope that it will aid others):
The authors are interested in solving motion planning problems with both fixed and dynamic obstacles. They do this using a combination of offline pre-processing and online search.
During pre-processing a general-purpose PRM (read: state-space graph) is constructed using only information about the movement capabilities of the robot and the location of fixed obstacles in the robot's environment.
At run-time the location of dynamic obstacles is detected and all edges from the PRM which would result in a collision with these dynamic obstacles are pruned away. The remaining problem is easy: just find a shortest path in the remaining graph, from the start location to the goal position.
Anyway, it's this online collision-checking operation which they implement and parallelise in custom hardware.
Neat.
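The two-phase scheme above can be sketched in a few lines of Python. This is a toy illustration under loose assumptions (made-up node/edge data, a crude endpoint-only collision test), not the paper's actual method, which checks swept robot volumes against obstacles in hardware; the point is just the structure: prune colliding PRM edges, then run a shortest-path search on what survives.

```python
import heapq

# Toy 2D PRM, built "offline": nodes are points, edges have precomputed costs.
# These names and values are hypothetical, purely for illustration.
nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (2, 1)}
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.5, (2, 3): 1.0, (1, 3): 1.5}

def edge_hits_obstacle(a, b, obstacle, r=0.5):
    # Crude stand-in check: does either endpoint fall within radius r of the
    # obstacle? (A real planner tests the robot's swept volume along the edge.)
    ox, oy = obstacle
    return any((x - ox) ** 2 + (y - oy) ** 2 <= r * r for x, y in (a, b))

def online_plan(start, goal, obstacles):
    # 1. Prune PRM edges that collide with any dynamic obstacle.
    #    (This per-edge test is the part the custom hardware parallelises.)
    live = {e: w for e, w in edges.items()
            if not any(edge_hits_obstacle(nodes[e[0]], nodes[e[1]], o)
                       for o in obstacles)}
    # Build an adjacency map; PRM edges are undirected.
    adj = {}
    for (u, v), w in live.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    # 2. Dijkstra shortest path on the surviving graph.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None  # goal unreachable after pruning

# An obstacle near node 1 kills every edge through it, so the planner
# routes around via node 2: [0, 2, 3].
print(online_plan(0, 3, obstacles=[(1, 0)]))
```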
I wonder whether, in their experiments with software-only planners, they also made the distinction between offline pre-processing and online search. The paper doesn't seem to say. I would hope the comparison is apples-to-apples.
I think we'll look back at movies like "The Terminator" and its slow lurching robots with a wry sense of humour. Instead the future is going to give us robots that can move so fast that they're just visual blurs.
As part of Mr. Lee's good neighbor policy, all Rat Things are programmed never to break the sound barrier in a populated area. But Fido's in too much of a hurry to worry about the good neighbor policy. Jack the sound barrier. Bring the noise.
This is incorrect. There is usually a huge delay before any motion happens, and the speed of that motion is independent of the planning speed. If you have to replan during a move then yes, the two can become intermingled, but if you watch most ROS-based planning today, what you see is a good 30 seconds of compute before the robot moves at all. Sometimes it can be multiple minutes!
It's very easy to go from FPGA "source code" to a foundry and get a chip implementing exactly the same design, and at a cheap price if your volume is large enough.
I said the same thing a year ago, when I had FPGA-synthesized Verilog and decided to go get a real chip made. Closing timing and optimizing PPA (power, performance, area) for an ASIC versus an FPGA is a royal PITA. And that's just the front-end design work... a core on an FPGA gives you no real idea about placement or any of the other hurdles of the back-end flow of a real ASIC!
We ended up redoing everything (which overall was a good thing, since redesigning from scratch let us make much better decisions from the beginning), but that's show biz.
This used to be Altera's "HardCopy" advantage: an easy gate-array version of their FPGAs could (in theory) be made because 1. the gate array uses the same gates as the FPGA, only the interconnect is customized, and 2. you could use PrimeTime-compatible timing constraints for the FPGA design from the start.
While obviously better than just the FPGA, it is still very inefficient compared to a real ASIC. Altera and Xilinx both stopped offering their programs in this area because it is not really cost-effective to share the same transistor-level mask set and then have to buy all-new masks on top once you get to sub-28nm processes.
As for PrimeTime: we use Synopsys Synplify Pro for FPGA synthesis, and while it does a better job than Altera's/Xilinx's tools, it is nowhere near as good as (and works very differently from) physically-aware synthesis from Design Compiler or RC/Genus.
PrimeTime is just the timing checker; it's where the syntax for SDC files originated, I think.
We found that Synplify Pro is not quite as good as the vendor tools (XST/Quartus) for FPGA synthesis, but I've not compared them recently.
Actually, there used to be a version of DC for FPGAs, but it was not good at all. I think it was less willing to duplicate logic or flops than the FPGA-specific tools.
You can do this on both CPU and GPU, but time and power consumption are each an order of magnitude worse than with dedicated hardware. The robotic arms are also used untethered in factories, where having a computer near them is inconvenient.
The article describes results for using a Xeon CPU. The linked paper cites other papers where a GPU was used.
This is a much easier problem to solve. There are certainly different ways in which objects can collide though, so that would make the problem more interesting and complex.