Thousand-robot swarm self-assembles into arbitrary shapes (robohub.org)
108 points by hallieatrobohub on Aug 14, 2014 | 47 comments



That locomotion system is really interesting: using three rigid legs and vibrating them individually to achieve directed motion. Does anyone have more information on how that works?

Update: I found a paper that describes something similar. The approach is vastly improved by a vibrating table. Does anyone with access to Science know if Harvard used one? http://wiki.polymtl.ca/nano/images/a/ae/C-2007-NW-AIM-ANguye...


I wrote about these (actually, a lot of the prior art) on Hizook: http://www.hizook.com/blog/2011/09/08/infrared-remote-contro...

They're little more than a vibrobot (i.e. a Hexbug Nano) with dual vibrating motors that let them preferentially steer and move. It's easy and elegant. They make for a great teaching toy!
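For the curious, the control idea is roughly: two off-center vibration motors, and which one you run determines whether the bot pivots or creeps forward. A minimal sketch of that kind of controller in Python, with a hypothetical set_motor() duty-cycle call standing in for whatever the real firmware exposes (which motor maps to which turn direction is a guess):

    # Sketch of differential vibration steering (hypothetical API,
    # not the actual Kilobot firmware calls).
    LEFT, RIGHT = 0, 1

    def set_motor(motor, duty):
        """Placeholder for a PWM call driving one off-center vibration motor."""
        print(f"motor {motor} -> duty {duty}")

    def drive(direction):
        """Vibrate one motor to pivot, both to move roughly straight."""
        if direction == "forward":
            set_motor(LEFT, 70)   # both motors on: slip-stick motion averages
            set_motor(RIGHT, 70)  # out to a mostly-forward creep
        elif direction == "left":
            set_motor(LEFT, 0)
            set_motor(RIGHT, 70)  # one motor only: the bot pivots about a leg
        elif direction == "right":
            set_motor(LEFT, 70)
            set_motor(RIGHT, 0)
        else:                     # "stop"
            set_motor(LEFT, 0)
            set_motor(RIGHT, 0)

    drive("left")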


Oh, so it's like a BEAM bot? Only this seems decidedly not analog, which, as I remember it, BEAM was supposed to be. That reminds me to go get a soldering iron; I've always wanted to build a solar roller.


They seem very slow, given that the videos are 40x. That's not frustrating for students?


This work comes out of Harvard's Wyss Institute for Biologically Inspired Engineering, which was not on my radar before today. Apparently just last week they released some even more impressive work (in the sense that I would not have thought it possible) on a self-folding mobile robot: http://wyss.harvard.edu/viewpressrelease/162/robot-folds-its...

The video of the robot self-folding (and walking away!) at about 1 minute in blew me away, highly recommended to anyone who missed this story last week.


If you want to keep up with robotics a bit better, I believe the folding robot was spammed to /r/robotics last week or the week before. I'm not trying to be critical or anything, just letting you know that that link was quite popular on a well-known robotics forum ;)


Next step would be to optimize the placement algorithm, I think. The 'K' example is the most obvious: you could have robots start creating the lower right quadrant and correct their position as the rest of the group gathers.


There is something very powerful about a robot whose behavior looks animal, at least from the perspective of another animal. It feels like proof we're going in the right direction.

Another really good example is a video of one of those Boston Dynamics animalbots losing its balance on ice and then regaining its footing. It's hard not to anthropomorphize and ascribe an emotion. Even worse when it's deliberately kicked off balance by a human. Bad person! Poor robot.

These things move like organic life.


I'm talking through my hat here, but I believe many of those behaviors are inspired by the animal kingdom, which could explain why the result looks animal.


That is pretty cool; it sort of combines two fun disciplines, robots and cellular automata. The art installation here is swarm robots that use pre-programmed neighbor-visibility setups, but I'm going to have to think longer about ways you could do this with minimal localization information.


I notice the video says "The algorithm for self-assembly allows for the provably correct formation of any simply-connected shape". I wonder what goes wrong if it's not simply-connected. Or is it just that they can't prove that it works in this case?


>Charging was initiated by sandwiching all the robots between two conductive surfaces.

Sideways thought: I see a completely automatic Prius (Volt, etc.) plug-in just parking between two flexible plates sticking out from the parking wall.

Anyway, feels like Episode 1 is upon us.


"Each robot has simple capabilities, and is susceptible to many errors. To compensate for this, they must work together."

Just like people.


That's really pretty impressive work. Will be interesting to see the fruits of their labours at smaller (and larger) scales.


I kept rooting for a flood fill algorithm.
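For reference, the classic grid version I had in mind; a toy sketch, nothing to do with the paper's actual algorithm:

    # Toy flood fill over a boolean shape mask (4-connected).
    from collections import deque

    def flood_fill(shape, start):
        """Return all cells of `shape` reachable from `start`."""
        rows, cols = len(shape), len(shape[0])
        filled, frontier = set(), deque([start])
        while frontier:
            r, c = frontier.popleft()
            if (r, c) in filled or not (0 <= r < rows and 0 <= c < cols):
                continue
            if not shape[r][c]:
                continue
            filled.add((r, c))
            frontier.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return filled

    K = [[1, 0, 1],
         [1, 1, 0],
         [1, 0, 1]]
    print(sorted(flood_fill(K, (0, 0))))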


Minimalistic software is one thing, but a minimalistic hardware system like this is quite a feat. Hats off!


Extremely impressive. Going to be wild to see where this area of research goes in the next 1-3 years.


Wired has a good write-up, too:

http://www.wired.com/2014/08/largest-robot-swarm-ever/

The quote at the end, about needing a larger table to handle a bigger swarm, made me think of the movie Jaws: "We're going to need a bigger boat."


If we can get these babies down to nanoscale size, will we have invented Transformium?


I think people who are unimpressed are underestimating the potential power of this kind of work... this could be the very beginning of a real-life holodeck, where trillions of nanobots self-assemble.


Remember Replicators from Stargate? Be ready.


> and maybe even art

hahahahahaha.... oh scientists....

http://www.youtube.com/watch?v=lZ6pehGKdW4


I feel like this research garnered attention only because (1) it uses little robots and (2) it came from Harvard. A computer simulation could have demonstrated the effectiveness of the underlying algorithm just as well. I also suspect if this same project had been done at a state school far fewer people would notice.


Using the robots is a big step forward. Just because you can do something in a computer doesn't mean you can do it the same way in real life. The networking problems and the mechanical errors are new issues that will need to be tackled when we start dealing with even larger physical bot swarms... hopefully ones with more realizable use cases.


Well put--there's a world of difference between yet another cutesy software sim of robots (of which I've been guilty!) and an actual hardware implementation. If you haven't actually dealt with things like shitty odometry, derpy wifi signals, packet corruption, motor driver noise screwing up logic, or any of a dozen other things, you probably don't appreciate just how tricky this is to get right.

If you'd like to see more about a lab doing real swarm stuff on real (buggy) hardware, check out the Multi-Robot Systems Lab at Rice:

http://mrsl.rice.edu/


Could you achieve some level of realism by introducing errors randomly at various inputs and outputs? It likely wouldn't be as good as a hardware PoC, but I would expect it to be good enough.

Is this something used in simulations? I would think so.
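Concretely, I'm imagining something like wrapping every simulated sensor read and message send in a noise/dropout model; a rough sketch with made-up parameters:

    # Sketch of randomly corrupting simulated I/O (illustrative parameters only).
    import random

    MESSAGE_DROP_RATE = 0.15   # assumed fraction of lost messages
    ODOMETRY_NOISE_STD = 0.05  # assumed std-dev of odometry error (metres)

    def noisy_odometry(true_distance):
        """Gaussian noise on a simulated odometry reading."""
        return true_distance + random.gauss(0.0, ODOMETRY_NOISE_STD)

    def maybe_deliver(message, inbox):
        """Drop a fraction of messages to mimic flaky short-range comms."""
        if random.random() > MESSAGE_DROP_RATE:
            inbox.append(message)

    inbox = []
    for i in range(10):
        maybe_deliver({"id": i, "dist": noisy_odometry(1.0)}, inbox)
    print(f"delivered {len(inbox)}/10 messages")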


So, I've been meaning to write a new sim for in-browser swarm manipulation and programming, taking into account things like message loss and noise in odometry readings, right? Seems quite reasonable on the face of it.

The problem is that you get genuinely weird stuff, like IR comms just not working inside a torus of, say, 18 inches, but working fine farther out or closer in. Or issues where, when both motors kick in at the same time, maybe a pin gets held a little lower for a little longer than it needs to be, and that causes an unrelated glitch (for a better story, look at http://www.quora.com/Software-Engineering/Whats-the-hardest-... ).

For any neat sim you make (and there are many!), people will inevitably just say, "Well that's all well and good, but have you tried it with robots?" Failing to do so consistently leads to a lab culture where you simply don't have the expertise in-house to do meaningful debugging, design, or research, because everybody's busy playing with their toy sims.

You'd think that code would still be useful, but given the general quality of academic code and policies about reproducibility (which is to say, lol), pure sim work just disappears as a waste of money.


That's a good explanation and it clears up my doubts, thanks. =)


Using real robots is more convincing. It's easy to hide a simplifying assumption in a simulation. We say simulations are doomed to succeed.

If these guys had used 20 or 100 robots instead of 1,000, it would not have made the news. Plenty of similar work has been done with small numbers of robots. However, this is still novel and a decent contribution. Pushing for scale has its own problems. Also, I have no problem with a bit of glamour and flash: it gets people interested and makes the sponsors happy.

Bona fides: This was my area for a while.


> A computer simulation could have demonstrated the effectiveness of the underlying algorithm just as well.

In theory, there is no difference between theory and practice. In practice, there is.


That more people notice Harvard than a state school is obvious. It's called prestige, and it's self-fulfilling (the best people are attracted, which fuels the prestige).

Agreed wrt the cute little robots. But eventually science requires you to quit messing with simulations and do something in the real world.


And that is the best algorithm they could devise for the robots to self-arrange? It looks like they thought about it for five minutes, and then somehow managed to get published in one of the premier science journals. And what they did has nothing to do with science; it is an engineering problem.


The underlying algorithm is terrible. It took over 11 hours to get 1,000 robots into a very rough shape. However, actually building something that works in hardware is hard; building a better algorithm is far less so.


It took 11 hours because of the Kilobots' locomotion method: they use vibration to move. Without at least looking at the paper, and probably studying it in depth, it would be hard to determine how good the algorithm is.

It is definitely nothing revolutionary; however, bot movement is not externally controlled. They figure out their own relative positioning and where they need to go, and they do that based only on short-range LED communications. So they communicate with a small number of other bots at a time (I think one, but I am not 100% sure), as opposed to, let's say, using a central router to communicate with all of them at the same time.

TL;DR: The sensors, movement systems, and precision are shit on purpose, so the important part is that the distributed algorithm actually worked.
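Roughly, each robot can keep nudging a local (x, y) estimate until its distances to already-localized neighbours match what it measured; a sketch of that general idea (my own simplification, not the paper's exact update rule):

    # Trilateration-style refinement: adjust my position estimate so distances
    # to already-localized neighbours agree with the measured ones.
    import math

    def refine_position(estimate, neighbours, step=0.5, iterations=100):
        """neighbours: list of ((nx, ny), measured_distance) tuples."""
        x, y = estimate
        for _ in range(iterations):
            for (nx, ny), measured in neighbours:
                dx, dy = x - nx, y - ny
                current = math.hypot(dx, dy) or 1e-9
                # Slide along the line to the neighbour until distances match.
                scale = step * (measured - current) / current
                x += dx * scale
                y += dy * scale
        return x, y

    neighbours = [((0.0, 0.0), 5.0), ((10.0, 0.0), 7.0), ((0.0, 10.0), 9.0)]
    print(refine_position((1.0, 1.0), neighbours))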


I'm pretty sure it's externally controlled to some extent. Otherwise, how would they know what kind of shape to build and, even more importantly, how to scale it? To do that, you have to know the total number of robots, something that cannot be discovered locally.


The article answered those questions:

First, all the robots are put together in an unformed blob and are given an image of the desired shape to be built. Four specially programmed seed robots are then added to the edge of the group, marking the position and orientation of the shape. These seed robots emit a message that propagates to each robot in the blob and allows them to know how “far” away from the seed they are and their relative coordinates. Robots on the edge of the blob then follow the edge until they reach the desired location in the shape that is growing in successive layers from the seed.

The algorithm had to account for unreliable robots that are pushed out of their desired location or block other robots performing their functions. Nagpal’s team overcame this challenge by implementing strategies that allowed robots to rely on their neighbours to cooperatively monitor for faults. They also avoided relying too heavily on exact positioning within the shape boundaries.
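The "how far away from the seed they are" part reads like a standard hop-count gradient; here's a toy version of that propagation step (my own sketch based on the description above, not code from the paper):

    # Toy hop-count gradient: seeds hold 0, everyone else converges to
    # min(neighbour gradients) + 1.
    def update_gradients(robots, neighbours, seeds, rounds=20):
        """robots: iterable of ids; neighbours: id -> set of ids in radio range."""
        INF = float("inf")
        gradient = {r: (0 if r in seeds else INF) for r in robots}
        for _ in range(rounds):  # repeat until the values stop changing
            for r in robots:
                if r in seeds:
                    continue
                heard = [gradient[n] for n in neighbours[r]]
                if heard:
                    gradient[r] = min(gradient[r], min(heard) + 1)
        return gradient

    robots = ["a", "b", "c", "d"]
    neighbours = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(update_gradients(robots, neighbours, seeds={"a"}))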


Maybe I'm not understanding the quote correctly, but I see no explanation of how they figure out the correct scale (i.e. the size of a shape - obviously, with 4096 robots, they could make the shape twice as big).


I was pointing out that it's not externally controlled. If you want a full understanding of the algorithm, you'll probably need to read the paper. The supplementary material and the full paper go into this in detail. The supplementary material is a pdf, which I can email to you if you want to see it.


Why do all the robots follow the same path even though it is not the optimal one? Why does no robot try anything new? Where is the exploration/exploitation side of the strategy? What additional information is each robot contributing? Why aren't there groups of agents taking over part of the subproblem?

If they don't use any centralized decision making, then it should have taken them much less time.

The way the robots arrange into the shape mirrors the way a human would do it. A realistic, real-time algorithm with no centralized decision making would move all the robots at the same time. Yes, the stabilization time, until all robots find their rightful position based on peer-to-peer communication, would take a while, and that would have been acceptable. This thing here looks a lot like hard-coded, human-minded approaches to a trivial problem.

If the real advancement was the robots' mechanical movement, then they should have said so. The algorithm they have could have been written by any high-school student with a CS background.


You are relying on the popular-press description of the algorithm to assess its sophistication. If you would like to read it, I can send you the supplementary pdf which explains the algorithm in detail.

The algorithms do look straightforward compared to machine learning approaches, but that's a good thing! I'm guessing you're a machine learning person? I believe this approach comes more from the control theory side of automation. Simple-looking algorithms that achieve complex phenomena are interesting. The authors also provide proofs that the algorithms will, in fact, achieve the goal.

I also want to point out that they are under hardware constraints that preclude approaches requiring large amounts of computation and memory. From the supplementary material:

The Kilobot robot has strict limits on its available memory: 2K RAM and 32K for program memory, which includes the code for the bootloader/wireless programming, robot movement, and communication libraries. The full self-assembly algorithm takes approximately 27K of program memory including 241 bytes for the shape description and scale factor. The Kilobot also has limits on message size; we optimize by combining messages from the various primitives, placing all the data in a single 7 byte message.
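For a sense of how tight 7 bytes is, here's a hypothetical packing; the field layout is invented for illustration and is not the actual Kilobot message format:

    # Hypothetical 7-byte swarm message (invented layout, for illustration).
    import struct

    def pack_message(robot_id, gradient, x_mm, y_mm, state):
        # B = unsigned byte, h = signed 16-bit int: 1 + 1 + 2 + 2 + 1 = 7 bytes
        return struct.pack("<BBhhB", robot_id, gradient, x_mm, y_mm, state)

    msg = pack_message(robot_id=42, gradient=7, x_mm=-120, y_mm=305, state=2)
    print(len(msg), msg.hex())  # 7 bytes on the wire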

In general, I find it dangerous to draw conclusions about the novelty of research from the popular-press description of it.


I don't want to downplay what was involved as I think it's a cool project. However, from a real world perspective:

If the algorithm requires everything to be in one clump, it's not robust, even if it's 'provably' correct.

It requires specifying a small set of special bots, which is a form of external control.

It scales poorly: a bird flock can respond quickly even at 100,000 birds using a simple algorithm; try to calculate how long building a shape would take with 100,000 of these bots.
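A back-of-envelope, taking the 11-hour figure for roughly 1,000 robots quoted upthread and assuming (probably optimistically, given the single edge-following stream) that assembly time grows linearly with swarm size:

    # Back-of-envelope only: assumes assembly time scales linearly with
    # swarm size, which is likely optimistic for edge-following assembly.
    hours_per_1000 = 11          # figure quoted upthread for ~1,000 robots
    swarm_size = 100_000
    estimated_hours = hours_per_1000 * swarm_size / 1000
    print(f"~{estimated_hours:.0f} hours, i.e. about {estimated_hours / 24:.0f} days")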


All good points, but you are explaining why this research is not a landmark development. Most research, however, is iterative. Get a simple step working, which contains a bunch of simplifying assumptions. Then slowly try to remove those assumptions. That takes time and effort, and we publish each of those steps, and call them "novel".


Mind shooting me that PDF? I had a colleague who worked with the Kilobots last year while we were preparing our own paper. :)


Sent. (I only said this here so you can check your spam folder in case it gets flagged. An email from an address you've never seen before saying "Here's that thing you requested", with an attachment, probably looks like spam to most filters.)


Thanks very much!


> The underlying algorithm is terrible.

How do you arrive at that assessment? Compared to what? Is it at the state of the art or not?


I believe I've read this story somewhere before.

http://www.amazon.com/Kill-Decision-Daniel-Suarez/dp/0451417...



