The pessimist (realist?) in me can't avoid remembering Vernor Vinge's "A Deepness in the Sky" novel, where the perfect surveillance device is a network of "dust computers".
"See, there's mites around all the time. They use sparkles to talk to each other," Harv explained. "They're in the food and water, everywhere. And there's rules that these mites are supposed to follow. They're supposed to break down into safe pieces... But there are people who break those rules [so the] Protocol Enforcement guys make a mite to go out and find that mite and kill it. This dust - we call it toner - is actually the dead bodies of all those mites.
Toner: dead bits of nanomachines. From The Diamond Age, by Neal Stephenson.
That may be so, but imagine when they get cheap enough that e.g. NSA installations must be kept under ridiculous cleanroom conditions, because every speck of dust may be conducting counter-surveillance or automatically dumping footage onto YouTube.
Perhaps at some point we'll be able to harness weakly interacting particles, like neutrinos, to such an extent that we can just fire a beam through a building and image the entire contents including magnetic domains or memristor structures or crystal lattices or whatever we're using for data storage then.
Actually, you can see that scene in another brilliant SF novel: David Marusek's "Counting Heads" (even better than "A Deepness in the Sky", in my opinion).
Ehhh. Marusek makes several rather large conceptual leaps in order to make his plot hang together: notably, both human clones and intelligent machines have no civil rights, at all, to the point that either can be killed at the will of their owners.
Most ludicrously, there's no hint of political opposition or a protest movement. At no point does any character say anything like "Hey, wait, maybe clones are humans too?" or "Maybe sentient machines should have rights?"
If there's anything sci-fi has taught me, it's that sentient machines will either be our salvation or our doom, and in either case, treating them like they have no rights isn't good for our health in the long run, so doing so is stupid.
As for clones, considering how often our bodies replace all our cells, you aren't remotely close to the same person you were even a year ago, which proves that it's our minds and experiences/memories that make us who we are. With that in mind, and knowing many other people would agree, the idea that everyone would be OK with a clone slave force is absurd. Maybe if they were brainless chunks of lobotomized flesh, incapable of learning and totally empty of sentient thought, but otherwise?
I'm reading through Iain Banks's Culture series currently, and I enjoy the way he treats it.
In a post-scarcity culture, where energy and information are more or less the only resources, what argument is there against agreeing to give sufficiently advanced AIs rights?
If you haven't read Charles Stross' Accelerando [1] yet, there's a character early on who advocates for AI rights, in the hope that the AIs will later treat the humans who made them fairly.
It doesn't have to be for "our" benefit (who's that? not us personally, we won't be there). It could be for their own benefit - same reason slaves were freed.
Love that book; it also reminded me of his book/short story Fast Times at Fairmont High, where they use small devices like this as breadcrumbs to keep people connected to the grid (a mesh network).
Ahh, my bad, he did have a short story called Fast Times at Fairmont High that was related to/in the same universe as Rainbows End. I've read both. Great author.
From Wikipedia [0]:
> Vinge's 2006 novel, Rainbows End, set in a similar universe to Fast Times at Fairmont High, won the 2007 Hugo Award for Best Novel.
That's what came to mind when I read this sentence of the article:
Numerous specks of technology could be discretely placed to invisibly monitor a home, business, or personal device.
I wonder what the people working on these technologies think of pervasive surveillance, and what their stance is on the gradual erosion of privacy, because that's essentially what they're enabling.
If you watch the video, at least one person working on this specifically calls out surveillance as a potential application. He doesn't explicitly say much about his own opinion of it, but the team is clearly aware of what it's doing.
I'm hugely impressed by this tech, and I can't wait to see what happens when these devices (or similar) can automatically become a dynamic cluster, but the term "smart dust" will almost certainly come back and bite these companies in the near future.
PR isn't my thing, but "smart dust" has a whole heap of potential negativity when said by the general media or heard by the general populace. Phones, cars, and cards may be "smart", but most people want their dust to be dumb.
The article is pretty light on the technical details about the actual processor architecture.
I think their choice of naming is unfortunate; confusion with ARM's Cortex-M3 is quite likely. Or are you supposed to pronounce this "M to the third" or something?
Update: D'oh, of course it's "M-cubed". I knew that, I blame not being a native speaker for not thinking about it. :)
Thanks! No, it's definitely not ARM, that was my point.
It seems to consist of an "8-bit CPU, a 52x40-bit DMEM, a 64x10-bit IMEM, a 64x10-bit IROM". Those are some seriously small memory sizes; a total of 64 instructions (the IROM holds the code) really isn't a lot to play with.
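For a sense of scale, here's the arithmetic converting those dimensions to bytes (the word counts and widths are from the quote above; everything else is just multiplication):

    # Memory sizes as quoted: (words, bits per word)
    memories = {
        "DMEM (data)":         (52, 40),
        "IMEM (instructions)": (64, 10),
        "IROM (code ROM)":     (64, 10),
    }

    for name, (words, width) in memories.items():
        bits = words * width
        print(f"{name}: {words} x {width}-bit = {bits} bits (~{bits // 8} bytes)")

    # DMEM (data):         52 x 40-bit = 2080 bits (~260 bytes)
    # IMEM (instructions): 64 x 10-bit =  640 bits (~80 bytes)
    # IROM (code ROM):     64 x 10-bit =  640 bits (~80 bytes)

So roughly 260 bytes of data memory and 80 bytes of code, which puts "not a lot to play with" in perspective.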
Though if the silicon process technology is as good as they claim, then scaling that up a little bit (to be on par with conventional 8-bit micros) won't be too bad. It will be interesting to see whether they can maintain the kind of leakage they're talking about at a smaller process node.
They usually pronounce it "M-cubed", which is also unfortunate, because the university started some sort of research program (also called "M-cubed") a while after that name got ingrained.
I didn't even consider that this might not be ARM. There's the Cortex M0, M1, M3, and M4; you'd be silly to call a product M^3 if it wasn't related. If not, it would be lawsuit fodder, because it certainly sounds too related to be coincidental.
Smaller, lower-power computers are great. But the problem is, and remains, the power consumption of the RF channel. It's all well and good that your processor consumes microamps, but one second of WiFi traffic at a reasonable transmit power takes hundreds of milliamps. A smart sensing system's power consumption is dominated by the energy cost of wireless communication.
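As a rough illustration of how lopsided that is (the numbers below are ballpark assumptions, not figures from the article):

    # Ballpark assumptions for illustration only
    mcu_sleep_ua = 10       # micro-power MCU idling, in microamps
    wifi_tx_ma   = 200      # WiFi transmit burst, in milliamps
    tx_seconds   = 1.0      # one second of transmission

    tx_charge_ua_s = wifi_tx_ma * 1000 * tx_seconds   # charge used by the burst, in uA*s
    equivalent_idle_s = tx_charge_ua_s / mcu_sleep_ua

    print(f"1 s of WiFi TX ~= {equivalent_idle_s / 3600:.1f} h of MCU idle time")
    # -> about 5.6 hours with these assumed numbers

Which is why duty-cycling the radio (or avoiding WiFi entirely in favour of lower-power links) matters far more than shaving microamps off the CPU.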
> It's all well and good that your processor consumes microamps, but one second of WiFi traffic at a reasonable transmit power takes hundreds of milliamps.
Early cell phones took an exceptional amount of power as well. Not simply because they were earlier tech, but because the nearest tower might be 20 miles away.
Mesh networking is going to be the new black if Io(miniature)T really takes off.
From the PDF walterbell linked to, section "System Overview":
>...two ARM® Cortex-M0 processors are located in separate layers with different functionality as follows:
>The DSP CPU efficiently handles data streaming from the imager (or other sensors), thus is built in 65nm CMOS (Layer 3) with a large 16kB non-retentive SRAM (NRSRAM).
>The CTRL CPU manages the system using an always-on 3kB retentive SRAM (RSRAM) to maintain the stored operating program, and is built in low leakage 180nm CMOS.
In lieu of ROM, the CTRL CPU simply keeps its SRAM powered at all times.
I can't seem to find the CPU frequencies, but I would imagine they are very low, as a previous, slightly larger version from 2010 had a Cortex M3 running at 1MHz max [1].
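A back-of-the-envelope sketch of why that split pays off: the always-on CTRL layer with its 3kB retentive SRAM sets the power floor, while the 65nm DSP layer with the 16kB NRSRAM only burns power during short processing bursts. All the current and timing figures below are made-up placeholders, just to show the duty-cycle arithmetic:

    # Illustrative assumptions only - not figures from the paper
    always_on_nA  = 50      # CTRL CPU + 3kB retentive SRAM, in nanoamps
    dsp_active_uA = 100     # DSP CPU + 16kB NRSRAM while processing, in microamps
    burst_ms      = 10      # length of one processing burst
    period_s      = 60      # one burst per minute

    duty = (burst_ms / 1000) / period_s
    avg_uA = always_on_nA / 1000 + dsp_active_uA * duty
    print(f"duty cycle: {duty:.1e}, average current: ~{avg_uA:.3f} uA")
    # With these assumptions the average stays well under a microamp,
    # dominated by the retention current rather than the DSP bursts.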
Sorry, I just epiphanised (don't worry, I'll clean it up later).
My "amazing thought":
One day, these things will be like Lego. You may care that your younger sister ate one, or that the dog buried that part in the garden, but you won't "really care", not like you would about a computer or a phone!
If standards for mesh networking and cluster computing become a tiny bit better (and more widely adopted), these things are the internet of things. Almost everything else is obsolete.
Self-powered, independent micro-modules that can somewhat autonomously join and depart from processing pools mean that upgrading your "home computer" means buying more Lego (or a new table, or light-bulbs, or a car) and moving it within range of the other stuff in your house. There will be no computer, just interfaces to your local compute cluster, which you probably won't personally own much of, due to shared processing power agreements with your neighbours and friends.
I know none of this is original, and I even get that this is the "big goal" of the IoT, but it's actually happening, and we get to see it happen! (Along with a wonderful new bag of problems relating to super-distributed trust, cost, control, etc.)
Sorry, over-excited, taking myself off to bed now.
I also think these are incredibly cool! I want to be able to play with these and pick them up at the store like an Arduino. The internet of things is hardly just internet-connected Nest home systems, though of course those count. There are a lot of hackers (in the maker sense) who would eat these things up and come up with amazing uses that academia simply doesn't have the manpower to supply. I wish hardware weren't so hard to bootstrap, so we could get these things out en masse quickly.
Dust-computing is upon us. All hail the holy grey goo!
Seriously though, I wonder what's holding back energy harvesting in general - or are we just at the beginning of the energy-harvesting wave?
I think that in most cases, using a rechargeable battery is cheaper, and cheaper wins in most areas of embedded systems. Also, most embedded systems need power for something like a motor/LCD/relay/etc., and energy harvesting can't supply enough energy for that. And the wireless connectivity of such networks is only partially solved and not yet mature, I think. Even the microcontrollers are only halfway there.
Also, the places that could use energy harvesting are sensor networks in industry and buildings, and those are quite conservative industries (for good reasons) that greatly care about reliability. And the field of industrial sensor networks isn't fully developed; applications like predictive maintenance are quite new. The same goes for the medical industry, where such sensors could be valuable.
Stretching this a bit: when someone marries this with the MIT things that auto-assemble... we risk getting some sort of Stargate Asurans race, which is based on nanites. Self-assembling nanobots [0].
Of course this is just sci-fi... but nonetheless...
In the linked paper, they describe their choice of 180nm as optimal for power. Given the tools they used, they could make an even smaller computer in 130nm or 90nm technology, but the power draw would be higher, according to the graph. Or they could add more memory instead of shrinking the size.
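A toy model of that trade-off (all coefficients below are invented, purely to show the shape of the argument): per-operation switching energy shrinks at smaller nodes, but leakage grows, and for a device that sleeps almost all the time the leakage term wins, so an older node like 180nm can come out ahead:

    # Invented relative numbers, only to illustrate the trade-off
    nodes = {   # node: (active energy per wake-up, leakage power), relative units
        "180nm": (1.0, 1.0),
        "130nm": (0.6, 3.0),
        "90nm":  (0.4, 8.0),
    }

    sleep_fraction = 0.999   # the device sleeps ~99.9% of the time

    for node, (active, leak) in nodes.items():
        total = active * (1 - sleep_fraction) + leak * sleep_fraction
        print(f"{node}: relative energy ~{total:.2f}")
    # With these made-up numbers 180nm comes out lowest, because leakage
    # dominates when the duty cycle is tiny.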
The CPU is absolutely tiny, not much larger than the temperature sensor.