I remember reading this article in Scientific American at the time. It seemed to get a lot of things right even then, although it also contained some hopelessly off-base ideas. One of the interesting big-but-close misses is the idea of "tabs": the notion that ubiquitous computing would mean lots of computing devices, each of them not very sophisticated. You see the same idea in the Star Trek: The Next Generation (and related) TV series from the same time frame. The reality is that it's shockingly easy to make very capable computing devices (e.g. smartphones, tablets, etc.), and for the most part you only need one of each variant (e.g. an ebook optimized one, a tablet, a smartphone, etc.) instead of many.
Also note the sharp contrast between the concept of "scrap computers" that have no identity whatsoever and the reality of today, where everyone's mobile computers (phones, tablets, e-readers, laptops, etc.) are all intensely personalized. A lot of these errors come from trying to naively map pre-digital behaviors and use cases onto a world of ubiquitous computing, rather than imagining entirely new methods of interaction and use. Why bother duplicating hardware to maintain two separate pieces of information locally when you can simply have two files open in different tabs in an editor, or two Google Docs, or emails, or what-have-you?
> for the most part you only need one of each variant (e.g. an ebook optimized one, a tablet, a smartphone, etc.) instead of many.
"need" sets a very low bar for criticism. You can argue that you don't need pretty much anything.
The question is really whether there is a way for many device-instances to offer a sufficiently better cost-benefit trade-off than a single device-instance.
For example, I find it plausible that multiple tablets could be quite useful if there was a good way to coordinate the displays and interactions between them (e.g. for proofreading and arranging material). Such benefits would probably require changes to the OS UI.
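To make the kind of coordination I mean concrete, here's a toy sketch (plain Python, made-up class names, no real OS or device API) of several tablet displays observing one shared document, so rearranging pages on any device updates all of them:

    # Toy observer sketch: several device displays subscribe to one shared
    # document, so rearranging pages on any one of them updates all of them.
    # Nothing here is a real OS API; it only illustrates the idea.

    class SharedDocument:
        def __init__(self, pages):
            self.pages = list(pages)
            self.displays = []

        def attach(self, display):
            self.displays.append(display)
            display.render(self.pages)

        def move_page(self, src, dst):
            # A drag gesture on any attached tablet would call this.
            page = self.pages.pop(src)
            self.pages.insert(dst, page)
            for d in self.displays:  # every attached device redraws
                d.render(self.pages)

    class TabletDisplay:
        def __init__(self, name):
            self.name = name

        def render(self, pages):
            print(f"{self.name} shows: {pages}")

    doc = SharedDocument(["intro", "figures", "draft", "notes"])
    doc.attach(TabletDisplay("tablet-left"))
    doc.attach(TabletDisplay("tablet-right"))
    doc.move_page(2, 0)  # drag "draft" to the front on either tablet

The synchronization logic itself is trivial; what's missing is an OS-level way for independent devices to share that state.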
This is totally doable and could have been done 10+ years ago. Except it didn't happen, because unfortunately, the problem isn't technology. It's business.
Software vendors have no incentive to make it easy for people to exchange data between devices and software they have no control over. "Ubiquitous computing" gives them little to no business value, nowhere near enough to justify the effort of making their applications support it, and they see anything that makes it easier to extract data from under their control as a business threat.
These days, big companies like to build cloud platforms that enable a limited "ubiquitous computing" for themselves - that is, you can work on something across multiple devices, as long as you're using their specific platform and have an always-on Internet connection.
The technical building blocks would need to happen at the OS level, and they could, but OS vendors won't bother either, knowing that applications won't make proper use of them. Commercial applications will try to maximize the amount of data they suck in and minimize the amount they let out. It's fundamentally the same reason we don't have universal APIs for websites, and why websites fight so hard against people who try to make them interoperable (see also the Google Duplex HN thread).
I really want to see ubiquitous computing happen, but I can't see how it will, given that the software industry would reject it even if OSS people handed it to them on a golden platter, ready and working.
> Software vendors have no incentive to make it easy for people to exchange data between devices and software they have no control over
That isn't needed to get the multi-device UIs where "multiple tablets could be quite useful if there was a good way to coordinate the displays and interactions between them"; the devices could all be from the same manufacturer. For example, games where not all players get the same information (e.g. Scrabble, Cluedo, many card games) could have the shared UI on a tablet, with players using their phones to look at their cards/tiles.
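A rough sketch of the data model for that kind of game (plain Python, invented names, no networking or real device framework) just to show how little is needed to split a shared "table" view from each player's private phone view:

    # Toy sketch of an asymmetric-information game: the tablet on the table
    # renders the shared view, each player's phone renders only their hand.
    # All names are made up for illustration; there is no real device API here.
    import random
    from dataclasses import dataclass, field

    @dataclass
    class GameState:
        deck: list = field(default_factory=lambda: [r + s for r in "23456789TJQKA" for s in "CDHS"])
        hands: dict = field(default_factory=dict)  # player -> private cards
        table: list = field(default_factory=list)  # cards everyone can see

        def deal(self, players, n=5):
            random.shuffle(self.deck)
            for p in players:
                self.hands[p] = [self.deck.pop() for _ in range(n)]

        def shared_view(self):
            # What the tablet in the middle of the table shows.
            return {"table": self.table,
                    "hand_sizes": {p: len(h) for p, h in self.hands.items()}}

        def private_view(self, player):
            # What that player's own phone shows.
            return {"your_hand": self.hands[player], "table": self.table}

    state = GameState()
    state.deal(["alice", "bob"])
    print("tablet:", state.shared_view())
    print("alice's phone:", state.private_view("alice"))

The hard part is purely the coordination plumbing between devices, which a single manufacturer could provide without anyone else's cooperation.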
Things can change given time. There have been changes that seemed impossible from a business standpoint that nonetheless ended up happening because circumstances changed.
This is a prescient article given the state of computing at the time it was written. Reading it, I also wondered whether it explains Steve Jobs' insistence on using the name iPad (he was pressured to change the name because the default reference of "pad" at the time was to something rather different). But the author really blows it at the end (the last two paragraphs) when he claims that ubiquitous computing will mean the decline of the computer addict and of information overload.
I was reading the paper published on Project Jacquard, and this article was cited as one of its inspirations. The paper also says that Project Jacquard fulfils some of the predictions made here.
Calm, IMO, is most often a byproduct of a long learning process and craft; that's what brings know-how. In these days of impatience and ever-shifting ground, you only get stress.