In the shots over his shoulder you see him using the original mouse with his right hand, and a "chord keyboard" with his left. He calls it a "keyset". That's a set of buttons, basically function keys, except they can be "played" in combinations. The chord keyboard was also an I/O device on the Xerox Alto, but AFAIK wasn't used again.
I notice also that things didn't go as expected; he makes quite a few mistakes and very charmingly apologizes. But the fact that you can make mistakes and smoothly recover from them was an important part of the demo.
For context, by 1968 the concept of interacting directly with a computer in what we today would call command-line mode had been around for several years. The DEC PDP-1 was released in 1959; the IBM 1620 in 1960; Multics had started in 1964; the IBM 360 series began deliveries in 1965; all allowed an operator who had the privilege of being in the "machine room" to enter commands and get responses at a typewriter. The PDP-1 was alone (I think?) in having a two-dimensional vector-based screen, but it was used for graphics (including the original Spacewar!), not for text.
Overwhelmingly, the paradigm for working with a computer was to prepare a "job" consisting of commands and blocks of data, all on punch cards, which was run as a "batch", producing output to a printer and/or disk or tape. If you made a mistake in setting up your job, you found out about it later, and could only "edit" by re-punching some cards and re-running the job.
So when Engelbart showed smooth interaction with data, editing and changing data ad hoc, and making mistakes and correcting them ad hoc without drama, on a two-dimensional screen -- that alone was a revolution. Hyperlinked media and hierarchical lists were icing.
Twiddler chording keyboards were used by wearable computing people in at least the '90s. Looks like some Twiddler brand devices are still marketed. https://twiddler.tekgear.com/
Wonder if the chording keyboard was an extension of stenotypes
>The stenotype keyboard has far fewer keys than a conventional alphanumeric keyboard. Multiple keys are pressed simultaneously (known as "chording" or "stroking") to spell out whole syllables, words, and phrases with a single hand motion. This system makes real-time transcription practical for court reporting and live closed captioning. Because the keyboard does not contain all the letters of the English alphabet, letter combinations are substituted for the missing letters.
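For anyone curious how little machinery the chording idea needs, here is a toy sketch in Python of decoding a five-key keyset. The key-to-letter mapping is purely illustrative (not the actual NLS or stenotype layout); the point is just that five keys give 31 distinct chords, which is enough for a whole alphabet from one hand.

    # Toy sketch of chord decoding for a five-key keyset.
    # The mapping (chord value n -> nth letter) is illustrative only.

    KEYS = "12345"  # physical keys, least-significant bit first
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def chord_value(pressed):
        """Convert a set of pressed keys, e.g. {'1', '3'}, to its chord value."""
        return sum(1 << KEYS.index(k) for k in pressed)

    def decode(pressed):
        """Return the character for a chord, or None if the chord is unmapped."""
        value = chord_value(pressed)
        return ALPHABET[value - 1] if 1 <= value <= len(ALPHABET) else None

    assert decode({"1"}) == "a"       # key 1 alone
    assert decode({"2"}) == "b"       # key 2 alone
    assert decode({"1", "2"}) == "c"  # keys 1 and 2 pressed together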
I messaged his daughter Christina a few years back about what influenced his work:
Here are two key links to his thinking on what's missing in today's information technology that would be very important to integrate in a seamless environment:
Many previous posts, of course, but most of the threads have been a bit less interesting than you might expect given how classic the topic is. Some of the better ones were:
My mind was blown when I saw the connections [0] between Stewart Brand and many of the folks whose work deeply shaped my thinking about computing and systems. In retrospect, the influence is obvious and it'd be surprising if it had worked any other way. Which makes me wish CS programs included more courses covering the history and philosophy of computing. I think we spent a week or two on it -- Alan Kay even did a guest lecture -- but I still feel there's tremendous value in focusing on why we are doing what we do.
Good one. For an even earlier history, one should read 'The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture' by Brian Dear. The author spent decades researching and interviewing the pioneers. The PLATO system had chat rooms, instant messaging, screen savers, multiplayer games, flight simulators, crowdsourcing, interactive fiction, emoticons, e-learning (like MOOCs) etc.
While I'm at it, here are Engelbart, Nelson and Howard Rheingold chatting casually over dinner. I really love this very much. These are two of my favorite bookmarks.
It's not surprising about previous comments: there's not much you can meaningfully add after watching it. It floored me the first time, and I've been (just a lil' bit) disappointed about modern computing ever since!
One thing I occasionally like to think about -- either in the context of writing a program, or just driving down the highway when my thoughts are wandering from controlling a two-ton vehicle going 65 MPH -- is, What would a computer on an alien spacecraft look like?
Not for how it would be different because aliens are aliens, but because you're presumably looking at something that is the result of thousands of years of iteration and refinement. If you're the Asgard from Stargate, you presumably aren't just doing OS updates and re-skinning your UIs and throwing away your old frameworks every three to five years for the fun of it. At some point you presumably figured out how your software should work, wrote it right, iterated on it until it was essentially bug-free, and now only extend it as needed within the same set of frameworks, languages, and UI paradigms your civilization has been using for hundreds of years.
They would just lose track and not understand the lowest level.
I mean, look at us now - we're kind of analogous to super-AIs created by cells, and we try to program cells, but we can't really understand every detail; we try to manage them instead.
At some point the aliens would just have their lowest levels of abstraction frozen due to losing the ability to manipulate them. And take it for granted, like we take our brain cells or muscle cells for granted.
Possibly nothing like anything we would recognise except at the most basic level I think.
It's possible to imagine that their equivalent of a GUI is entirely unlike ours; flat screens work for us because our visual cortex and our narrow field of view/depth-perception framework are essentially being tricked.
A UI for something intelligent with a compound eye, or ten eyes, or whatnot could be very different.
Not to mention that if we give said aliens hundreds of years of computers, they would likely arrive at something akin to a neural interface. I mean, a computer as a direct expansion of your cognitive ability has to be the end point in user interfaces: you don't even have to think about how to translate what's in your head into something the computer can understand and parse; you manipulate it as easily as thinking of a pink elephant in a tutu, it is just there.
It would also likely be heavily influenced by their psychology.
What would be fun is trying to create a router between our systems and theirs down the line.
We've had electric computers for a blink in time compared to the wheel; there are a lot of places to go between a Ryzen 3950X and computronium.
> I mean a computer as a direct expansion of your cognitive ability has to be the end point in user interfaces,
William Gibson hints at something similar to this in the graduate work of a character in the short story “Dogfight”[0] from the excellent collection “Burning Chrome”.
I think before that, the act of programming will be supplemented by AI and will become more descriptive than procedural.
I think about SQL, where you typically describe the shape of the data and the relationships you want, rather than how to get it. The database then creates a query plan -- effectively a "program" -- compiles it, then executes it.
I think programming will become more like that, on its way to what you describe.
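To make the SQL point concrete, you can ask the engine to show the "program" it compiled from a declarative query. A minimal sketch using Python's built-in sqlite3 module; the table and index names here are made up for illustration:

    # Declarative query -> query plan, sketched with the standard-library sqlite3 module.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
    conn.execute("CREATE INDEX idx_customer ON orders (customer)")

    # We only say *what* we want...
    query = "SELECT customer, SUM(total) FROM orders WHERE customer = ? GROUP BY customer"

    # ...and the engine decides *how*: EXPLAIN QUERY PLAN shows the plan it compiled.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("alice",)):
        print(row)  # typically includes a step like searching orders via idx_customer

The interesting part is that nothing in the query mentions the index; the planner chooses to use it on its own, which is exactly the descriptive-over-procedural split.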
I'm familiar with Burning Chrome; I read it when it came out (after 9-year-old me figured out how to get to the library on my own and ask the nice lady behind the counter how to order books in -- my mum disliked the former part but was impressed with the latter. She was even-handed: I got grounded for a week, but since I spent most of my time in my room reading, I realise now that was superb parenting).
I really liked the novel "Constellation Games" (cf. the review at https://boingboing.net/2013/02/20/constellation-games-debut-...). A federation of alien civilizations pays a visit to Earth, and a game developer decides to investigate their cultures by reviewing the aliens' video game systems.
Frameworks and standards, definitely. But bug-free software written to do exactly the right thing won't happen much. We're moving faster and faster, and I don't think that's going to change. What seems to be happening more is designing around that fact to avoid disasters.
Judging from the rest of the engineering disciplines, people figured out the right processes, which yield reliable results -- we don't have too many civil-engineering failures, exploding steam boilers, or houses burning down because of overheating/arcing in the mains wiring anymore.
Although the computer geeks are probably inclined to think "yes, but what we are dealing with is more complex than that" -- I don't think there is a big difference between knowing how to safely connect two wires together and knowing how to safely copy one string to a buffer. Electricians took time to arrive at a consensus (but they did it), and I believe software engineers will too.
At some point the technology matures and progress becomes incremental. Hundreds of years from now, there’s no reason to suppose our software would need to change much. We will have figured out what works best for humans over a variety of situations.
Take light bulbs: pretty much perfected, so much so that they had to be made worse to keep selling. And then suddenly LED lights appear, and the whole thing starts over again, only faster. The best predictor of the future is the past, and I see no reason to assume evolution will slow down.
Since we're dealing with fiction, from A Deepness in the Sky:
>The Qeng Ho's computer and timekeeping systems feature the advent of "programmer archaeologists":[1] the Qeng Ho are packrats of computer programs and systems, retaining them over millennia, even as far back to the era of Unix programs (as implied by one passage mentioning that the fundamental time-keeping system is the Unix epoch):
>>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
>This massive accumulation of data implies that almost any useful program one could want already exists in the Qeng Ho fleet library, hence the need for computer archaeologists to dig up needed programs, work around their peculiarities and bugs, and assemble them into useful constructs.
So to answer your question, a mess of standards going back millennia with glue-code everywhere. I blame the separation of documentation and code for the coming dark age.
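As an aside, the novel's "about fifteen million seconds" is easy to sanity-check with a quick back-of-the-envelope in Python (the landing time is approximate):

    # Seconds between the first Moon landing and the Unix epoch.
    from datetime import datetime, timezone

    moon_landing = datetime(1969, 7, 20, 20, 17, tzinfo=timezone.utc)  # Apollo 11 touchdown, approx.
    unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)             # time_t zero

    offset = (unix_epoch - moon_landing).total_seconds()
    print(f"{offset:,.0f} seconds")  # roughly 14 million

which comes out around 14 million seconds -- the same order of magnitude as the figure in the book, close enough for fiction.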
One of the many interesting things about the Zones of Thought is that all of those problems with software you describe are explicitly there to protect the likes of humans from what can exist on the other side of the singularity.
Whenever code is on screen I pause and google it if it's recognisably not total gibberish -- have done for years. It's always fascinating when it's plausible and funny when it is not.
My most recent one was Netflix's Close, though that one was gibberish; it was just an HTML form with nonsensical attributes.
I'm not surprised this has been shared many times before, and yet I'm glad people are still discovering it. My journey to Engelbart's demo came through reading "The Dream Machine" book about JCR Licklider, which itself is an incredible computer history record.
You've got something there. The last time I saw something in tech that felt "magical" was the first time I saw an iPhone. The intro video from Jobs is something I've watched several times over the years. It's a masterpiece of marketing and storytelling.
Malwarebytes throwing up the following warning AGAIN: Website blocked due to Trojan.
Shortly after my original post about this, the site was accessible and there were no warnings... now I am receiving the same warning again from Malwarebytes. Not sure what's going on, but I thought I would let folks know.
Instead of files and mouse cursors we have a vastly different set of components that all work together, and of course they integrate with modern features like Apple Pay and QR code scanning. Oh yeah, and there is videoconferencing... just like in his demo :)