I had the pleasure of visiting Dynamicland and hanging out with Peter Norvig, Nicky Case and many others. There are a few things that stuck out for me:
- The community! Most people I interacted with were pretty interdisciplinary. Some came from an education-heavy background, others from more of a programming-language background. Some from interface design, others from a physical-installation or video-game background (and everything in between). Everyone had their own degree of skepticism but was excited by the discussions that were happening (a sign of a good research culture!).
- The representations: What excites me the most about Dynamicland is that it's an environment for fostering unique representations of ideas. The cultural forces & ideas (social programming, remixing, visible state / code at all times, 3d environment, etc) baked into the space encourage the experimentation & creation of new ways of representing complex ideas. Some scattered examples here - https://twitter.com/dynamicland1?lang=en
- Removal of artificial barriers: When programming on a computer these days, there are so many barriers to doing simple things. The amount of code that exists to do virtual actions in a virtual world is MASSIVE, and acts as a huge barrier. This is something Bret talks about at the end of his interview on Postlight (https://postlight.com/trackchanges/podcast/computing-is-ever...). Because code / programs in Dynamicland are embodied in the room, you don't need code to move a dialog box or a slider around. You just move it yourself. This is super powerful, and it means your code can focus on the actual computation itself, not on virtual UI movement (there's a toy sketch of what I mean after this list). Eventually, Dynamicland will have robotics to automate moving of objects, but this is already a great start.
- Moving around: Moving around, even if it's just around a table, is AWESOME. We're so used to sitting (or standing) at a desk and staying still that we don't get to take advantage of embodied intelligence. We think in SO many different ways, and our "thinking" doesn't just happen in the head. Our arms, legs, stomach, and feet all contribute to the thinking process. Combine that with multi-sensory representations that live across all of these channels and you can explore a thought & idea space SO quickly and uniquely. This is so hard to describe and under-rated. This link (http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...) and this talk (https://www.youtube.com/watch?v=agOdP2Bmieg) attempt to do these ideas justice but it's quite hard to transfer this context!
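To make the "removal of artificial barriers" point concrete, here's a toy Lua sketch of the difference. All the names are made up (this is not Realtalk's actual API); the point is just that when the slider is a physical object, the only code left to write is the computation that consumes its value:

    -- stand-in for the vision system: where does a marker sit along a
    -- printed track, as a number in [0, 1]?
    local function markerPosition(objectId)
      return 0.42  -- in the real system this would come from camera tracking
    end

    -- the "program" is just the computation itself; no hit-testing,
    -- dragging, or redrawing code for a slider widget
    local zoom = 1 + markerPosition("slider-17") * 9  -- map [0,1] to [1,10]
    print("zoom level:", zoom)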
Is it possible to visit Dynamicland as a random interested person? I'm on their mailing list and couldn't make the open house last spring - maybe they'll have another one at some point.
I have no idea, to be honest. I just gave a nominal amount; I think it was like a hundred dollars. I would happily give a LOT more if I had stronger financial means / if I thought it would actually have an impact.
It seems like they're mostly seeking funding from larger institutions, as crowd-funding such a large research effort isn't really feasible or sustainable.
When I visited Dynamicland in January, I was building on a spreadsheet-like program next to my friend Omar who was building a map-based interface. Just by virtue of sitting next to each other, we were able to keep apprised of what the other was up to. At some point Omar needed a way to input a number to control the zoom of his map. My spreadsheet program had plenty of numbers, and ways to manipulate numbers, so we slid over one of my pieces of paper and it immediately worked to zoom into his maps. Omar decided that he’d prefer a slider-based number-input, and after it was built, he slid it over to my side of the table, and we used a multiplication operator I had built to expand the slider’s range. Again, it just worked. At Dynamicland you get composability and interoperability “for free.”
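If it helps to see the mechanism, here's a toy model of why things composed like that, as I understand it: pages don't call each other directly, they publish and read statements in a shared space that everything on the table can see. This is plain Lua with invented names, not the actual Realtalk code:

    -- toy "shared space": pages publish claims, other pages read them
    local claims = {}
    local function claim(key, value) claims[key] = value end
    local function read(key) return claims[key] end

    -- my spreadsheet page publishes a number
    claim("number near map", 3)

    -- Omar's map page reads whatever number happens to be nearby
    print("map zoom:", read("number near map") or 1)

    -- slide over a multiplication page: it rewrites the same claim,
    -- and the map picks up the new value without any glue code
    claim("number near map", read("number near map") * 4)
    print("map zoom now:", read("number near map"))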
I jumped around and skimmed a bit, so maybe I missed it, but are they programming for the room itself only, or in a framework/library that allows for this, or just in a similar language? Is it all just JavaScript and a well-defined hierarchy of objects that can be manipulated?
How do two people, working on two separate programs, share implemented functions "for free"? The sliding across the desk portion is irrelevant (well, it's cool, but for this specific question it's no different than sharing a gist link I think), what I'm wondering is the details.
Ah, thanks. For those following along, the details (from what I've found so far) indicate that a language they've developed at Dynamicland for this, Realtalk, is used; it embeds the idea of the whole platform (papers that can be both sources and targets of information/code), and how those papers interact with each other, into itself.
So, yes, it's a shared language, and yes, it's a shared framework (the language embeds the framework concepts as first-class entities in how it functions). That makes sense, and it also explains how some output display function (which would be fairly generic in this context) would be easily shareable and, by the nature of the platform, also immediately usable if written in a certain way.
It is similar in concept to how JavaScript on a webpage has access to the DOM, and the bindings for the DOM can be expected, so you can write something that transforms a <table> in some way and expect it to function similarly if other tables are provided. But it might be even more accurate to say it's like CSS, where CSS and the DOM are so closely linked that (at least from the perspective of CSS, if not HTML elements specifically) there is no interop layer; CSS is meant to apply to an HTML document, so it's designed with that in mind and the interop layers are for the most part nonexistent.
So, in that respect Realtalk is sort of like CSS (with more programmability, I think) for this environment of sheets of paper that support input and output. Very cool.
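If the CSS analogy holds, a crude mental model is a set of declarative rules continuously matched against facts about the pages in the room, the way CSS selectors are matched against the DOM. A toy version in Lua (invented names and structure, not real Realtalk syntax):

    -- facts about pages currently on the table
    local facts = {
      { page = "p1", kind = "number", value = 7 },
      { page = "p2", kind = "map" },
    }

    -- declarative rules: a matcher plus an action, like selector + style
    local rules = {
      { match = function(f) return f.kind == "number" end,
        act   = function(f) print("project a halo on", f.page) end },
    }

    -- the "engine" re-evaluates every rule against every fact
    for _, f in ipairs(facts) do
      for _, r in ipairs(rules) do
        if r.match(f) then r.act(f) end
      end
    end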
At least when I visited (which was about 6 months ago, I believe?), it was a fairly basic set of extensions on top of Lua. As in, they've added a couple keywords that I believe they just RegExp for (it's pretty simple to confuse the language parser); the language itself seems to not be the focus (yet?) of the project. There's a guess at how that rewriting might work sketched below.
Again, I only used it once (and I made a Dynamicland compiler in Dynamicland! Video here: https://twitter.com/i/web/status/963497112284512256 ), but if I recall correctly, the main way to "share functions" was to actually just load someone's code and copy it. Partially because the language/editing seems very new still, and perhaps partially to encourage this sort of "real world sharing" vs. virtual sharing. I recall just "forking" files a lot and making new copies of things (kind of like the "Old" folders I used to make before version control).
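For what it's worth, here's a guess at how a RegExp'd-in keyword could work: preprocess the source, rewriting the keyword into a plain function call, then load the result as ordinary Lua. The keyword, pattern, and function here are all invented; I don't know their actual implementation:

    -- a made-up "Wish" keyword, backed by a plain Lua function
    local function wish(text) print("WISH:", text) end

    local src = [[Wish "this page is highlighted"]]

    -- naive rewrite: Wish "..."  ->  wish("...")
    -- (naive enough that odd input would confuse it, which matches how
    -- easy it apparently is to confuse the real parser)
    local translated = src:gsub('Wish%s+(%b"")', 'wish(%1)')

    assert(load(translated))()  -- prints: WISH: this page is highlighted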
"Light is not a device we charge up and carry around in our pockets. Imagine how much dimmer that world would be: people carrying flashlights, shooting small cones of light wherever they go. It would be a small, lonely, personal world, a world where we only get to see one thing at a time, a world where one of your hands is always full with an electronic gadget." Nicely said
I think it's more accurate to say light is not only a device we charge up and carry around in our pockets. I do in fact always carry a flashlight in my pocket and use it fairly regularly and many people use their smartphone as a makeshift flashlight from time to time even if they don't carry a dedicated flashlight. I imagine the extended analogy would likely hold: if Dynamicland type technology becomes common I expect it would co-exist with more traditional computing as well as other newer tech like VR/AR.
Sounds terrific. However, it feels like a lot of what they have achieved is marketing existing ideas that were already great, with improved packaging, but in a way that ultimately isn't practical.
Because truly plug-and-play components are an awesome idea. However they are not a new idea at all. As far as I can tell the real reason they are not more popular is because us programmers are worried we will be accused of being users if we use tools that are interactive. See Visual Basic 6 (which was an amazing system). Of course programmers do not realize this psychological issue exists and will not admit it is a possibility.
But in the context of Bret Victor's lab with the projectors and little pieces of paper, that puts plug and play components in a different category that makes them more palatable.
Also, projector-based AR is much more convenient for people to demo than HMDs, but also not very practical for widespread deployment. But AR is amazing in general, and this gives them a friction-free way to demo its power with components.
But I think that multiplayer, real-time, collaborative, interactive, component-based AR programming should definitely be a more common thing. It combines the advantages of all of those techniques. I also think, though, that all of those things are useful in lesser combinations. I believe a big part of the reason at least some of those things are not used more, or more effectively, is cultural or psychological rather than technical or practical.
The thing is, the more you take advantage of components and interactivity, the less you use the complex color-coded text. That means programmers can spend a significant amount of time snapping together components and configuring them. Unfortunately, programmers are not able to recognize this as programming. We have a feeling that the more we do it, the more we may lose our special incanter status and be classified as users.
> us programmers are worried we will be accused of being users if we use tools that are interactive. [...] Of course programmers do not realize this psychological issue exists and will not admit it is a possibility.
It would be nice if you explained what you even mean by this instead of just claiming that programmers who don't agree are in denial.
What programmer is embarrassed to also be a user? And programmers use interactive tools all the time. REPLs, debuggers, editors, GUI design tools, and live-reloading web servers all qualify in my book.
Also keep in mind that when you stack abstractions on top of each other, the whole stack gets more brittle. So if you want me to use your better idea, it shouldn't abstract over the thing you're trying to replace.
To give an example: if you want to replace hierarchical filesystems you shouldn't build a thing that hides them from me, you should build a thing that doesn't use them at all. (This isn't a hard rule, sometimes exceptions are worth the trade-off.)
Understandably, this means changing things is a LOT more work than you're probably prepared for. Don't assume that we're only stuck in the past for bad reasons.
if you need any example of this type of behavior, just look at how "normal programmers" view labview programmers. most people completely turn up their noses at labview "users" for not being real programmers and engineers (don't tell me they don't, because i experience this constantly). in addition, labview's capability is often misunderstood by people who hide behind a wall of self-imposed bias. many times people are actually unaware of what is possible with labview, simply because they don't care to know, because it's not the real, hardcore, in-the-weeds type of stuff they were taught in school. meanwhile, people using labview to program windows, real-time high-performance, real-time embedded linux, and fpga systems (including cameras, sdr, etc.), all with the same language, are able to accomplish quite a lot. i fully concede that labview has many downfalls, but my criticism is based upon actual use of the language and tools, not on some uninformed bias. because of that, my criticisms don't necessarily overlap with those of the uninformed. and this is all relevant because when i tell people i use labview, you can just see any interest in my programming work leave their face, as i've become a simple tool user in their eyes.
this is a specific example regarding labview, but these types of biases exist everywhere, with engineers blinding themselves to what is possible because of their assumptions about what used to be possible.
I wouldn't want to write a long letter on a piece of paper, never mind software. It's not because I'm addicted to a fake status symbol; it's because "reality" in the form of solid objects has limitations that software doesn't, e.g. a delete option. And the rest of vim. At some point you have to encode complex rules, and having a slider framework linked to moving a rock in front of a camera isn't going to help you write a compiler.
Can I build a Dynamicland clone in my area? That'd be excellent. For all the press this thing gets (from time to time), I still see neither designs, plans, nor code that I can use. Is this because Bret Victor shares ideas and expects that if the world finds them valuable, they'll implement their own? Or is there another motivation to hold this close?
Maybe someone has to build an Open competitor so that we can address problems mentioned by other commenters. Maybe I'll do that here in my hometown and share it with everyone.
Shut Dynamicland down and focus all of Bret's and the rest of the team's energy on building these collab interaction ideas into AR.
Dynamicland has two components: the paper-and-projector idea, which is bad, and the interactive OS and systems, which are fantastic. Replace the paper and projector with AR glasses and you have a winner.
AR allows exactly what you want - real time dynamic collab 3d interactions. Paper does not. It's 2D, flat, limited, with occlusion. You've just taken 2D screens and reversed the light source.
Dynamicland is a sunk cost. Drop it fast. Move to AR.
I was thinking the same thing. Does this technology offer anything more than AR? It seems the answer is no, unless you really don't want any glasses on your face.
Bret has a thing about actually holding what we are interacting with.
He believes that the sense of touch, and how our fingers evolved, isn't being used by any computing medium yet.
Further, as the article mentions, immersing ourselves in total virtual worlds would leave our bodies immobile.
Finger tracking and finger haptics both have good working prototypes in AR.
I'm talking above about full-body, standalone/wireless headset AR (like Magic Leap), not "point your phone camera at a QR card" AR. Full-body AR is active and immersive. You can jump, run, turn around, and it tracks and models the 3D space around you in real time. It really is something special, and Bret's talents in dynamic collaboration are exactly what is needed in this new tech field.
Instead he's huddled under projectors pinning paper QR codes to chipboard like it's 1980 when he could and should be pushing the frontier of interaction and design where his work can make a huge difference.
But I'm not talking about VR, I'm talking about AR, which is basically enhancing the real world with digital imaging. This is the same as what the room is doing with projectors. But AR has more benefits, such as being mobile, projecting anywhere you look, etc. You can QR-code any object you want.
Essentially, he asserts that computing (or computer science even) isn't a real field (like biology or physics). Most people can't name the early computing pioneers whose work they build on (not true in Physics or Biology, we celebrate the pioneers) nor are they familiar with the work that was done in the Xerox PARC days.
It's gotten so bad that a lot of computer science research just assumes that what we have now is what's going to remain forever. In the 70s, all kinds of interesting ideas and experiments were tried:
hardware-implemented VMs, direct-manipulation programming, OS-free computing environments, highly reconfigurable computing a la FPGAs, and hundreds more.
Nowadays, we think that creating a custom ASIC to run machine learning algorithms quicker is innovative and novel.
All this to say that many of the ideas in Dynamicland aren't new. They're rooted in ideas that are decades old. If you look at Bret's papers he uses as references frequently, you'll notice how many of them are over 10 years old - http://worrydream.com/refs/
my jaw literally dropped at 3:12. https://youtu.be/laApNiNpnvI?t=192
I still can't figure out how they managed to get the camera quality/resolution in '91. I'm guessing that it's just an image rather than conversion to text if it's a paragraph.
[edit] What happens is a copy and paste from a book on the desk to the computer clipboard, where the opposite corners of the physical screenshot are defined by two fingers, creating a rectangular screenshot area.
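The rectangle part, at least, is simple geometry. A sketch of the idea (hypothetical helper, obviously not their '91 code):

    -- two fingertips give opposite corners; normalize them into a crop
    -- rectangle to cut out of the camera image
    local function cropRect(x1, y1, x2, y2)
      return {
        x = math.min(x1, x2),
        y = math.min(y1, y2),
        w = math.abs(x2 - x1),
        h = math.abs(y2 - y1),
      }
    end

    local r = cropRect(320, 80, 120, 240)
    print(r.x, r.y, r.w, r.h)  -- 120  80  200  160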
This seems to be the next Xerox PARC! I tried to find some explanatory videos but wasn't successful. Does anyone have some links to this highly intriguing experiment?
Physical computing sounds like an incredibly limiting environment, like using FrontPage rather than HTML, CSS and JavaScript. How do you do anything abstract on a platform like this? How do you integrate any two arbitrary pieces of software? How do you apply basic programming practices such as functional (as opposed to copy/paste) reuse? Or do you end up modifying code the 95% of the time the abstraction hinders rather than helps?
I really hope this isn't how we teach programming to the next generation, because it will severely limit their understanding of what makes programming great.
I wish I could work with people who are as interdisciplinary as this team. As an interdisciplinarian myself (psychology, business, computer science and game design), I need a place to belong and shine creatively.
The Sony Xperia Touch ought to be able to do this. That's a nice projector-based touch system.
Possible application: interactive restaurant menus. Some places have tried handing out tablets, but you have to have someone hand them out and retrieve them. The system has a camera, so have it recognize open space on the table and people in seats. Present people with menus projected on unused table space. Plus you can offer customers games while they wait.
This looks amazing. Such a refreshingly different take on things, and such a beautiful -- and realized! -- positive vision for how we might interact with technology and each other. Brilliant.
I’m not understanding how this works. What actions let you create a new table in Postgres or drop an index? Are there ways to install new software like pip?
As far as I understand Dynamicland is not a general purpose OS that tries to do everything. If you want to manage PostgreSQL, the shell is a pretty good way to do that.
Dynamicland looks like a real world IDE, where a lot of the interface consists of physical objects.
It's more like a Mathematica notebook than a bash shell.
> At Dynamicland computing is social like cooking is social. It’s also physical like cooking is physical. You’re not seated in front of a single-person screen, but walking around an open space, using a range of tools.
Ya lost me. I hate cooking specifically because of the physicality, of the different tools, etc. If I liked cooking and all that shit, I'd be a chef.
This works great when a small, gifted and creative community has tight controls on the process.
But how will it work when non-creative people take the technology and make it bland, and infuse it with ads, because that is the only viable business model that tech seems to be able to come up with?
This is always a risk with inventing something new. Their FAQ hints at how they plan to mitigate some of the negative effects - https://dynamicland.org/faq/
They want to mirror the Carnegie library model as much as possible. They're also non-profit and not interested in building a product that can be purchased (instead they think of this technology as infrastructure available to almost everyone, like the internet or running water).
Does dynamicland have any kind of permissions / access system, also with regards to pieces of the room, or the whole room, or is root access to the system also controlled physically, like a conference room projector?
No, on purpose. A single person can crash the entire system (it reboots quickly). A single person can spam the entire facility. This is, at least at the moment, intentional.
I don't see a big difference from just interacting virtually, other than lots of clutter; and as miki123211 said, accessibility seems like it would be quite an issue, which is very important to me.
Really? Thin red text on a white background? I'm sorry, I will not read it. Is this the purpose of this site? What's the sense in writing something normal people cannot read?
And yet it gets to the 1st page on HN, no one complains, and my comment will get a huuuge number of negative points, I know, as usual when someone posts a comment about this kind of bad design on a page whose author people worship.
I liked the red text. It's a very well designed layout, and the color choice - while arguably questionable - gives it personality. Not every site needs to look the same.
The fact that it has a lot of upvotes shows that apparently many people don't have issues with the presentation.
Anyway, my browser has a "reader" mode that lets me make any website readable with a single click, so I've stopped complaining about hard-to-read articles. Better to focus on the substance of the article...
It's nice and all, but this is a nightmare for accessibility. Not everyone can read paper or even move around, and with this design, coding around those limitations is not easy. In the current model, everything could be done 100% accessibly; why it isn't is a completely different problem. When everything is inherently physical, though, accessibility is not just about the code; it involves many more parts, such as automated devices to move things around. Considering how expensive devices made for accessibility are (a braille display is usually well over $1000), that would be a huge setback.

I think the beautiful thing about computers and technology is that you're not tied to any physical medium and that information can flow freely between devices that can represent it in different ways. There can be ten programmers working on a codebase, where one person uses a normal Windows box with a lot of GUI editors, another uses Linux in text mode with vim, a third has a Mac Mini with a screen reader and no monitor attached (my also-blind friend programs like that), and a fourth uses switches, eye-movement sensors or voice-recognition technology because they're physically disabled (see https://youtu.be/YRyYIIFKsdU?t=501). In other words, when everything is purely digital, you can consume and manipulate information in any form you like and in any form the computer can work with. It can be on a smartphone screen with your finger, on a TV with a remote, through a voice-controlled speaker or a laptop with a keyboard; the choice is yours. The information is purely digital and not tied to a particular medium.

However, in Dynamicland, the information is mostly physical, and it's the computer that needs to process it, not the other way around. That makes consuming the information through a different medium than it was originally presented in very difficult. It would be possible for a computer to describe a Dynamicland environment for the blind, but it wouldn't be easy for a blind person to write the code on paper. In a normal computer environment, such a person can just use a different tool (i.e. a screen reader instead of a display) and work on the level of their sighted peers. A Dynamicland environment makes such things impossible.
I recognize the limited value of Dynamicland as an exercise environment, maybe like Scratch. Even in that form, special considerations would need to be put in place so that disabled people have an alternative way of doing the same exercises.
Dynamicland makes a lot of other things, not related to accessibility, harder too. For example, how do you deal with larger codebases or distributed teams? How do you quickly collaborate on one piece of paper while being in two different places? Very simple to do for a Google doc. Almost impossible to do here.
The idea is kind of interesting, but it would only have been worth it in a pre-internet age. In 2018, people expect real-time collaboration with people in different parts of the world, and with Dynamicland, that's not going to happen.
I wrote to Dynamicland's email address about this months ago, and even re-sent the email, but never got an answer.
I suppose it's inevitable that if this vision of the future takes off, we'll be second-class participants. But I guess it's too much to ask the majority to not work in a way that would be great for them, just because it's a new challenge for a few of us. We'll figure it out.
I think even sighted people will realize it's not great at all once they actually start doing it. It kind of would have been, if it had been introduced in the 80s, when there was almost no internet (not for home users, anyway) and laptops were mostly in the early-adopter phase. This basically undoes the progress we've made during the last two decades. The internet has allowed us to collaborate remotely, and smartphones have enabled us to take the internet with us anywhere we go. Dynamicland makes those things almost impossible to achieve.
Similar accessibility problems already exist with the current interface of keyboard, mouse, and screen. If you have a physical disability that makes it impossible for you to type or use a mouse, or you are blind or colorblind, you require specialized software or hardware.
There is no reason why Dynamicland can't similarly have specialized hardware and software for people with various disabilities.
Plus, a more natural/human interface might be more _accessible_.
No, not true. Right now, information is kept in purely digital form, and the screen and keyboard are just ways to represent or manipulate it. The information is digital, so you can also represent and manipulate it using speech or switches. I'm writing this comment with a real keyboard, but my screen is completely dark; I use a screen reader that reads what's on the screen. That's only possible because your comment and the post itself are digital, stored as ones and zeros, and the screen reader can interpret them. In short, the digital text is the main representation of the post, while the letters on the screen or the synthesized speech are merely derivations.

With Dynamicland, the information is represented on paper, and paper is the main medium of information. That means that if you want to write code, you can't write it using a keyboard or speech or switches or eye-movement sensors or a terminal connected through a modem that dials up a computer somewhere; you need to be there, hold a pen, and write it on the paper. Paper is the medium that really stores the information, and it's not easy to quickly add code to paper. Of course, I could carry a portable printer that would accept a sheet of paper, let me make alterations using the familiar keyboard/screen-reader interface, and print them out, but then the collaborativeness becomes much harder.
In other words, paper should be a representation a user might choose, not the main form of storage and manipulation. The same goes for other physical media, like the colored poker chips and pebbles mentioned in the original post.
One of the goals of Dynamicland is to foster the creation of new representations that take advantage of + live across more sensory modalities. Bret calls them modes of representation, but there are similar phrases in learning & psychology literature.
Most representations are haphazardly created (math notation, periodic table of elements, etc) by (often) singular experimentation. Representations are poorly researched and understood (there's no branch of science dedicated to them). If representations were studied more closely, we could get much better at deliberately creating representations that are more humane. When the scientific method is applied, we can then apply the engineering method to create new things from those principles. Bret's talk is a great, accessible overview of these ideas from his context (https://www.youtube.com/watch?v=agOdP2Bmieg).
To vastly oversimplify things, one way to improve accessibility is to parallelize representations that live in just 1 or 2 sensory modalities to all of our sensory modalities. This way, math notation could be made accessible to & equally powerful to the blind (as it is to those who can see).
I empathize (a little; it would be naive for me to assume I can 100% empathize) with your reaction along the axis of accessibility. What Dynamicland looks like right now isn't what it's going to look like later. Right now, you may see Dynamicland as text boxes, just distributed around a room. In 50 or 100 years, there may be no text AT ALL, and you can use the senses available to you to interact with information.
To paint another perspective, I would argue that current computing environments are VERY inaccessible for many people. Laptops and desktops only tap into our visual and symbolic channels, with some tactile feedback (but quite minimal; we know where "X" is on the keyboard b/c of spatial reasoning, not b/c the X key gives us specific/unique tactile feedback). Humans who thrive in spatial and kinesthetic channels are debilitated on a modern computer.
> Humans who thrive in spatial and kinesthetic channels are debilitated on a modern computer.
Interesting; this isn't something I know anything about. Is it not feasible to develop alternative hardware and software that adapt our symbolic notations for these people, while still enabling those of us who practically require symbolic notations to collaborate?
In general, we're just not great at processing symbolic representations. We haven't evolved to do so (unlike spatial, visual, auditory, etc senses for matching representations) and have only manipulated symbols for a few hundred years. I mean true abstract symbols, not hieroglyphics or other old symbols that still mapped to language / were still quite visual.
I believe it could be more compelling to figure out how to re-represent things instead of just scaling up symbolic representations. That helps make it more powerful and accessible, for everyone.
A good example here that Bret mentions in his talk is Roman numerals -> Arabic numerals. Or even going from paragraphs of written text to algebraic notation for equations (y = x^2 + 1 used to be PAGES of written text). People used to think only super-educated elite mathematicians could grasp algebraic ideas. But after they were re-represented, we discovered almost all 8-year-olds could grasp the ideas!
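To make the notation point concrete: in Arabic numerals, addition is a column-by-column algorithm a child can run, while with Roman numerals you effectively have to convert to something positional first. A quick Lua illustration (my own toy code, not from the talk):

    -- symbol values; the subtractive rule handles IV = 4, IX = 9, etc.
    local values = { I = 1, V = 5, X = 10, L = 50, C = 100, D = 500, M = 1000 }

    local function romanToNumber(s)
      local total = 0
      for i = 1, #s do
        local v = values[s:sub(i, i)]
        local nxt = values[s:sub(i + 1, i + 1)]
        if nxt and v < nxt then total = total - v else total = total + v end
      end
      return total
    end

    -- adding XIV and IX means converting first: 14 + 9 = 23
    print(romanToNumber("XIV") + romanToNumber("IX"))  -- 23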
Actually, if the tools become real tools (instead of a page representing a layer, some 3D-printed thing with a specific look and feel), this could, theoretically, become accessible for the blind. People with movement impairments will have it much harder, though.