Similar accessibility problems already exist with the current interface of keyboard, mouse, and screen. If you have a physical disability that makes it impossible to type or use a mouse, or if you are blind or colorblind, you need specialized software or hardware.
There is no reason why Dynamicland can't similarly have specialized hardware and software for people with various disabilities.
Plus, a more natural/human interface might be more _accessible_.
No, not true. Today, information is kept in purely digital form, and the screen and keyboard are just ways to represent or manipulate it. Because the information is digital, you can also represent and manipulate it using speech or switches. I'm writing this comment with a real keyboard, but my screen is completely dark: I use a screen reader that reads what's on the screen. That's only possible because your comment and the post itself are digital, stored as ones and zeros, and the screen reader can interpret them. In short, the digital text is the main representation of the post, while the letters on the screen or the synthesized speech are merely derivations.

With Dynamicland, the information is represented on paper, and paper is the main medium of information. That means that if you want to write code, you can't write it using a keyboard or speech or switches or eye-movement sensors or a terminal connected through a modem that dials up a computer somewhere; you need to be there, hold a pen, and write it on the paper. Paper is the medium that really stores the information, and it's not easy to quickly add code to paper. Of course, I could carry a portable printer that would accept a sheet of paper, let me make alterations using the familiar keyboard/screen-reader interface, and print them out, but then the collaboration becomes much harder.
In other words, paper should be a representation a user might choose, not the main form of storage and manipulation. Same for other physical media like the colored poker chips and pebbles mentioned in the original post.
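To make the distinction concrete, here's a toy sketch of what I mean by "main representation" versus "derivations". All the names are hypothetical, not any real screen-reader API; the point is just that every view is computed from the same canonical digital text:

```python
# One canonical digital representation, many derived views.
# Hypothetical names for illustration, not a real screen-reader API.

class Document:
    """The canonical representation: plain digital text."""
    def __init__(self, text: str):
        self.text = text

def render_to_screen(doc: Document) -> str:
    """A visual derivation: what a sighted user sees."""
    return doc.text

def render_to_speech(doc: Document) -> str:
    """An auditory derivation: what a screen reader would utter."""
    return f"(synthesized speech) {doc.text}"

def render_to_braille_cells(doc: Document) -> list[str]:
    """A tactile derivation: one cell per character (simplified)."""
    return list(doc.text)

doc = Document("Paper should be a view, not the source of truth.")
print(render_to_screen(doc))
print(render_to_speech(doc))
```

With paper as the primary medium, there is no such canonical digital layer to derive from, which is exactly the problem.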
One of the goals of Dynamicland is to foster the creation of new representations that take advantage of, and live across, more sensory modalities. Bret calls them modes of representation, but similar phrases appear in the learning & psychology literature.
Most representations (math notation, the periodic table of elements, etc.) were created haphazardly, often by a single person's experimentation. Representations are poorly researched and understood; there's no branch of science dedicated to them. If representations were studied more closely, we could get much better at deliberately creating representations that are more humane. Once the scientific method is applied, we can then apply the engineering method to build new things from those principles. Bret's talk is a great, accessible overview of these ideas from his perspective (https://www.youtube.com/watch?v=agOdP2Bmieg).
To vastly oversimplify things, one way to improve accessibility is to parallelize representations that currently live in just one or two sensory modalities across all of our sensory modalities. This way, math notation could be made as accessible and as powerful for the blind as it is for those who can see.
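As a toy illustration of what "parallelizing" could mean (a purely hypothetical structure, not anything Dynamicland actually does), the same expression tree can be rendered into different modalities, with neither rendering being more "real" than the other:

```python
# One expression tree, two renderings: visual notation and spoken
# English. Hypothetical structure for illustration only.

from dataclasses import dataclass

@dataclass
class Power:
    base: str
    exponent: int

@dataclass
class Sum:
    left: object
    right: object

def to_visual(node) -> str:
    """Render as conventional math notation."""
    if isinstance(node, Power):
        return f"{node.base}^{node.exponent}"
    if isinstance(node, Sum):
        return f"{to_visual(node.left)} + {to_visual(node.right)}"
    return str(node)

def to_spoken(node) -> str:
    """Render as English that a speech synthesizer could read aloud."""
    if isinstance(node, Power):
        return f"{node.base} to the power of {node.exponent}"
    if isinstance(node, Sum):
        return f"the sum of {to_spoken(node.left)} and {to_spoken(node.right)}"
    return str(node)

expr = Sum(Power("x", 2), 1)  # the expression x^2 + 1
print(to_visual(expr))        # x^2 + 1
print(to_spoken(expr))        # the sum of x to the power of 2 and 1
```

One could imagine tactile or spatial renderings hanging off the same tree.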
I empathize (a little; it would be naive of me to assume I can 100% empathize) with your reaction along the axis of accessibility. What Dynamicland looks like right now isn't what it's going to look like later. Right now, you may see Dynamicland as text boxes, just distributed around a room. In 50 or 100 years, there may be no text AT ALL, and you could use whatever senses are available to you to interact with information.
To paint another perspective, I would argue that current computing environments are VERY inaccessible for many people. Laptops and desktops only tap into our visual and symbolic channels, with some tactile feedback (but quite minimal; we know where "X" is on the keyboard b/c of spatial reasoning, not b/c the X key gives unique tactile feedback). Humans who thrive in spatial and kinesthetic channels are debilitated on a modern computer.
> Humans who thrive in spatial and kinesthetic channels are debilitated on a modern computer.
Interesting; this isn't something I know anything about. Is it not feasible to develop alternative hardware and software that adapt our symbolic notations for these people, while still enabling those of us who practically require symbolic notations to collaborate?
In general, we're just not great at processing symbolic representations. We haven't evolved to do so (unlike our spatial, visual, auditory, and other senses, which have matching representations) and have only manipulated symbols for a few hundred years. I mean truly abstract symbols, not hieroglyphics or other old symbols that still mapped to language / were still quite visual.
I believe it could be more compelling to figure out how to re-represent things instead of just scaling up symbolic representations. That helps make it more powerful and accessible, for everyone.
A good example here that Bret mentions in his talk is the shift from Roman numerals to Arabic numerals. Or the shift from paragraphs of written text to algebraic notation for equations (y = x^2 + 1 used to take PAGES of written text). People used to think only super-educated elite mathematicians could grasp algebraic ideas. But after the ideas were re-represented, we discovered almost all 8-year-olds could grasp them!
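You can even feel this shift in code. Here's a standard Roman-numeral decoder (nothing Dynamicland-specific): in positional notation, arithmetic is mechanical, while Roman numerals have to be decoded into it before you can compute at all:

```python
# Positional (Arabic) notation makes arithmetic mechanical; Roman
# numerals must first be decoded. A standard conversion routine.

ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50,
                "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Decode a Roman numeral, handling subtractive pairs like IV."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A smaller value before a larger one is subtracted (IV = 4).
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

# Adding two numbers means decoding both, computing, and (if you
# insist on Roman numerals) re-encoding the result afterwards.
print(roman_to_int("MCMXCIV") + roman_to_int("XXVIII"))  # 1994 + 28 = 2022
```

The representation you pick determines how much of the work the notation does for you.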
Actually, if the tools become real tools (instead of a page representing a layer, imagine some 3D-printed thing with a specific look and feel), this could, theoretically, become accessible to the blind. People with motor impairments will have a much harder time, though.