Dynamicland (dynamicland.org)
446 points by Pulcinella on Dec 19, 2017 | 117 comments



Super exciting that Bret Victor has publicly launched this project! On a related note, a professor at MIT, Hiroshi Ishii, did a lot of work on tangible user interfaces around 2003-2005; you can see more of that work here: http://tangible.media.mit.edu/projects/

The visions/goals are definitely motivated differently, but it's interesting to see more examples of adding tangible control to computing.


More overlap than you think!

In his talk - https://vimeo.com/115154289 - Bret has a footnote reference to the Radical Atoms group (Hiroshi's group).


Awesome! I forgot about that mention. You should see what that research group is doing now. "Radical Atoms" where the material is dynamic and embodies computation. Now they are applying it to food... super fascinating!


This looks very interesting, but the most confusing part for me is how code appears on these printed pages. Do you have to type it into a computer and print it out to run it, using physical space just to organise, not actually to develop? All the examples show these pieces of paper already being there, just moved and interacted with.

I'm very curious whether Dynamicland code can be created without touching a keyboard, actually printing new sheets just by moving and interacting with other sheets, making it truly self-hosting.

Also, their examples are very visual and quite complicated (car aerodynamics flow). Doing this from scratch might be quite challenging, unless the building blocks are very smart.

---

Sidenote: I'm surprised to learn that the code printed on these sheets is Lua-based. Lua does not have a lot of syntax, but it does have some, and I would think that any language meant to be created in physical space should aim for the most minimal syntax, almost none if possible. Lisp comes to mind, as I can imagine a physical representation of S-expressions much better than Lua code.


I'm not sure I understand why one would expect that any language that is meant to be created in physical space should aim for the most minimal syntax.

Regular folks who don't program in their daily life connect best with things they can read: presenting them with Lisp is a bit like presenting them with mathematical notation: even if those who are marginally familiar with it will say it's simple, it's exactly the kind of thing that's too difficult to be accessible to people who have _no_ familiarity with it at all. You actually want that verbosity of added syntax, because it lets people recognize something in what is inherently alien to them.


You don't want syntax because you don't want Magnetic Poetry kits, having to arrange very special "structure words" in very special places, and hope that "lines" make sense.

Don't think about (Lisp (notation)) -- think about the fact that the entire syntax is "this is enclosing box" (can easily be done with paper by putting smaller sheets on larger ones) and "this is the order of items within the box" (very natural to re-arrange these blocks). Lisp doesn't have lines, in that sense, and lines don't make a lot of sense in physical world.

Just to be clear, I'm not disagreeing about the fact that Lisp is a bit complicated to be first programming language _when writing text_. But its design is great for physical manipulation.


Awesome, genuinely novel -- really interested to see where this goes:

A primary design principle at Dynamicland is that all running code must be visible, physically printed on paper. Thus whenever a program is running, its source code is right there for anybody to see and modify. Likewise the operating system itself is implemented as pages of code, and members of the community constantly modify and improve it.

That said, the pages of code physically in Dynamicland are not in a git repository. The community organizes code spatially — laying it out on tables and walls, storing it in folders, binders, and bookshelves.

https://dynamicland.org/faq/


I'm reminded of an old bash.org quote:

    <erno> hm. I've lost a machine.. literally _lost_.
    it responds to ping, it works completely,
    I just can't figure out where in my apartment it is.


perhaps I'm missing something, but wouldn't large-scale adoption like they're projecting be really hard on the environment?

Dynamicland introduces a 1:1, persistent mapping between virtual and tangible. won't this kill a lot of trees?


Yeah I noticed this as well. It will work well as an integration into existing spaces, or some built specifically for this reason (as mentioned on the website).

The space and resource requirements definitely stifle it for ubiquitous application but hey, it's an early tech and it'll certainly improve in time.


Sounds intriguing, but after reading through the website for 5 minutes, I still have no idea how this is supposed to work.

From the pictures, it looks like they've set up some kind of system to translate certain physical movements into changes in the projected images. It seems like a non-technical user can only make superficial changes within a limited set of pre-defined behaviors. But from the description it sounds like there's a lot more to it than that...


Right now, each piece of paper in Dynamicland is a program. You put a piece of paper down and it "runs" - the metaphor breaks down at this point though. With a little bit of code, anything in Dynamicland can become "alive" and interactive.

For me, the impressive part of Dynamicland is how physical everything is. Want to see how something works? Find the paper that the code is printed on. Need help understanding some code? Bring somebody over to the wall, table, or couch where the code is. You can annotate the paper, draw diagrams on it, cut it, fold it, tear it, put it in a book to help others learn how it works. Want to edit some code? Point an "editor" token at the paper program you want to modify, then start changing the code.

Dynamicland isn't a room with a computer, it's a room that is a computer.
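As a rough mental model (purely illustrative, not how Realtalk is actually implemented), you can think of the "runs while visible" behavior as a loop that diffs the set of pages the camera currently sees:

    -- Hypothetical Lua sketch: pages start running when they appear in the
    -- camera's view and stop when they are taken away.
    local running = {}                      -- page id -> true while active

    -- stand-ins for the camera and the code store (both made up here)
    local function visible_pages() return { [17] = true, [42] = true } end
    local programs = {
      [17] = function() print("page 17: draw a clock") end,
      [42] = function() print("page 42: label the table") end,
    }

    local function tick()
      local seen = visible_pages()
      for id in pairs(seen) do
        if not running[id] and programs[id] then
          running[id] = true
          programs[id]()                    -- page placed: start its program
        end
      end
      for id in pairs(running) do
        if not seen[id] then
          running[id] = nil                 -- page removed: stop tracking it
          print("page " .. id .. " stopped")
        end
      end
    end

    tick()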


> Want to see how something works? Find the paper that the code is printed on.

> Want to edit some code? Point an "editor" token at the paper program you want to modify, then start changing the code.

If you edit a paper program, then won't the source on the paper be obsolete? Will that be indicated somehow? If I'm looking to see what a piece does, how can I tell if the source code on it can be trusted?


> If you edit a paper program, then won't the source on the paper be obsolete? Will that be indicated somehow?

When a program is changed, Realtalk will print a red line over the lines that have been modified or removed. It looks a lot like the output that you'll see from a "git diff" on a file.
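As a toy illustration of how a system could know which printed lines are stale (just a naive line-by-line comparison in Lua, not what Realtalk actually does):

    -- Compare the source as it was printed with the source as it is now;
    -- any line that differs or was removed gets flagged for a red overlay.
    local function stale_lines(printed, current)
      local flagged = {}
      for i, line in ipairs(printed) do
        if current[i] ~= line then
          flagged[#flagged + 1] = i
        end
      end
      return flagged
    end

    local printed = { 'print("hello")', 'print("world")' }
    local current = { 'print("hello")', 'print("there")' }
    for _, i in ipairs(stale_lines(printed, current)) do
      print("overlay red line on printed line " .. i)
    end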


> Realtalk will print a red line

You mean like, from the projector?


The projector is just a means of getting dynamic media. You could imagine future technologies involving e-ink paper that's as cheap as wood-pulp paper. What if we had e-ink embedded in paint? What if all physical objects could have computation embedded in them?


Yes, sorry. It will project a red line on the paper.


That's good, then. Having the code right there isn't very good if you can't trust it, but if it at least indicates what's not to be trusted, that's much better.


What happens when the school bully rips up my piece of paper?


To the down voters, it was a genuine question. If the programs are pieces of paper, either they are just toy problems or otherwise you care about losing the paper.


You've got to fight

for your right

to compute


> want to copy the code? Oh...


But... when does it run? How do you trigger it? How do you step through it with a debugger? How does one program call another program? This seems like a lousy metaphor.


I'm looking for more details as well, the idea fascinates me. Each program looks to be a code printout, so I'm not quite grasping how changing code works with just an "editor token".


The "editor token" is also a program that Realtalk recognizes. It's just a small program that says "I am an editor named X" - you then point the token at the code you want to edit and then use a text editor to make those changes. I've seen people use laptops, iPads, and other Realtalk programs to make the edits.

The important thing to keep in mind is that everything in Dynamicland is, well, dynamic. Right now, editor tokens are just normal Realtalk programs printed on paper, but in the future they might be a different physical object. Right now you edit the code using a text editor, but in the future you might use a pen to write code directly on paper, or perhaps you'd re-arrange physical objects that represented blocks of code, etc ...


The printed code is sort of just documentation -- the system isn't, for example, scanning it visually to decide what to execute. Rather, each card encodes a unique ID, and the running code lives on the network at that ID, so you can point your editor at the card, read the ID, and see and edit the code at that ID. The printout is just the code that was at the ID at the time of printing.
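A tiny Lua sketch of that indirection (the names and the table standing in for the code store are invented for illustration; I don't know how Realtalk actually stores things):

    -- Running a card executes whatever is stored at its ID *now*,
    -- not whatever happens to be printed on the paper.
    local code_store = {
      ["card-0x2f91"] = 'print("hello from the card")',   -- current source
    }

    local function edit(card_id, new_source)
      code_store[card_id] = new_source      -- the printout is now out of date
    end

    local function run(card_id)
      -- load() on Lua 5.2+ (use loadstring on 5.1)
      local chunk = assert(load(code_store[card_id]))
      chunk()
    end

    run("card-0x2f91")
    edit("card-0x2f91", 'print("edited after printing")')
    run("card-0x2f91")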


Would be cool if they projected the code onto the card rather than printing it—that way it doesn't get stale, and you get more re-use.


Ah OK, I had to scan the page a few times to realize it wasn't a visually interpreted form of Scratch.


I think the color dots are some hash/key to a program. Whenever the camera sees the dots in its FOV, the system executes the program. The color code also seems to serve as location tracking, so the program's output is projected onto the physical paper.
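If that's right, decoding might look roughly like this (pure guesswork about the encoding, just to make the idea concrete in Lua):

    -- Treat the corner dots as digits in base 4, one digit per color;
    -- the dot positions themselves give the page's location and rotation.
    local COLOR_DIGIT = { red = 0, green = 1, blue = 2, yellow = 3 }

    local function decode_id(dots)          -- dots: list of color names
      local id = 0
      for _, color in ipairs(dots) do
        id = id * 4 + COLOR_DIGIT[color]
      end
      return id
    end

    print(decode_id({ "red", "blue", "green", "yellow" }))  --> 39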


I think you can make some more sense of this if you read the piece, and watch the video on laser socks [0]. That was a game designed at the CDG lab that is the precursor to Dynamicland.

[0]: http://glench.com/LaserSocks/



I've spent a decent amount of time looking at Dynamicland-related things, and it seems consistently grandiose + vague. Maybe it's just that the material they're putting out is more for investors than other hackers. Not sure. The language rubs me the wrong way though:

  Dynamicland is a communal computer,
  designed for agency, not apps,
  where people can think like whole humans.

  It's the next step in our mission to
  incubate a humane dynamic medium
  whose full power is accessible to all people.
Just take "designed for agency, not apps" for example. It's a nice rhetorical technique to contrast apps with agency like this, suggesting that apps somehow deprive one of agency and that Dynamicland can restore it—but it feels pretty disingenuous if you think about it for a minute. (There's a section on the page also titled "Agency, not apps", which brings me no closer to feeling that the use of 'agency' is justified.) In actuality each platform (traditional computing, Dynamicland) has tradeoffs in terms of what one's agency is applicable toward. I can see no reason to think Dynamicland is an increase in agency over traditional computing rather than a lateral (and perhaps complementary) change. I agree it's nice to use your body rather than just fingers, but unless I can do the same things with the 'alternative medium' (and there is no material I've found suggesting a level of generality even in the same realm as traditional computing), I'll complement my computer use with other physical activities and face the reality of fingers for input until a true alternative comes along.

I agree the problem they are addressing is epic, and their language is suitable for describing it, but from what I'm able to glean about the project, the solution so far is on the level of 'neat,' and maybe important for introducing new users to programming—but, if that's what it's for, just say it! Instead we see that the aim of the project is to 'reinvent computing for the 21st century'. So am I going to do my taxes with this? Am I going to use it to write novels? To design 3D models? To conduct research? To build artificial intelligence? To discover new medicines? To call an Uber? Is there any reason to think it might attain the level of generality where it could at any point be an actual alternative to contemporary computing? Or are we supposed to take this on faith because of the people involved? How could this compete with similar systems designed in AR? It seems to me like there's more 'agency' in AR since a system like this could be replicated within it, but with AR we'd likely have a number of alternative 'physical computing platforms' like it (of course not soon, but they're talking about a timeline of at least 50 years for this project).

It's definitely refreshing to be able to interact with 'real' objects ('physical' would be less rhetoric-laden), but I feel like the novelty and suggestive language here may tempt readers into forgetting how much power was gained through computation specifically because it didn't require fiddling with physical objects. It's a big deal, and I'm pretty sure that a legitimate alternative computational medium will require deep theoretical breakthroughs, of which I've seen no suggestion here.


There may well be something of value here, but if there is, this website doesn't communicate it. I'm strongly repulsed by the way this project is presented. It reads like a quasi-scam hatched by a pretentious art student. I wish they'd spared a paragraph to say what it actually is.


> write novels?

That would be one of the most obvious things to do with such a system, as writing a novel used to be a very spatial task, and current text editors are not good at helping writers organize their novels, identify patterns, etc.

> AR

This is AR, but with projectors, which is not novel in itself: it has been explored in labs, during prototyping events like Museomix, and by digital artists for several decades now. HP even tried to commoditize the technique with the Sprout all-in-one computer.

You are right that the next step is most probably to do without projectors and replace them with AR glasses.

But the manipulation techniques and the collaborative aspects need to be developed and iterated on, with or without AR glasses, and my understanding is that's the goal here.


Are there some example applications created at Dynamicland that we can look at? i.e. what has actually been done with this platform to date? (Not sure applications and platform are the correct terms to use, but this concept is totally new to me)



Fascinating stuff there!!


Realtalk is the most exciting computer system that I've used in a long time. The wonder and joy of using Realtalk is very much like the wonder and joy I had at using Linux and Genera for the first time.

If you ever get a chance to visit Dynamicland, do not pass it up.


Could you elaborate on how Realtalk works? There's a lot of information on Dynamicland, but not much on Realtalk. The gist is that it's like OOP, but physical. But I'm curious as to the exact details of how the printer example (https://dynamicland.org/faq/) might work.


This is a cool project. But their opinion on the open source status of their systems seems pretty entitled, arrogant even. There are a lot of questions left unanswered, and the implications of their text are disingenuous at best: claiming their system is "beyond" open source, yet I can't find any way of experimenting with this system myself. They fail all the tests of truly free open source software [0], let alone even the sniff test of being merely open source (can I read it?).

This software is no more liberating than a "free" Windows product given and taught to kids in the hope of encouraging vendor lock-in. They use words like "lift all people" or "agency not apps": nah, only those people who live nearby in one of the most expensive places in the world. The system is confined to their magic building. Will they charge fees? Will only those wealthy enough to travel and live near one be able to experience it? Can anyone set up these buildings (around 2040, no less), or is there a franchise fee or agreement required? Where is the code (scanners are a thing; what about the bootstrap code running on the servers)?

They claim to be basing the model off of the internet: decentralized, agreed-upon protocols. How can I talk to their system if nothing is published? If there is no code to read? How in the world is that an open, decentralized protocol? How can people iterate - provide feedback - when they don't even have RFCs or any published material at all? They are promoting exactly the walled-garden behavior that is the antithesis of the open model of the internet.

The whole thing is suspect. I realize they are a non-profit, great. But that doesn't mean they support the ideals of a public commons like a free and open source project. It doesn't mean donations to this project will truly be helpful to everyone. Yet they are "beyond" open source. It reeks of arrogance and entitlement to claim such things. It rubs me the wrong way. Especially because it seems so cool.

[0] For those who forget: (0) to run the program, (1) to study and change the program in source code form, (2) to redistribute exact copies, and (3) to distribute modified versions.


> Yet I can't find any way of experimenting with this system myself.

Given that it's a physical space... you go to the physical space. You can pick up the source code for the entire thing there - it's self-hosting, so you can read it like any other object in Dynamicland. You can also copy it physically, using a scanner.[0] Actually finding a suitable physical space and building a community might be the hardest part of replicating it.

It's also a very early project - the first Dynamicland space opens next year. I'm sure we'll learn more about it, but the entire point of it is that it's not online - it's a physical space and a community meeting in real life, and if you want to learn about it and maybe try to replicate it, you'll have to go there.

[0] https://pbs.twimg.com/media/DJc2esBVwAAnj4X?format=jpg


I'll just quote myself, since you seemed to ignore these counterpoints to your post the first time around and failed to answer them:

> Will only those wealthy enough to travel and live near one be able to experience it? [...] Where is the code (scanners are a thing; what about the bootstrap code running on the servers)?

A project can't claim to be "beyond" open source by having printed pieces of paper (unless they are using a printing press, that code was digital at some point), especially while only being available to a handful of - by absolute standards - wealthy people.


One of the coolest things I've ever played with. It's hard to describe how antiquated engaging with computation through 5.5" or 15" rectangles feels once you've seen a whole building that is a computer.


I love the description of Dynamicland as “Hypercard in the World”[1].

[1] From the Laser Socks write-up — a great example of a game that came out of Dynamicland: http://glench.com/LaserSocks/


A small technical point: "Hypercard in the World" was one of the prototype systems that came before Realtalk, which is the operating system and protocol that runs Dynamicland.


Reminds me of a post on interaction design by Bret Victor (who ostensibly helped make the iPad):

http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesi...


You should be happy to hear that Bret Victor is behind Dynamicland https://dynamicland.org/about-us/


Something I really dislike about the way he presents his work is that he is intentionally really, really vague about what his contribution was to those various products. In his presentations he often shows a few slides of "I worked on this, this, and this", where the iPad and the watch get shown.

Like, what did you do? Design the home screen? Design the vision of the machine itself? It's ambiguity taken to its highest extreme.

I really respect his work and publications though, no doubt that he made important contributions to the products he shows on his slides.


See the details of his work at Apple here: http://worrydream.com/#!/Apple

Spoiler: he can't talk about or show any details for some number of years. From his notes there and his other research, it seems he would have been a proponent of skeuomorphic design at Apple, which lost out after 2011 in favor of flat buttons and icons.


The skeuomorphic design "battle" is a myth. Look at the iphone settings icon today: there are no gears in your phone. Skeuomorphism lives on, we just believe what The Submarine tells us.


This seems like an excellent toy, a great tool for education, and possibly a good UI for future end-user systems that mostly live in the cloud. But in no way something that could be used to build those systems.

Almost everything that's good and powerful about software comes precisely because it's non-physical. It's probably just a failure of imagination on my part, but I can't see a path where re-introducing a bunch of physical representations of virtual things makes things better for those trying to create, maintain, or understand complex systems.


Bret Victor has done numerous experiments trying to improve the way we visualise data and be more creative with it, combining the freedom of drawing on paper with the reproducibility of software.

I think what they are doing at Dynamicland is running experiments similar to Douglas Engelbart's back in the 60s, trying to find ways to bring more of our innate skills to bear on a computer than just typing at a keyboard.

So like the mouse, it might seem like a toy but it has the potential to be combined with pure programming.

Also, especially with neural networks that are learning how to code, we may be able to give a neural network more physical input and have it produce the program.

All this is looking far into the future, so they don't know what the final outcome is either, but they're experimenting to find what works.


Arguably then it’s not for you. I’m someone who comes from a background of the visual arts and always found the monolithic seeming walls of text that most programs come in very daunting until I found things like Processing where you can do things and have a quick, iterative visual approach to what you want to make.

I’m sure there were people who were against the concept of syntax highlighting when it began for similar reasons but I would hazard a guess that most people who code regularly find it invaluable.

I grant you that this probably won’t be useful for say, kernel programming, but it is also extremely new and an experiment. The worst thing we can do is ossify our ways of thinking and not be open to new and experimental ideas.


Since the programs are printed out and stored in filing cabinets, it'd be too bad if something cool happening at one Dynamicland instance couldn't be easily shared with another instance somewhere else. We know that computing would be better if it was shared in a physical space with real-world social circles. But not everybody has those: some people are disabled, or rural, or outcasts. The same issue comes up with experiments with local meshes and P2P gossip for new social networks.

Not as many people meet strangers in chat rooms or games these days, but it'd be sad if the baby of connecting people anywhere was thrown out with the bathwater of big social panopticons and glass screen prisons.


Why don't you just share it and print it as needed at other locations? The data is all readable.


I can see this being very useful for Software Engineers, although I agree with you not directly for building systems. Scrum boards, build statuses and project statistics are good candidates for this system.

I can also see it being useful as a very general _and_ powerful prototyping tool, one of the demos shows someone making an input controller from a physical spring and a colored piece of clay.

When it becomes so easy to integrate your environment with the computer, I can only begin to imagine what's possible.


Their FAQ seems to disagree with you:

Is Dynamicland open source?

Dynamicland shares many core values with the open source movement and in some ways goes beyond them.

A primary design principle at Dynamicland is that all running code must be visible, physically printed on paper. Thus whenever a program is running, its source code is right there for anybody to see and modify. Likewise the operating system itself is implemented as pages of code, and members of the community constantly modify and improve it.

That said, the pages of code physically in Dynamicland are not in a git repository. The community organizes code spatially — laying it out on tables and walls, storing it in folders, binders, and bookshelves.

https://dynamicland.org/faq/


The parent made a good point, and your comment is completely unrelated although it purports to be an argument against it.


Reminds me of the introduction to Structure and Interpretation of Computer Programs, which says nearly the same, that problems in computer science are unbounded by physical limitations (barring the hardware it's run on).


I tend to agree. Physical fetishism seems like a retreat from the core values of computing.


I think 'core values' is a question for whoever is using the personal computer.

Given that this comes from research directly connected to Alan Kay, he at least is well aware of the values of the people who pushed computers from simple calculating machines to things that we create with.

They also repeatedly refer to the research of Douglas Engelbart and his desire for computers to augment humans. So they are at least trying to bring forward ideas from a time when the core values of the personal computer were being considered.


I've failed to grasp it enough to explain it to a 5 yr old, but I think I love the idea. I've always cringed at giving a 10 yr old a laptop which simply becomes a conduit to flash-based gaming in the browser. Activity-based programming should be the foundation for years before they touch a PC imho. It's collaborative and fun - so it seems.


At least a laptop can become more, the opportunity to delve deeper in the system or learn how to code your own applications is always there if the curiosity should ever arise.

Tablets on the other hand... literally just consumption devices; even creative applications on them are locked up and siloed, unable to inter-operate.


Inspiring to see successful startup founders re-investing in the community:

    Andy Hertzfeld
    Mitch Kapor
    Patrick Collison
    Drew Houston
    Alex Payne
    Josh Elkes
    Carlin Wiegner
    Notion
    UCSF


One guest, after spending time at Dynamicland, held up his smartphone and shouted, “This thing is a prison!”

Holy crap. This is really, really cool. Actual magic.

But most importantly — programs are real. You touch them. You see them everywhere — they can only run when visible. You can change anything and see what happens. No black boxes.

I think I'm in love.

The computer of the future is not a product, but a place.


Maybe I've missed something. I'm not sure what's magical about someone shouting that their smartphone is a prison.

Do you see yourself using this as an alternative to your current computer/phone? For the things I use computers for, it seems that Dynamicland would not be applicable, so I'm having a hard time understanding why some are finding this so exciting.

(Don't get me wrong, I think it looks neat, and the problem it's addressing is important—but it does not appear to be an actual general-purpose alternative computing platform. And I've seen/read most everything put out related to the project if I'm not mistaken. Feel free to suggest me new things if someone thinks it has general-purpose computing capabilities.)


It is a general-purpose computing platform, with lots of utilities for programming in a physical space. All of the surface area becomes a display, and you can render using inches instead of pixels.
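To make the "inches instead of pixels" point concrete, here's a toy Lua conversion (the resolution and table size are numbers I made up, not Dynamicland's):

    -- If the projector covers a known physical area, drawing code can work
    -- in real-world units and convert to pixels only at the edge.
    local PROJECTOR_W_PX, TABLE_W_IN = 1920, 48   -- assumed: 1920 px across a 4 ft table
    local PX_PER_INCH = PROJECTOR_W_PX / TABLE_W_IN

    local function inches_to_px(x_in, y_in)
      return math.floor(x_in * PX_PER_INCH + 0.5),
             math.floor(y_in * PX_PER_INCH + 0.5)
    end

    print(inches_to_px(12, 6))   --> 480   240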

In its early stage there are still many utilities and performance improvements it needs before it can replace your MacBook pro, but it is tangibly exciting to actually work collaboratively rather than stare at a screen at the same time as someone else is staring at a different screen.

The magic moment is in how different it feels to compute with your whole body in a social environment. It really does make a screen feel like a cage that has trapped your mind.


>before it can replace your MacBook pro

It will never replace my MacBook pro - I don't want to have to set up a board game every time I sit down in the park to work on something! It's a cool educational tool for young kids, but we're going to have to wean them off of fun tactile interfaces and on to text eventually. (The biggest problem with visual programming environments is that they force the programmer to solve large-scale graph problems in order to make their programs not look like a mess - this wouldn't be solved by escaping the screen.)

Hello, humans in 20 years when this comment is being used as a humorous example of what people thought in 2017. ;)


It'll never replace your Macbook Pro as a professional developer, but it might very well allow people to create new things with computers without having to be a professional developer. In this way, compare Dynamicland to Excel or Lego Mindstorms - except that it might be more, in the same way that Excel allows people to solve business problems they never could've before, and Mindstorms allows people to create machines they never could've before.


Is text really the end-all of human-computer interaction? Can you not imagine future programming involving teaching the computer what you want it to accomplish?


Teaching is hard. It involves good communication, breaking up what you want to convey in smaller parts, organizing these parts according to the model of the learner, checking that everything is understood by asking questions and doing it all again to correct the misunderstandings.

We can infer that having to teach a computer won't be easier than simply telling it what to do in a formal language and getting feedback. The hard part is designing the problem space.

Of course better, continuous feedback would greatly improve the ease of programming.


> You can change anything and see what happens. No black boxes.

From the reverse: if something happens, how can I tell which piece of paper somewhere in the room made that happen?


> No black boxes.

Other than the ones hiding behind the projector in the ceiling you mean?

Kind of like the Wizard of Oz I suppose...


One of the ideas I heard people at Dynamicland mention is to have Realtalk project highlights over the dependencies (this would be possible because Realtalk is self-hosting: all the code that runs Realtalk is printed on paper).

In practice, when I wanted to see how something worked, I would just ask somebody in the room and get pointed to where the dependency was.


Things like the Google Home and Amazon Echo are a step in that direction. Using the internet just by asking questions out loud is a novel way to interact with a computer. Can't wait to see where this goes.


Voice control is in the right direction but Home and Echo are in the wrong direction: "You can change anything and see what happens. No black boxes."


"Canadianwriter" responds to "canadian_voter", can't help but notice it :p


I'm instantly reminded of "The Mother of All Demos, presented by Douglas Engelbart (1968)" https://www.youtube.com/watch?v=yJDv-zdhzMY



There seems to be this difficulty in visual-based programming:

1. If you want to maximize composability, then you want the "units" (here, the units would be the sheets of paper they're using) to be as general and uniform as possible. Think Lego blocks.

2. If you want to maximize expressivity, then you want "units" that encapsulate (probably Turing-complete) code. Think flowcharts or diagrams where each descriptor references a method, or a library, or even an entire subsystem.

The problem with #1 is that it's extremely difficult to build large, complex projects that way.

A problem with #2 is that the "visual programming" aspect eventually gets revealed to be merely a stepping stone to "real programming" of writing in code.

This project looks to be an example of #2. Am I right? If that is so, does that mean that creating complex projects would naturally lead to larger and larger amounts of code on a single sheet?


I'd personally really like for code to go away / not need to use it. It's an important bootstrapping step, but it was really only the hardware of the 60s and 70s that more or less needed text, files, and source code.

With the physical environment of Dynamicland, "code" can be re-represented using physical objects, and we may not even need it. That's a really exciting idea!


Yes! Coding != Programming

You can still have programming without text-based coding.


A good stepping stone to exploring this space would be a usable, complete system in which you lay out high-level concerns such as architecture, use cases, and user-interface behaviour in a "system 2" style, and can dive into code just for the details: a separation in which those concerns are built using declarative modelling (the "system 2" stuff), the infrastructure is built using a normal imperative programming language, and the actual business logic, when it can't be generated entirely from models, is implemented in a functional programming language with a strict subset of the procedural code.

And most importantly (and probably the most work) the user interface for this would have to be extremely polished and well-thought-out.


Exciting project! It looks like it could lead to some real advances in human-computer interaction. A large working area (desk, walls, or the whole floor) and automatically tracking and digitizing written notes and diagrams sound very cool. Maybe this and voice recognition will be the future of HCI if someone manages to iron out the kinks and make it seamless.

But programming by putting A4 sheets side by side? I only see that being useful as an education tool.


I have nothing constructive to say except that I love it.


Very inspiring work! Would love to try it out.


> We will be hosting open house events, classes, and studio hours at Dynamicland Oakland in 2018. Please join our mailing list [1] for announcements.

1: https://tinyletter.com/Dynamicland


There are a lot of comments here about how Dynamicland is a bit confusing to understand, and the website is light on details.

The latter is true - it would be nice if there were academic style papers describing the system, or reference source code, etc. - but I understand that right now, their time is mostly dedicated to building+fundraising, and the goal of this website is to have something rather than nothing online.

That being said, Dynamicland inherits a lot from many previous subfields in computing.

If anyone wants to get deeper, and try to wrap their head around what Dynamicland aims to be, here are what I think are a few starting points:

- Alan Kay's STEPS paper, the goal of which is to have a fully implemented OS + base libraries etc. in a few thousand lines of code at most, by using modern patterns and abstractions.

http://www.vpri.org/pdf/tr2011004_steps11.pdf

- Bret Victor's "Seeing Spaces" is pretty much the precursor to Dynamicland

http://worrydream.com/SeeingSpaces/

- Ishii & Ullmer's seminal paper "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms" (which precedes Dynamicland by 20 years, but I think many core goals are shared)

https://trackr-media.tangiblemedia.org/publishedmedia/Papers...

This paper kickstarted the whole "tangible interaction" subfield of Human Computer Interaction. If you wish to dive even deeper, I recommend:

- reading Ullmer's PhD thesis, of which the previous paper is essentially a TL;DR

http://alumni.media.mit.edu/~ullmer/thesis/full-noblanks.pdf

- checking out the proceedings of TEI, the ACM conference for Tangible & Embodied Interaction (sadly you'll need an ACM digital library membership, but if you are a computer scientist it is very much worth it and you should get your job/school to pay for it if you can).

https://dl.acm.org/event.cfm?id=RE271

I have no ties to Dynamicland besides having visited it, but I have done my graduate studies under one of the aforementioned authors and shared workplaces with people from Bret's group, so I'm happy to talk more about the whole "tangible interaction" aspect of it all.


Also, I saw a demo of a similar technology in the 1990s at Sussex University, in which a projector was used to create an interactive table where one could write calculations with a pen and the results would appear projected (the numbers having been recognized), and sketches could be augmented with shape recognition.


Is this eventually going to be a product? Or will there be an open source solution created? I really like the idea of it putting things on the network. I bet you get really tight feedback loops iterating on the design. Do cut and paste work, and also photocopying the code?


See the FAQ [1] which answers some, but not all, of your questions.

1: https://dynamicland.org/faq/


At first I thought this was a room-sized VR code-organization library thing, like you could have a house with different rooms corresponding to different packages.


I would be happy to donate if that would support open sourcing enough to bootstrap this in other locations.


Are all those large dots necessary for localization? They're really distracting me.


As I understand it, the dots are just a temporary measure while the team works on implementing better methods of detecting objects.


Is there a video/live demonstration that shows off the basic interactions of the system? So far I have not been able to understand what the system is actually doing and how to use it (I searched but did not find any demos on YouTube).


They probably have a camera and a projector on the ceiling. The camera detects the dots on sheets of paper to be able to differentiate them. You will probably need a computer to code what each sheet is going to do.

> Every scrap of paper has the capabilities of a full computer, while remaining a fully-functional scrap of paper.

this is probably a false statement...


I love the passion behind this project, but as with most physical or high-level UIs, the idea strikes me as overly hubristic. In my experience, truly marvelous works of interaction which expand the range of human thinking require thousands (upon thousands) of lines of code. Clearly, that kind of logic can't fit on scattered, small bits of paper. So is most of that complexity defined by whatever code keeps this thing running? (Realtalk?) Does this mean that the limits of your thinking in this environment are constrained by the skill and intent of the coders developing the system libraries? I feel this will end up becoming little more than a (really cool) room-sized toy, not something that will change the way we think and interact.

Maybe boring ASCII code is simply the correct, inescapable tool for the job. Maybe expanding the range of human thinking is impossible inside a system that can't modify itself.

(But maybe I'm completely wrong!)

EDIT: One of the other comments mentioned that Realtalk is self-hosted on pieces of paper. Hmm, that makes this a whole lot more interesting!



Don't forget that programming languages get more abstract and hence more powerful, doing more with fewer lines of code. Realtalk has functions you can call to get the physical location of a piece of paper, and being based on Lua makes it much more expressive than C++. You can keep digging down through levels of abstraction, but the interesting parts of this system are extremely accessible.
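Purely as illustration (these names are made up; the comment above only says such calls exist, not what they look like), a "where is this page" query in Lua might be as simple as:

    -- Hypothetical accessor over camera-tracked page state; not the real API.
    local pages = {
      [7] = { x_in = 14.5, y_in = 22.0, angle_deg = 90 },
    }

    local function page_location(id)
      local p = pages[id]
      return p.x_in, p.y_in, p.angle_deg
    end

    local x, y, a = page_location(7)
    print(("page 7 is at (%.1f, %.1f) inches, rotated %d degrees"):format(x, y, a))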


Aren't we lucky to have stumbled upon the "correct tool for the job" at our first attempt.


I mean, you could argue that just as with writing, code is emergent from our biology. Brains are naturally wired for language and symbolic thought. Once computation was defined and made technically possible, perhaps the current form of programming was simply inevitable. Not that other things aren't worth trying, of course.


I just love the word play of Smalltalk and Realtalk!


This seems like an interesting experience. But the wording on that page is as much of a turn-off to me as the most cliche PR lorem ipsum. I feel dumber having read it.


Yep. And when you think that's bad, then you see some of the comments here which appear to be aping it for free...


Can we get a picture of the ceiling?


> And everyone gets their hands on everything! You walk by, you see what someone is making, you play with it and trade ideas, you sit down and work on it together. This happens constantly.

Abandon your virtual dreams, hackers... start to socialize and groupthink. And use paper.


I'm not sure what kind of software you write, but every large piece of software I've ever worked on has required collaboration, and collaboration done well builds better software than any one coder could write by themselves.


Not the OP, but I suffer from social anxiety, and the ability to socialize outside of "real life" is a huge part of what attracted me to tech as a kid. So to me, the idea of replacing data and virtual collaboration with a big noisy room sounds… well, it sounds like a nightmare, a dystopia. It actually makes my heart race a bit just thinking about it.

Of course, that doesn't mean the project is bad, just that it's profoundly unsuited to me personally. But I'm probably not the only person like that on this forum...


In contrast, often times a core design is created by one particularly brilliant individual, and is stable enough for others to build upon. For example, Linux (Torvalds), Minecraft (Notch), Redis (antirez), C++ (Stroustrup), Ruby (Matz), NodeJS (Dahl), Bitcoin (Satoshi), etc... I'd be interested to hear of examples of "great" software that started off in a collaborative fashion.


Cool, but I still prefer "Books Not Apps" for our children.


So do many of the Silicon Valley executives who are pushing their privacy invading products on schools. That's why they send their own children to private "tech-free" schools to prevent them from being exposed to the same products that pay their parents' bills.


It's reminiscent of how the daughter of one of the Lunchables inventors has never eaten a single Lunchable.


If anything, I believe that Dynamicland and "books not apps" attitude can only complement each other.


What do you mean?


Books can be taken home and studied by the individual at their own pace. A student can create their own book because it is "real", "physical", and low-tech.

Also, regarding books, there is, historically, a lot of content.


The same applies for Dynamicland.

For example: I learned how to use the graphics system in Realtalk with a physical booklet that I could have taken home and studied. The booklet had the code printed out on it, as well as hand written annotations from previous readers. The "magical" part was putting the booklet in a space running Realtalk and seeing live examples of the code projected onto each page as I flipped through the book.



