Notes from Dynamicland: Geokit (rsnous.com)
87 points by petethomas on Aug 6, 2018 | 10 comments



Having worked as a researcher at Dynamicland, I think this is one of the most important bits from the end of the article:

    > A lot of what's new about Dynamicland is that it's a programming
    > system! We've had projection mapping, physical UIs, gestures, etc.
    > for years, but this is the first system that puts them together in
    > a user-programmable editing environment.

An analogy might be Xerox PARC in the 70s and early 80s. Alan Kay and his team saw hardware shrinking and becoming cheaper, saw powerful UIs demo'd in research settings, and had the spark of creativity to combine these elements synergistically into Smalltalk, a programming system to support a brand new expressive medium.

Similarly, the research vision of Dynamicland / Realtalk is to make a more humane and accessible dynamic medium. This is only the first version, with (hopefully, depending on funding) many more versions to come.

For a bit more information about the vision, see my own demo made in the precursor to Dynamicland, a sweaty game called Laser Socks: http://glench.com/LaserSocks/


Do you know if there's any chance we'll get one on the east coast? Or other ways to get involved?


I've been following Dynamicland since its inception, and the work is really cool. But it seems like all of the interface work is rather two-dimensional (i.e. using paper). Isn't the underlying theory this is all based on that humans are better suited to working with three-dimensional objects (like doorknobs and cutlery and whatnot)?

I think it would be really cool if the Dynamicland UI technology (projection and object recognition) could work with e.g. 3D-printed objects, especially if they could have embedded motion-tracking sensors for cases where your hand occludes the computer-vision trackers. Is anyone working on that sort of thing?


Sounds like something that you would see a lot of at SIGGRAPH, but all I can find is https://www.youtube.com/watch?v=QHSDGBGct9g, https://www.youtube.com/watch?v=oauJ99Mfru4, https://www.youtube.com/watch?v=ogsCTxAxGwU.

I know HoloLens can do some of that, but not to a terribly great extent (it's better at just bringing virtual 3D objects into your environment).


To some extent there is the Reactable, which is geared towards making music. I would imagine an extension of the idea could go in the direction you picture. Note that the Reactable is tied to its surface to capture some of its interactions.

http://reactable.com


I would disagree. We live in a 3D world, but more often than not we communicate complex concepts in 2D: writing and drawings.


I wonder what the details of the computer vision tracking are. I'm guessing it's simple color tracking, which has challenges with lighting and with contrast against other colors. Object detection using deep learning could make the tracking more robust.
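
Something like this naive color-thresholding sketch is what I have in mind (Python + OpenCV; just my guess at the general approach, not their actual pipeline). The hard-coded HSV ranges are exactly why lighting is the weak point:

    import cv2
    import numpy as np

    # Hypothetical HSV range for one dot color (reddish); in practice a
    # fixed range like this has to be re-tuned whenever the lighting changes.
    LOWER = np.array([0, 120, 120])
    UPPER = np.array([10, 255, 255])

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)      # keep only that color
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 20:            # ignore specks
                continue
            (x, y), r = cv2.minEnclosingCircle(c)
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
        cv2.imshow("dots", frame)
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()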


This summer at NYU ITP Camp, a fellow ITP'er set up https://paperprograms.org, which "recreates just a tiny part of what makes Dynamicland so interesting" (quote from JP, one of the creators: http://janpaulposma.nl). The outline answers some of your questions. It's really easy to set up; all you need is a web camera, a projector, a printer, and the colored dots. We were up and running playing Pong with paper controllers within twenty minutes... Lots of fun.

How does Paper Programs work? Programs are stored on a server (using Node.js and PostgreSQL), hosted on paperprograms.org. Each program has a number, and the dots on the paper encode that number. Currently each corner is uniquely identified with 5 dots of 5 possible colours, which means you can have about 600 unique papers (this is a significant limitation). A camera detects the dots and retrieves the program associated with each paper. This is done in a browser, using OpenCV compiled to WebAssembly and some custom JavaScript code. Calibration happens manually, using a UI built in React. Program code and configuration are stored in the browser's local storage.

Projection and execution of programs happen in a separate browser window. Each program runs asynchronously in a Web Worker, and can request access to a canvas, the coordinates of other programs, and so on. There is also an editor page, which anyone in the space with a laptop or tablet can use to edit programs, using Monaco. Once you've created a new program, you can click a print button to print out a new paper that runs it; the program text is printed on the paper itself. Any edited program can be reverted to its original state.
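
To give a feel for the dot-encoding idea, here's a toy sketch in Python. It's my own illustration, not Paper Programs' actual scheme (the real system rules out some combinations, which is why it tops out around 600 papers rather than 5^5 = 3125), and the color names are made up:

    COLORS = ["red", "green", "blue", "black", "purple"]   # 5 possible colours

    def encode(paper_id, num_dots=5):
        """Turn a paper ID into a base-5 sequence of dot colours."""
        dots = []
        for _ in range(num_dots):
            paper_id, digit = divmod(paper_id, len(COLORS))
            dots.append(COLORS[digit])
        return dots

    def decode(dots):
        """Recover the paper ID from the detected dot colours."""
        paper_id = 0
        for colour in reversed(dots):
            paper_id = paper_id * len(COLORS) + COLORS.index(colour)
        return paper_id

    assert decode(encode(123)) == 123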


The calibration is pretty good at dealing with the background noise of your tabletop area; we set this up in a pretty average open workspace and didn't encounter any issues with the image detection. Dynamicland has even more advanced image detection, from what I've gathered from various posts, where robots, action figures, and other 3D objects can be placed on a tabletop and integrated into a variety of interactive programs. If you'd like to follow this down another path, NYC's Recurse Center has been supporting development of a RoomDB framework that also uses concepts from the Dynamicland universe. It goes more into the database side of things, in the vein of Smalltalk and Linda, and is really interesting stuff. https://www.recurse.com/blog/132-living-room-making-rc-progr... https://livingroomresearch.tumblr.com
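
If you want a feel for the Linda-style idea, here's a toy sketch in Python of the claim-facts-then-query-with-wildcards flavor. It's a paraphrase of the concept, not RoomDB's actual API:

    # A shared fact database: programs assert facts and query with wildcards.
    _ANY = object()

    class FactDB:
        ANY = _ANY

        def __init__(self):
            self.facts = set()

        def claim(self, *fact):
            """Assert a fact, e.g. db.claim("paper", 7, "points at", 12)."""
            self.facts.add(fact)

        def when(self, *pattern):
            """Yield every fact matching the pattern; FactDB.ANY is a wildcard."""
            for fact in self.facts:
                if len(fact) == len(pattern) and all(
                    p is _ANY or p == f for p, f in zip(pattern, fact)
                ):
                    yield fact

    db = FactDB()
    db.claim("paper", 12, "points at", 34)
    db.claim("paper", 7, "points at", 12)
    print(list(db.when("paper", FactDB.ANY, "points at", 12)))
    # -> [('paper', 7, 'points at', 12)]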


You can't do deep learning at the latencies they need to work with without a lot of specialized effort, IIRC.



