
> Novel hardware devices (e.g., for VR, or the Wii remote, or for new musical instruments) can be used as the basis for new tools for thought. While hardware can be duplicated, it’s often much more expensive than duplicating software. And, in any case, the advantage for such companies is often in distribution, marketing, and relationships with vendors who make products for the platform.

SimulaVR was heavily influenced by the early visions for computing devices as "Tools for Thought".[1]

"Computing as an intelligence augmenter" seemed to be a really popular viewpoint through the 70s and 80s (even in corporate advertisements for computing devices). Then it somehow got co-opted by "computing as a medium for passive entertainment and video games", and then eventually as "a medium for social networking". I'm certainly not against these things in all contexts, but to me the exciting thing about VR/AR is that it creates a completely open field to make new advancements in the way human beings interact with machines to get stuff done immersively & with greater creativity.

Despite this, the entire VR ecosystem seems to be focused almost exclusively on gaming/entertainment + social networking (Facebook's "Metaverse"). I personally find it really exciting to think about how the extra dimension could help us create more interactive tools in line with Alan Kay (& co's) early visions. Before that's possible, we first need to get the basic hardware good enough to replace PCs & laptops (for everyday 2-dimensional applications). After that, we can use these devices as platforms to build better thinking tools on.

[1] https://simulavr.com




> "Computing as an intelligence augmenter" seemed to be a really popular viewpoint through the 70s and 80s (even in corporate advertisements for computing devices). Then it somehow got co-opted by "computing as a medium for passive entertainment and video games", and then eventually as "a medium for social networking".

This is probably a demonstration that intelligence only gets you so far and isn't really that useful in life. Nerds don't like this because intelligence is what they value in themselves; that's why so many of them think that if a "superintelligent" "artificial general intelligence" were invented, it would somehow also have superpowers and be able to invent viruses, convince anyone of anything, etc.

But how's it going to do that? Just imagine them really hard? What part about being able to think really fast says any of those thoughts are accurate?

Basically, computing technology is good at creating virtual worlds of logic, but it's much harder to affect the real world with it.

See also https://en.wikipedia.org/wiki/Moravec%27s_paradox.


For me, emacs (with org-mode) and array languages (like J) do augment my intelligence, because the productivity boost is so profound that it affords the opportunity to try more things and be more creative.


Passenger jet engines, nitrogen fixation, silicon fabs -- these don't just magically appear. It takes careful reasoning and years of exploration in the vast ocean of ideas. Most places in this ocean are not useful, so it's helpful to sail faster.


I think it comes down to the fact that an intelligence augmenter is hard to separate from a skill augmenter. Reading a chess book can make a beginner much better at the game, whereas following the instructions of a chess engine doesn't have any enduring impact on skill. Computer as oracle, such as with chess engines or weather models, isn't actually improving intelligence any more than a truck is improving strength.

Word processors, email, etc. can speed up how quickly an author gets stuff done, but they don't turn the average person into a good writer. You basically need to become an engineer before CAD tools let you design a state-of-the-art car engine.



