Conceptually I agree with you, but the challenge is that there are signals transmitted between people in the same room that don't come through electronics well.

I'm talking about all of the non-verbal signals that feed back into your visual cortex without you even realizing it: the things that inspire you to follow someone, or make you realize they aren't understanding you, that you're offending them, or that they don't themselves believe what they are saying.

Lots of human-factors work has gone into this, and we don't have a clue yet how to transmit that stuff, much less render it in a virtual world. I expect we will get there, but it's an area I'm not seeing much about in the press.

Would love Trevor to jump in here with what they've been learning with the AnyBot regarding present/not-present sorts of work.

That's true, but the constraints and nature of the medium open up new avenues of expression. We're not likely to catch subtle facial expressions (or, for that matter, smell and touch) for a long time, but we'll gain incredible new powers in the meantime.

I'm thinking gesture capture (hands and body) and words/sounds in a virtual environment. You can fluidly interpret those inputs into sights and sounds that illustrate ideas. It won't be intimate in a physical sense, but it will be extremely expressive, and that's what the office needs.
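
To make that concrete, here's a minimal sketch of the kind of interpretation layer I mean. The event names and the Scene stub are hypothetical, made up for illustration rather than any real capture API:

    # A minimal sketch, assuming hypothetical gesture/voice events and
    # a stub Scene; a real version would drive the rendered world.
    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        kind: str         # e.g. "point", "spread_hands", "raise_voice"
        target: str       # what in the scene the input was aimed at
        intensity: float  # 0.0-1.0, how emphatic the capture was

    class Scene:
        def highlight(self, target, strength):
            print(f"highlight {target} at {strength:.1f}")
        def zoom_out(self, factor):
            print(f"zoom out by {factor:.1f}x")
        def pulse_avatar(self, intensity):
            print(f"pulse the speaker's avatar at {intensity:.1f}")

    def interpret(event, scene):
        # Each recognized input becomes an exaggerated, legible cue
        # that a flat camera feed could never show this clearly.
        if event.kind == "point":
            scene.highlight(event.target, strength=event.intensity)
        elif event.kind == "spread_hands":
            scene.zoom_out(factor=1.0 + event.intensity)
        elif event.kind == "raise_voice":
            scene.pulse_avatar(event.intensity)

    # Usage: a pointing gesture at 0.8 intensity highlights its target.
    interpret(InputEvent("point", "the Q3 chart", 0.8), Scene())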
