Just look at this nonsense [1]. We have brand new technology with unlimited potential for new abstractions and paradigms. So, what do we do? We make a virtual desktop workstation, the same one we've had since the '70s! We limit all rendering to a 2D surface, make it curved, and put a nebula in the background. Is this really the cutting edge of information technology?
Also, we need both a mechanical keyboard to sense key presses and 3D tracking of finger movements? I get it, Logitech wants to keep selling keyboards, but for a VR experience I would rather have tracking of facial features and eye movement.
That was the same thinking behind Elon Musk's startup Neuralink, but given that there really aren't any options that are substantially better, Logitech isn't being stupid.
Perhaps some combination of voice, gesture, and 3D could be useful specifically while in-game. But for productivity, I don't think there's anything better than a traditional workstation.
Voice has privacy issues. Gesture recognition is inaccurate and causes "gorilla arm" syndrome.
3D doesn't really add much, and in fact for productivity some people think even 2D window-based GUIs are cumbersome; that is the entire motivation for tiling window managers. From that perspective, 3D is even more chaotic and cumbersome.
Voice not only has privacy issues, but could you imagine working in one of those god awful “open floor plan” offices with a bunch of people speaking in JavaScript all day?
WHY? There's a reason we have the keyboard, and it wasn't a limitation of technology. The reason the "typewriter" interface was carried over to the PC is because it's GOOD. This is almost as bad as when MS decided Kinect would replace all game controllers because, look, hands free! It turns out we have hands for a reason. The speed and precision with which we can do things with our hands far exceed basically anything else short of a direct link to your brain.
Have you ever watched someone who is handicapped who has eye tracking for a keyboard? Have you paid attention to how slow and error prone it is? What on earth makes you want to have THAT as your primary input method???
I absolutely do not want facial features and eye movement to completely replace finger movement for input. I can neither control my eyes nor face at anywhere near the rate that I can control my fingers. If VR wants to replace my desktop environment for anything other than games, it'll need to retain some sort of high fidelity, high frequency input vector.
Yeah, I wouldn't use facial and eye tracking instead of a traditional input interface. They are meant to enrich interaction with other people. Eye tracking could be used for some optimizations, like dynamic LoD based on where the user is looking.
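Something like this, as a minimal sketch (the threshold angles are made-up tuning values, and I'm assuming the eye tracker hands you a normalized gaze direction vector):

```python
import numpy as np

def lod_level(gaze_dir, eye_pos, obj_pos, thresholds=(2.0, 10.0, 30.0)):
    """Pick a level of detail (0 = finest) from the angular distance between
    the gaze ray and the direction to an object. Threshold angles are in
    degrees and are invented tuning values, not anything standard."""
    to_obj = obj_pos - eye_pos
    to_obj = to_obj / np.linalg.norm(to_obj)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)))
    for lod, limit in enumerate(thresholds):
        if angle <= limit:
            return lod
    return len(thresholds)  # far outside the fovea: coarsest mesh

# An object roughly 10 degrees off-gaze gets LOD 1 here.
gaze = np.array([0.0, 0.0, -1.0])
print(lod_level(gaze, np.zeros(3), np.array([0.17, 0.0, -1.0])))
```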
I've thought about an input interface that could replace the keyboard for VR and mobile devices. A brain implant would be perfect, but that's not feasible with today's technology. Voice is a good contender, except for the privacy issues, and it's not really usable in work environments. Keyboards are our best solution so far, but they don't evolve, and to me they seem like a dead end (plus, they are not portable and require both hands to use effectively).
I was thinking about a touch-sensitive surface that recognizes drawn glyphs. The idea is that anyone who is literate can start using it without any training: just draw letters instead of typing them. With machine learning and some clever visual feedback, both user and machine could adapt to each other to increase input speed (by simplifying glyphs and by defining 'snippets'). The interface could be expanded to both hands to double the speed, but it would be completely functional with just one hand. It could even be used blindly. For mobile devices, such a surface could be placed on the backside - virtual keyboards that take half of the screen are just horrible.
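A rough sketch of the recognition part, in the spirit of template matchers like the well-known $1 unistroke recognizer (simplified here: no rotation normalization, and the templates would come from strokes the user records):

```python
import math

def resample(pts, n=32):
    """Resample a drawn stroke (list of (x, y)) to n points spaced evenly
    along its path, so strokes drawn at different speeds become comparable."""
    pts = list(pts)
    path = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    if path == 0:
        return pts[:1] * n
    step, out, acc, i = path / (n - 1), [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # keep measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point round-off
        out.append(pts[-1])
    return out

def normalize(pts):
    """Scale the stroke to a unit box and move its centroid to the origin,
    so glyph size and position on the surface don't matter."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w, h = (max(xs) - min(xs)) or 1, (max(ys) - min(ys)) or 1
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in pts]

def recognize(stroke, templates):
    """Return the label of the template with the smallest mean point-to-point
    distance. `templates` maps labels to already-normalized point lists."""
    s = normalize(resample(stroke))
    return min(templates, key=lambda label: sum(
        math.dist(a, b) for a, b in zip(s, templates[label])) / len(s))

# Templates are built the same way, from strokes the user records once:
# templates = {"a": normalize(resample(stroke_a)), "b": ...}
```

The adaptive part would then amount to re-recording templates as the user's glyphs get lazier, which is where the mutual user/machine adaptation comes in.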
> I was thinking about a touch-sensitive surface that recognizes drawn glyphs.
I'd like a touch sensitive surface... on a keyboard.
Consider an expert keyboard user with a nice keyboard. Adding downward-facing hand tracking, such as a head-mounted Leap Motion, adds value: now you can gesture on the keyboard, and above it in 3D, permitting a very rich input vocabulary, and, for example, using the entire keyboard as a touch surface.

Except... while fingers are highly sensitive to tactile contact, neither traditional keyboards nor current hand tracking can provide good contact information. Fingertip position tracking isn't quite good enough to infer contact existence and pressure. So while you can use, say, the J-key keycap surface as a trackpad, and even infer touch-untouch events from gross finger motion (better than requiring a keypress, but not by much), you still can't get light-touch events (e.g. "I was clearly pressing harder when I stroked down, but just skimming when I moved back up; I clearly felt the difference, so why didn't the keyboard?").

So, I'd love a contact-sensing mesh which could be laid over an existing nice keyboard without compromising key feel. I can get position information elsewhere, and keypresses from the keyboard of course, but there's no existing source of multitouch contact, let alone pressure. Any ideas?
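For concreteness, the touch-untouch inference from gross finger motion that I mean is roughly this (a Python sketch; the height-above-keycap-plane input and the millimeter thresholds are assumptions about what the tracker gives you):

```python
class ContactEstimator:
    """Infer touch/untouch from a tracked fingertip's height above the keycap
    plane. Two thresholds (hysteresis) stop the state from flickering when
    the tracked height jitters near the surface. Values are guesses."""
    TOUCH_MM = 2.0    # declare contact below this height
    RELEASE_MM = 6.0  # declare release only above this one

    def __init__(self):
        self.touching = False

    def update(self, height_mm):
        """Feed one height sample; returns "touch", "release", or None."""
        if not self.touching and height_mm < self.TOUCH_MM:
            self.touching = True
            return "touch"
        if self.touching and height_mm > self.RELEASE_MM:
            self.touching = False
            return "release"
        return None
```

That still only gives you binary contact, though; pressure is exactly the part that needs the sensing mesh.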
One of the first things I did with my Rift (well, after Superhot...) was try out the Virtual Desktop kit.
It's lacking. I thought it'd give me a 3D frustum to throw windows back/forward, drop Spotify out to the periphery, and have VisStudio in glorious megapixel size.
Instead, you get your regular desktop screens mirrored and constrained to a 2D plane. Gods help you if you've got mismatched DPI screens, like my 1080p & 2160p pair.
(Go on, I'll await some smart sod to tell me "Product" is exactly what I'm looking for...)
I'm not sure it's nonsense. What I see is incremental change. Take the workstation, and move it to virtual space. Then add more 2D screens. Then try a different input or display method, one small step at a time. Sounds reasonable to me, much more so than re-inventing every part of the interface at the same time, and hoping to get it all within spitting distance of what's right.
It’s as stupid as creating a fancy raster graphics system with windowing and fast refresh rates and hardware acceleration for 3D rendering, and then using it to display a bunch of programs that pretend to be a 1960s teletype.
Using new technology to incrementally improve existing ideas and techniques usually works better than trying to start completely from scratch.
I would also hope for such incremental change, but I don't see it happening. If you take a look at the historical development of desktop OS UIs, you'll see that all the concepts were established in the '80s, with not much (any) progress since then. Icons, buttons, menus, windows, even whole applications (the file manager) are unchanged, while there are very few new UI elements (the date picker, for example).
VR should be considered a completely new medium. 2D windows in VR should only be used as a backwards-compatibility layer. Re-inventing every part of the interface at the same time sounds great to me. Let's make as many competing solutions as we can and then pick the ones that work best. This paves the way not only for a rich VR experience, but also for AR.
> VR should be considered as completely new medium.
I think you're throwing the baby out with the bathwater here. There are plenty of teams working on brand new UI/UX experiences in VR, and I'm sure that in time the old 2d paradigms will have more competition. But also, I still choose to use a terminal for git when I could use a GUI, so I'm not actually convinced that "2D windows in VR" are going to or should go away.
The OP shows a great step towards being able to replace your workstation with some VR goggles, a keyboard, and a CPU. That prospect excites me more than any "new medium" VR experiences that I can see on the immediate horizon; being able to work from a beach hammock and not compromise on workspace efficiency is incredibly appealing.
Until someone invents a neural interface, keyboards might be the best option for text input. That or develop purely conversational interfaces, but the issue with conversational interfaces is they lack the privacy of a physical interface.
I keep wondering if chording keyboards could be the answer for VR text input. It would require learning a new input system, but I'm curious if one could eventually become proficient enough to reach average QWERTY speeds. I can't remember the name, but I remember seeing a Google-produced chip that tracked small finger movements, and I imagine something like this clip from Children of Men could work: https://youtu.be/sJO0n6kvPRU?t=2m4s
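The decoding side of a chording keyboard is trivial; the hard part is the human learning the chord map. A toy sketch (the chord table here is invented for illustration, not any real layout like stenotype):

```python
# With 5 finger keys you get 2**5 - 1 = 31 non-empty chords, enough for the
# alphabet. This mapping is made up, not taken from any real chording device.
CHORDS = {
    frozenset({"thumb"}): "a",
    frozenset({"index"}): "e",
    frozenset({"index", "middle"}): "t",
    frozenset({"thumb", "index", "middle"}): "h",
    # ...the remaining 27 chords
}

def decode(pressed):
    """One chord (the set of keys held together, read out on release)
    maps to one character."""
    return CHORDS.get(frozenset(pressed), "?")

print(decode({"index", "middle"}))  # -> "t"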
I spent a few hours the other day searching for existing chording keyboards and they were all really clunky.
Keyboards are still a highly precise and versatile text input method. Voice isn't good enough yet (and you may never get good enough voice recognition, thanks to homophones and the issue of making up new words).
While just putting a window on a nebula background works as a Proof of Concept, it does need further innovation. The keyboard, however, does not.
An infinite-real-estate computing environment is a pretty damn good value prop imo; a hell of a lot better than how VR is being pushed atm (360 video and games).
The problem with the GIF is that it doesn't make any use of the "virtual" environment, but that is where VR should be heading, at least as an intermediate step before fully interactive VR.
> Also, we need both a mechanical keyboard to sense key presses and 3D tracking of finger movements? I get it, Logitech wants to keep selling keyboards, but for a VR experience I would rather have tracking of facial features and eye movement.
No. You don't get it. This doesn't do 3D finger tracking. It just figures out where the keyboard is and does some edge detection to give a rough idea of where your hands are.
Maybe, but the current state of VR makes it clumsy to use in some scenarios. I expect this to go away with time and further development, but right now there's a gap to fill. One step at a time allows us to go far.
Combine this with the new Pimax, and infinite workspaces start to become a practical reality. That could help propagate the technology further, which would lead seamlessly into more teleconferences that have a feeling of personal interaction... which is important, even if we as programmers don't always appreciate it.
The other day I was playing a game of Onward. It was the rare instance where a team wanted to work together on a shared goal. In the 15 seconds before the match, while planning some tactics... it occurred to me how natural the meeting felt. It was like we were all in the same room. I felt engaged with EVERYONE. That doesn't happen on conference calls.
I work from home full time, and I think the biggest downside is the lack of meaningful interaction with other people. I think VR has huge potential to bridge that gap.
"I work from home full time, I think the biggest downside are the missing meaningful interactions with other people. I think VR has huge potential to bridge that gap."
VR also has the potential to be way more addictive than current forms of online interaction and entertainment.
I'm reminded of what I think was a Larry Niven story (maybe Ringworld) in which the protagonist has an electrical wire put into the pleasure center of his brain, and he just sits at home pushing the button that activates it for weeks on end.
Computer games are bad enough now. When I first heard about a certain factory building game (I won't name it to prevent anyone else becoming addicted), for two weeks straight I played pretty much every waking hour when I wasn't working. I ate things that were easy to make and didn't need much attention like pasta - I'd just set it to cook and come back when the timer went off, eat, then back to the game.
Yup, Larry Niven's "The Ringworld Engineers". The main protagonist uses a "droud" (as the implant is called in the book) to directly stimulate the brain with electrical impulses.
He only disconnects the droud because he knows he will die from starvation, dehydration, and atrophy if he doesn't go through the motions daily. But every moment without the droud is existential dread.
I don't know about other users wasting away, but I remember Chmeee (of the tiger-like species) destroying the droud while they were travelling to the Ringworld, so Louis Wu had no choice in quitting it.
I love that natural feeling of interaction with other people in VR. When playing in Rec Room, you so often take a break from the games just to have a chat with other people, because the interaction is somehow so engaging despite the cartoony graphics.
Same here - I usually avoid multiplayer games altogether but this weekend I discovered VR poker and was utterly immersed - am playing in my first tournament tonight!
As someone who has been a developer of virtual worlds for 17 years, I can say that this is a very incremental change, as all of the changes have been since 1994.
Personally I think we need to reimagine interfaces to the world around us, even the virtual worlds (VR/AR/MR) around us. Voice input, AI, and hand-sensing technologies could make for new ways of changing those worlds. The book Daemon by Daniel Suarez, and its eSpace, holds much promise. I have been experimenting with those ideas in a new VR world I have been working on for a few years.
I think motion sensing is probably the way forward; voice control is intrinsically limiting in most (crowded) environments where computers are used today.
Considering humans can ride bikes, drive cars, play instruments, etc. (including typing on keyboards!), I think that indicates that non-verbal, physical interaction is not nearly saturated as a transmission channel.
Conversely, it's hard to imagine someone verbalizing "navigate to HN" in a loud open-space office, or "Excel, create a pivot table" or whatever. I think it's fine in private spaces like your home, but in public spaces, you're implicitly broadcasting your activity to everyone around you, which I consider to be a strong negative.
The Daemon book has some cool motion interaction in it. They call it the Shamanic interface! I really want to make one of those; too bad the tech is not readily accessible yet.
Is subvocalization a possibility? Mic or EEG setups might need to be slightly different.
Voice input is good for people who have a clear-sounding voice and a generic American accent. Which is maybe 5% of the world. For people who speak English with a thick accent, or don't speak it at all, or have a deep or less clear voice, this would be a massive decrease in accessibility.
I don't personally believe in voice input. English is my second language, but I have spoken it daily for close to 10 years. I can't get Siri to understand what I say, so I just keep it disabled, and my iPhone keeps bugging me to enable it.
We cannot rely on voice input; that is a dead end. Until we figure out how to link thoughts directly to computer inputs, keyboards will be king.
A lot of these technologies are being developed, and it's pretty awesome. However, I'll argue there's clearly space for a keyboard in the virtual world.
If you want to sculpt a model in the real world, you'd use clay and your hands. We can use VR to move away from KBM interaction for things like that.
If you want to write a book in the real world, you... type it out. It's the best way we know to get text out of your brain and somewhere else. Don't throw the baby out with the bathwater.
Indeed, everything in its place. However, I really don't think I would go into a virtual world to write a book, or do any other kind of activity that requires a lot of keyboard interaction.
I used to hope for the unlimited monitor space, now I am not so sure how practical that really is.
I'm more impressed by the videos of the hand tracking around the keyboard than the tracker in the keyboard! Is that fidelity just using the Vive's outward facing camera? They don't show any tracking gloves in the mockups.
It's actually pretty simple. If you know where the keyboard is, you can clip out that part of the camera's input. Then you just need some edge detection to get the hand silhouettes.
And actually...that seems like a portable idea. So I suppose you could do this for any tracked thing.
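In OpenCV terms, roughly (a sketch assuming you already know the keyboard's bounding box in the camera image from its tracked pose; the Canny thresholds are typical defaults, not tuned values):

```python
import cv2

def hand_silhouette(frame, kb_box):
    """Clip the headset camera frame to the tracked keyboard's region and run
    edge detection to get a rough hand silhouette. `kb_box` is (x, y, w, h)
    in pixels, assumed derivable from the keyboard's tracked pose."""
    x, y, w, h = kb_box
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
    return cv2.Canny(gray, 50, 150)           # edges of hands over keycaps
```

The returned edge mask is what you'd composite over the rendered keyboard in VR, which is why the same trick ports to any tracked object.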
If you want to explore this kind of thing, you can mount a camera on your HMD (Vive's is crippled), use a WebVR stack (simple), track objects using visual markers and javascript tracking libraries (jsartoolkit5 and/or tracking.js), and do selective camera pass-through AR. It's crufty, but not hard.
EDIT: You can simply use the Vive's camera, with tracking.js color tracking, especially with a small minDimension (number of pixels) threshold. Yellow is good.
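For anyone who'd rather prototype outside the browser, the same color-tracking idea sketched in Python/OpenCV (the HSV bounds for "yellow" and the minDimension-style blob filter are rough guesses, not tracking.js internals):

```python
import cv2
import numpy as np

def track_yellow(frame, min_dimension=20):
    """Find yellow blobs (e.g. a marker on a glove) in a camera frame,
    analogous to tracking.js color tracking with its minDimension
    threshold. HSV bounds are rough guesses for 'yellow'."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w >= min_dimension and h >= min_dimension:  # drop tiny noise blobs
            boxes.append((x, y, w, h))
    return boxes
```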
I understand Logitech wants to keep selling keyboards, and I agree that keyboards are currently the fastest way to enter text. However, it IS the wrong direction. The better direction is what will be opened up by tools such as the Vive Knuckles, enabling a more complete language of gestures. It seems to me that the better way forward is to stop clinging to an old paradigm that requires bulky equipment that can obstruct the user's volumetric interactions, and instead to build off of those volumetric interactions, even though there is the cost of a learning curve. I doubt businesses will implement it, since they run the risk of alienating current-gen customers and thus losing money. I think there will be some VR experiences aimed at younger customers that will implement it, and over the next decade we will wonder why we ever tried to bring a keyboard into VR/AR instead of just using our hands.
A lot of people can't touch-type. I can't, not really, I need to glance at the keyboard once in a while. My typing speed is comparable to touch-typists', which is why I never felt the need to learn to properly touch-type. The need of seeing the keyboard is problematic in very few situations in practice, and with a back-lit keyboard it's never a problem... as long as seeing my keyboard is possible at all, which was not the case in VR.
I could invest a lot of time to unlearn my current way of typing and learn proper touch-typing technique, but that's a lot of work and doing it just to be able to type in VR feels like a waste of time. With this tech I'd be able to work in VR without changing my way of typing. To me, this makes using VR for work practical for the first time. It's actually huge, and if it works well it's, to me and others in similar situation, potentially life-changing tech.
I thought the same thing, so I disabled the backlight on my laptop's keyboard. After accidentally turning it on in the dark a month later, I will never go back to a non-backlit keyboard.
It's hard to place your hands on the keyboard in pitch black or when you're in a weird position like lying on your back with the laptop on your chest. If you've ever partially closed your laptop so that the monitor would shine on your keyboard so you could find 'f' and 'j', you know exactly what I'm talking about.
And also TFA says "for a true typing experience you need to see your hands, and we’ve created a way to use the Vive’s existing tracking to do that".
The way I know where to place my fingers on a dark keyboard is by feeling for some raised bumps which are on the f and j keys on my keyboard. As long as I've got those, I can always easily put my fingers in the default position on the home row, and from there it's easy.
As long as I can use such markers to position my hands in the proper initial position, I really don't like to have a lit keyboard in the dark. It's distracting and annoying.
If your keyboard doesn't have such bumps it's easy enough to make them either using some craft materials you can stick on the keyboard (like some tape even) or by cutting small notches in to it using a file or a dremel tool.
My dad had a keyboard with those bumps rubbed almost completely off. Occasionally I would need to use the computer attached to it and found it really hard to use. I'm so accustomed to unconsciously placing my fingers correctly using them that without them I was really ineffective with the computer and found it frustrating.
A few ideas mentioned in the article sound interesting:
“But VR can transform and augment that trusty keyboard – so easy to disregard – into a contextually aware companion for whatever application you use, becoming a palette for your creative workflow, dynamically providing you with any commands and shortcuts you need.”
I touch-type in the dark without seeing the keyboard just fine.
But that's on a physical keyboard that has something which virtual keyboards lack: tactile feedback.
Tactile feedback is one of the missing pieces of today's VR devices. Without it typing on a fully virtual keyboard is just not going to be nearly as accurate as on a physical keyboard, and providing visual feedback is just about the only thing you can do about it.
If you are playing a VR game where you are using a joystick or some other peripheral, you still may need to use your keyboard for some actions, in-game chat, or simply to pull up a browser and google something while in-game.
If you primarily have your hands on a joystick or game controller, you would have to take your VR headset off just to figure out where your keyboard/mouse are.
Strictly off-topic, but has anyone else noticed on the Yoga Book (the 'Halo Keyboard') that they screen-printed the physical locator nibs from the F and J onto the flat keyboard :)
This could be done, but I expect it to be usually pretty annoying. Most of the time when I type I don't want to be looking at my hands or the keyboard. Instead, I want to be looking at either the text I'm typing or something else (like some other text, or a video, etc). Having a silhouette of my hands and a keyboard superimposed over what I'm looking at will just be distracting.
That said, I don't really have a better suggestion. Typing in VR is just going to be plain painful for anyone who can touch type with any speed on a physical keyboard.
I could see the hand tracking being useful for people who don't know how to touch type (and have to look at their keyboard and fingers) and who need to type while using a headset.
I don't think non-fanatics are willing to go through such a painful process for a large field of view. Maybe it would be more profitable to start working on a 360-degree projection screen.
I think this issue is caused partly by the design of modern keyboards. Back in the day, keyboards would have clear, lasting, physical markings on the F and J keys and the 5 key on the numpad. They also had special keys that were easily recognized for caps lock, return and so on. This meant you could put your hands almost anywhere on the keyboard and always know where you were. Finding the home row without looking took a fraction of a second.

Another feature that keyboards used to have, which is becoming increasingly uncommon today, is grouping of the function keys into fours. This meant you could always press the right function key without looking. Modern keyboards more often than not don't do this, and reliably pressing the right function key without looking is nearly impossible today.
Old keyboards were (generally) designed to allow for touch-typing. Modern keyboards are (generally) designed to be looked at.
I honestly don't understand how this is difficult for people. I can touch type without looking, on a keyboard which is moving randomly. It's not a parlour trick, it's just what's supposed to happen if you type much at all. If you can't touch type in the dark, there's something wrong with your skill or your equipment.
I can touch type, but not everybody can. Look at non-software-developers using computers: usually they look for keys and are slow. They are the huge majority of the market. I'd be like them if I weren't using a keyboard all the time.
Btw, this is the VR keyboard from the anime Dennou Coil (2007). Nobody was using real-reality (pun intended) laptops there, only VR ones. I guess they were running on some AWS-like cloud.
[1] https://d201n44z4ifond.cloudfront.net/wp-content/uploads/sit...