What I think will be the killer feature over the Oculus Rift is that Project Morpheus has 60 virtual speakers, which (from the liveblog commentary) could lead to audio taking a bigger part in gaming.
I'm also looking forward to this tech being used for concert demos. I'd love to strap this on and listen to AC/DC's Live At River Plate, all the sound and the atmosphere of being there!
Shame that "no PC support has been announced" [1]. But some enterprising hacker'll get it going, no doubt.
> 60 virtual speakers

What does that even mean? It sounds like clever marketing talk.
I bet they just have regular stereo and software. Here, take a pair of good stereo headphones and try this 3D demo. And don't forget to close your eyes.
I would agree. With sound, you perceive sources as close if they have plenty of bass and far away if that bass content rolls off. You also get phase smearing, where the sound waves coming towards you interact and mush together. A perception of closeness can be achieved by reversing this process, as BBE does in its products (I have a BBE BMax-T bass preamp with this built in; if you take the top off, it's a little chip doing it).
I would imagine that they're just processing audio to create a logical spherical arrangement, and sounds placed at certain points in that sphere have equalisation applied to give the psychological effect of being far away or close; in effect, most things can be done with equalisation!

For example, guitar effects boxes these days commonly "simulate" cabinets. Each speaker cabinet has its own resonant frequency, and the speakers in it also have their own (far from flat) frequency responses, hence the reason 4x12" Marshall cabs sound a certain way. The speakers also have sharp treble roll-offs, so these cabinet simulators just apply an equaliser to the output signal that matches the frequency response of a particular cabinet.

This is also why signals taken from an amplifier's "direct out", which lack this EQ, sound fizzy: the signal doesn't have the same response as the speaker (which happens to be really inefficient at converting the input signal into sound), so a guitarist comparing the speaker sound to the "direct out" will think the "direct out" sounds rubbish! In reality, the sound they are expecting to hear is the massively coloured sound of the speaker, while what they're actually hearing is the truer-sounding DI tone.
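To make the "distance is mostly EQ" idea concrete, here's a minimal toy sketch (my own invention; nothing to do with BBE's actual chip or whatever Sony is doing): a one-pole low-pass filter whose cutoff drops with distance, so farther sources lose treble and sound duller.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    # Simple one-pole low-pass: rolls off treble, crudely mimicking
    # how high frequencies fall away with distance (or a guitar cab).
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def distance_cutoff(distance_m, base_hz=16000.0):
    # Hypothetical mapping: farther sources get a lower cutoff,
    # so they come out duller / "farther away". Not measured data.
    return base_hz / (1.0 + distance_m)
```

The specific mapping in `distance_cutoff` is made up for illustration; the point is only that a frequency-dependent roll-off alone already shifts perceived distance.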
(Your ears are far more responsive to middle/treble frequencies than to bass, but bass gives the feeling of power and treble the perception of loudness, hence the reason most "rock" EQ presets on hi-fis and MP3 players boost both, so you think "Wow, this is really powerful and LOUD!")
BTW, that's a great demo. Haven't heard that in ages.
The virtual barbershop is a binaural recording made with special microphones. Presumably they're saying '60 virtual speakers' because they're modelling binaural 'ears' from 60 locations.
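Pure speculation on my part, but "60 virtual speakers" might look something like this in code: a ring of virtual source directions, each rendered to the two ears with crude interaural time and level differences. The ITD here uses Woodworth's classic spherical-head approximation; the gains and head radius are illustrative, and a real HRTF would also filter each ear by frequency and elevation.

```python
import math

HEAD_RADIUS = 0.0875    # metres, rough adult average (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_rad):
    # Woodworth's spherical-head model: interaural time difference
    # for a distant source at the given azimuth (0 = straight ahead).
    theta = abs(azimuth_rad)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def render_binaural(mono, azimuth_rad, sample_rate=44100):
    # Toy ILD (quieter far ear) + toy ITD (integer-sample delay in
    # the far ear). Returns (left, right) sample lists.
    delay = int(round(itd_seconds(azimuth_rad) * sample_rate))
    pan = abs(math.sin(azimuth_rad))  # 0 = centre, 1 = hard side
    near = list(mono) + [0.0] * delay
    far = [0.0] * delay + [x * (1.0 - 0.6 * pan) for x in mono]
    if azimuth_rad >= 0:              # source on the right
        return far, near              # left ear is the far ear
    return near, far

def virtual_speaker_azimuths(n=60):
    # My guess at the marketing claim: n evenly spaced virtual
    # source directions, each binauralised as above and mixed.
    return [2 * math.pi * i / n - math.pi for i in range(n)]
```

Again, this is a guess at the concept, not anything Sony has documented.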
I bought a Razer Tiamat a couple years ago (a true 7.1 surround sound headset) and it was one of the best purchases I've ever made. Imagine living your life with one eye closed, then suddenly having the opportunity to open both eyes, and how surprising it would be to realize that you have depth perception, etc. That's sort of what experiencing true surround sound is like.
It requires a bit of tweaking to get configured properly though, which led to a lot of negative reviews or people leaving reviews saying they weren't impressed.
Experiencing actual 7.1 surround sound headphones (rather than simulated surround) is kind of like a mini version of experiencing the Oculus Rift for the first time, in terms of the "wow" factor when you finally get it working properly. What Rift is to eyes, the Tiamat is to ears.
The reason headphones have had just two speakers till now is that people have two ears, so we've incorrectly assumed that's all that's needed. But human ears are built to capture 3D positional audio, and two speakers mean there are only two positions audio can come from. 7.1 headphones simply blow everything else out of the water. It's a very visceral experience that's hard to articulate.
The takeaway is that having a headset capable of physically producing sound waves from seven different directions at once is one of the coolest experiences a gamer can have. For casual gamers, it enhances the experience and immersion of any game. For competitive gamers, you can hear people sneaking up behind you, so you gain a real competitive advantage.
All of this means that real (not simulated) 7.1 surround sound is a valuable idea which till now has seemingly been overlooked by the gaming industry. The first industry player that delivers a true positional surround sound experience to the masses stands to profit handsomely, whether it's Oculus or Sony or someone else. So build it!
(That said, I have no idea what 60 virtual speakers means, but I wanted to share my experience with true positional surround sound. Also, the surround sound headset works fine in tandem with the Rift. So until the Oculus guys realize how important 3D positional audio is and ship their next product with a pair of surround sound headphones, you can get the same effect right now from the Tiamat.)
> The takeaway is that having a headset capable of physically producing sound waves from seven different directions at once is one of the coolest experiences a gamer can have.
I'm pretty skeptical as to whether this really requires special headphones. Amazing-sounding recordings made with a binaural head can be played back on normal headphones, after all. It seems more like a signal processing problem.
Sort of analogous to the way Creative used to sell overwrought, overpriced hardware for creating sound effects with EAX when the CPU and signal-processing libraries could have been used to serve the same purposes.
A recording from a binaural head cannot account for the different shapes of pinnae that humans have. People's outer ears are shaped quite differently, so each ear has its own transfer function, which depends not only on frequency but also on the direction of the sound.
I don't know: the stuff I remember hearing was pretty amazing, and there wasn't an ear-measurement step in the listening process. If what you say is correct, I wonder how close the match has to be in practice. (But, to the point regarding the magic headphones: this is something that could happen in software if you're simulating the whole thing.)
At least EAX was DSP-accelerated, which meant effectively no latency. I used to use my Sound Blaster Live as a guitar effects pedal. Doing effects processing on the CPU was entirely feasible with the hardware, but the stock Windows drivers added about two seconds of latency to the input, useless for jamming. So I'd just plug my guitar into Line In, go to settings, and play with different EAX environments. Eventually I found the kX Project, which provides non-sucky drivers for EMU10K1 cards, but I remember how awed I was when I first installed Linux and found that Linux sound card drivers were non-sucky by default. I think that was formative...
Not the parent, but I had no idea that existed myself. What's the actual cause if vision in either eye works fine otherwise? The wikipedia page (http://en.wikipedia.org/wiki/Stereoblindness) wasn't too helpful.
For me, my eyes were slightly crossed as a kid. I had a couple of eye surgeries to align them correctly, but the pupils were not perfectly aligned vertically, causing slightly conflicting images from each eye, and thus causing the brain to use the image from one eye rather than fusing the two, as people with stereo vision do. Both eyes still work all the time; the non-dominant eye at any given moment just provides peripheral vision.
I don't remember the source (I read this over six years ago), but there was a case where a woman born with stereoblindness suddenly gained depth perception.
> Imagine living your life with one eye closed, then suddenly having the opportunity to open both eyes, and how surprising it would be to realize that you have depth perception, etc.
Interestingly, some people with faulty depth perception have had that experience with Oculus Rift. I look forward to trying it out myself.
> What I think will be the killer feature over the Oculus Rift is that Project Morpheus has 60 virtual speakers, which (from the liveblog commentary) could lead to audio taking a bigger part in gaming.
With old fashioned two-speaker headphones and binaural recording, you can already achieve a surprising level of audio spatialization: https://www.youtube.com/watch?v=8IXm6SuUigI
Currently, headphones are pretty much required hardware for VR, because the positions of the speakers relative to the ears are fixed. Achieving realistic binaural audio with external speakers isn't currently feasible, because you'd need to compensate for room reflections and interference, let alone keep virtualised audio sources positionally invariant through head rotations and translations.
I'm not sure how the "60 virtual speakers" would work (it sounds like marketing terminology), but currently, the state-of-the-art in immersive in-game audio involves constructing an acoustically realistic head model, and harnessing the raytracing abilities of game engines and graphics cards to apply a material-based frequency attenuation curve at each bounce.
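A toy sketch of the per-bounce attenuation step described above. The band layout and material coefficients here are invented for illustration (real engines use measured absorption spectra), but the accumulation logic is the core idea: each surface a ray hits scales each frequency band by that material's reflectivity.

```python
# Hypothetical per-bounce reflectivity (fraction of energy kept)
# in three coarse bands: low / mid / high. Values are made up.
MATERIALS = {
    "carpet":   [0.95, 0.80, 0.40],  # soft: soaks up treble
    "concrete": [0.99, 0.98, 0.95],  # hard: reflects nearly all
}

def path_attenuation(bounces):
    # Multiply each band's gain by the surface reflectivity at every
    # bounce along a traced ray path, the way a game engine's audio
    # raytracer might accumulate a frequency attenuation curve.
    gains = [1.0, 1.0, 1.0]
    for material in bounces:
        refl = MATERIALS[material]
        gains = [g * r for g, r in zip(gains, refl)]
    return gains
```

So a ray bouncing off carpet then concrete ends up much duller (low high-band gain) than one bouncing only off concrete, which matches the intuition that carpeted rooms sound dead and tiled rooms sound bright.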
Yerp. Games just don't simulate it very well. People are hyperfocused on graphics and nobody pays attention to audio.
However, I seem to recall Creative acquiring some audio company that had products with superior audio-positioning technology, and basically axing that product in favour of EAX or whatever it's called? Could be wrong.
Wow, it's so great that patents are here to protect the little guys from the big guys. Imagine, without patents, Creative might have just ripped off Aureal instead of ripping them off and suing them to death.
I want one of these things for software development. Being able to have one giant screen which encompasses your field of vision could work wonders for developers who feel constrained by monitors. Add in some sort of motion sensor and you can arrange your items on your giant virtual window with your hands. Throw a camera on the front of it so that when you need to look down at some paperwork, you can still see it.
My thoughts exactly. A portable system powered by a pocket-sized computing device, or a more powerful desktop, or the portable device dialling into the desktop. Perhaps, rather than a camera, a simple (Blade Runner-ish) SLR-style screen flipper to show you real life in an instant, with no extra battery power required.
I can also see a glove, similar to the early Nintendo efforts, with a keyboard that slides down over (or under) the hand, pivots 90 degrees and locks out between both hands. Speech recognition is of course great and all, but slightly humiliating to use (from experience), getting progressively more so the more people can hear you talking to a computer. What we really need is either the sensor you mention, for an "air keyboard", or a brainwave reader that can intercept and interpret thought. Obviously one of those is more workable than the other, but no harm in planning ahead.
They look very interesting. Steve Mann is on their team I see. He looks very amusing with his Borg-style facial additions and his cheery grin.
Hopefully the product will be good. But I can't see myself using one, sitting there waving my arms around like a lunatic. Instead of typing on a virtual keyboard, why not just use a laptop or netbook or something? It seems to be built to fix a problem that doesn't exist.
[1] http://www.dualshockers.com/2014/03/18/sonys-ps4-virtual-rea...