New Oculus Rift dev kit goes on sale for $350 today, likely ships in July (polygon.com)
282 points by amelim on March 19, 2014 | 152 comments



I can't help but laugh with glee a little when watching the video at http://www.oculusvr.com/dk2/ , specifically at the part where John Carmack comes on.

I'm still so excited that he joined Oculus VR. When I see him in a video like this, it's like a giant "100% Guaranteed" stamp that this whole thing is going to eventually be everything we could possibly hope for.


I had a similar reaction.

They say in the video that they brought together the "best developers in the industry" which sounds like generic marketing embellishment until they cut to Carmack. Hard to argue that statement at that point.


Any reservations I would have had about the Rift are totally destroyed by the fact that Carmack is on the team.

I held off on the first dev kit, mostly because I didn't have a decent gaming rig and because of the lower resolution, but I just bought a new machine, and I couldn't give them my money fast enough to get in on Dev Kit 2.


A (likely terrible) idea that just occurred to me-

How cool would it be to use a VR headset to help you navigate a vehicle? Yes, I mean wearing this while driving, essentially.

Given a fast/good enough sensor/input system, you could scan your environment, know the dimensions of your vehicle (I'm mainly thinking boats and huge trucks here), and be able to "see through" massive blind spots on your vehicle to enable tighter maneuvering.

Imagine a boat. Sometimes with a large enough boat it can be hard to see all around it, and frankly, it's almost impossible to see under it. But given depth finding/scanning and sensors, you could see the virtual hull of the boat and anything under the water, making it much easier to navigate through areas with dangerous stumps or other underwater hazards.

Anyway, this is kinda silly... but neat to think about. Also, ordered one :)


My understanding is that this "see through the vehicle" capability is built into at least some helmet-HUD/Helicopter combinations that the military has.

If it suffices for an attack helicopter, I'd say that the idea probably has serious potential for automobiles as well. I wouldn't expect to see it come to standard consumer cars yet, but for commercial drivers (particularly truck drivers, who have massive blind spots) this could be invaluable. Imagine being able to see through the back of your box truck while backing up!


Yep, that's pretty much what I'm thinking. It all started with thinking about how terrible an idea it would be to wear this while motorcycling...

Someone at Google with access to one of the self driving cars should see what this would 'look like' from a passenger perspective with the data.


Feasible to cross dev with glass? Hmmm... I think I have a new project to work on!


I've joked that the ultimate tech hipsters need to use the Rift + Glass at the same time to look at a virtual Pebble watch on their arm...


Only if they can get the code for that efficient enough to run on the actual Pebble on their arm.


what purpose does Glass serve?


So you can see your tweets ;)


It's so other people can watch you seeing your tweets without being able to actually see your tweets.


The F-35 has six cameras embedded in the aircraft which project images inside the helmet, literally allowing the pilot to look through the airframe.

https://www.f35.com/about/capabilities/helmet


Maybe Erlang will finally get more popular.


I want to be able to see through other people's vehicles.


"FPV Flying" (First Person Video) of an R/C plane or quadcopter is quite similar, although obviously you're not in said vehicle.

The challenges there are mostly around downlink and latency, with analog video being the standard for its favorable latency and signal fade characteristics.

Many flyers do add a few more channels to their existing radio control systems in order to use camera gimbals with head tracking to allow the true "first person" experience you suggest.

Obviously for a real vehicle something like modern in-dash parking systems using stitched camera views to build an immersive environment would be more practical than a gimbal, although software latency is occasionally harder to control than that of hardware.


Actually, this is a different but related area that Steve Mann calls mediated reality. It's essentially a level beyond AR: not only augmenting what you see but also manipulating it (improved human vision is one example).

Meta is trying to achieve this through their AR glasses; Steve Mann is part of the company.


I have had this exact thought for a while - use a bunch of cameras to compose a virtual third-person perspective of your car.

No more blindspots.

As it stands our local CubeSat team apparently want to try high-bandwidth radio - I'm suddenly on that side as long as we include a pair of stereoscopic cameras.


I've been wanting that for months.

I would put one camera in each of the lights, so I would have front and rear stereo vision.

The top of the screen would be forward, and the lower part, or a small window could be towards the back.

Easy parking!


Awesome news... on a related note, John Carmack had a couple of recent, Sony-related Tweets with regard to their Project Morpheus announcement:

"Trivia: I had suggested to Sony that they try to hire Palmer Luckey before the Oculus kick starter." [1]

"Calibrate PS4 VR expectations: a game that ran 60 fps on PS3 could be done in VR (stereo 1080 MSAA low latency 60 fps) on PS4." [2]

[1] https://twitter.com/ID_AA_Carmack/status/446122463747776512

[2] https://twitter.com/ID_AA_Carmack/status/446271995668217857


Could anyone with more background elaborate on what you think Carmack means with the second tweet? The PS4 already outputs 1080p with (varying algorithms of) antialiasing for several of its launch titles. I'd expect some rendering pipeline shortcuts to cut down on the latency that some operations incur and to achieve a constant 60 FPS, but a generational drop in graphical fidelity seems steep.


Many games on PS4 do NOT run at full 1080p resolution; they are rendered at a lower resolution and upscaled. On top of that, several PS4 games don't run at 60 fps either. Performance-wise, the new consoles are not THAT great, which is why the Oculus Rift team has specifically said the new consoles will not be good enough for where they want to take VR.


Carmack has written a lot of detailed information about the latency problem, which is much more demanding for VR, and about how to mitigate it: http://www.altdevblogaday.com/2013/02/22/latency-mitigation-...


Stereo rendering requires rendering the scene twice, once for each eye. Off the bat, this is almost twice as computationally intensive as rendering it once.


It's rendering it twice, but at only half the resolution.

Thinking about this more, the closest analog to stereo rendering would seem to be games with a split-screen mode. Which, in my experience, often does entail a performance hit (though certainly not a generation-sized one).

My guess is that how much a second viewport affects performance is highly engine-dependent. Ironically, the one game I remember having the biggest delta from single-screen to split-screen is the Dreamcast version of Quake III: I distinctly recall the sharp drop in poly-count on the otherwise curvy rocket launcher.


Rendering the scene twice has overhead, so it would take extra time even if it was the same number of pixels total. But that's far from the only reason VR requires more power.

Due to the distortion caused by the lenses the scene must be rendered at about 1.4x the normal resolution. Then there's a warping step that performs the inverse of the lens distortion, which is an additional cost. Also, good VR requires rendering at 90 Hz, not 60 Hz, so that's another 1.5x. Furthermore, frame tearing artifacts and FPS hiccups are much worse in VR, so you need extra headroom to eliminate them even in worst-case scenarios.
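To put rough numbers on that (a back-of-the-envelope sketch in Python; the 1.4x render-target scale and the 90 Hz target are the figures from the comment above, while the 1080p/60 fps baseline is an assumption):

  # Illustrative arithmetic only: how much more pixel throughput VR needs
  # compared to an assumed flat-screen 1080p/60 baseline.
  base_pixels = 1920 * 1080            # flat-screen 1080p frame
  vr_pixels = base_pixels * 1.4 ** 2   # ~1.4x per axis for the lens warp
  base_hz, vr_hz = 60, 90              # typical target vs. "good VR" target

  multiplier = (vr_pixels * vr_hz) / (base_pixels * base_hz)
  print(f"~{multiplier:.1f}x the fill-rate cost")   # ~2.9x, before headroom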


I remember being surprised that the bottleneck for some modern devices (like cell phones) wasn't the number of vertices one could push, but rather the fill rate and number of draw calls. Do consoles these days hit performance bounds as a function of the number of draw calls rather than the number of polygons/vertices?

I ask because I can completely understand how doubling the number of draw calls could be problematic in a VR situation.


Mobile devices are often limited by memory bandwidth, and by OpenGL driver overhead. Consoles have very little driver overhead, so number of draw calls is not as big a problem, but state changes are still costly, so doubling the number of state changes hurts performance. I could imagine some clever techniques to avoid doubling the number of state changes (perhaps a geometry shader that duplicates triangles to render both eyes at once) but I don't know how well that would work. VR rendering is still mostly unexplored!
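To sketch why the ordering matters (a hypothetical engine loop, not real SDK code; set_material() stands in for whatever costly state change your renderer has):

  def set_material(material): pass   # stand-in for a costly GPU state change
  def draw(obj, view): pass          # stand-in for a draw call

  # Naive stereo: run the whole pass once per eye, doubling every state change.
  def render_naive(scene, eyes):
      for eye in eyes:
          for obj in scene:
              set_material(obj.material)   # happens 2x per object
              draw(obj, view=eye)

  # Cheaper ordering: bind each material once, then draw it for both eyes.
  # Draw calls still double, but state changes don't.
  def render_shared_state(scene, eyes):
      for obj in scene:
          set_material(obj.material)       # happens 1x per object
          for eye in eyes:
              draw(obj, view=eye)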


  Also, good VR requires rendering at 90 Hz
The DK2 can only do 75 Hz. Anything more than that is wasted.


DK2 isn't good enough. That's why it isn't the consumer version already. Palmer has said in interviews that the consumer version will be higher resolution and higher frame rate. The higher frame rate is required to enable low enough persistence without flickering.


The "stereo" part likely adds significant work (up to 2x), since you're rendering from two points of view.


How are you going to push twice as many pixels without some quality tradeoff?


That's just it, though. It's the exact same number of pixels. It's a single 1080p screen divided in two.


You are rendering from two different angles. It's not the same as taking one rendered frame and cutting it in half.


Number of pixels isn't the primary cost of rendering, unless you are using ray tracing.


You lose some performance with V-sync, which is necessary.


His second tweet makes an excellent point...


Fresh new Oculus Rift Development Kit 2 FAQ is here:

https://support.oculusvr.com/hc/en-us/articles/201835987

Here's an excerpt:

  Q: What is the Oculus Rift Development Kit 2, its features,
  and what does it come with?

  A: The Oculus Rift Development Kit 2 (DK2) is a development
  kit meant for developers that want to create virtual reality
  content for the upcoming consumer version of the Oculus Rift.

  New features:

  - Positional Tracking

  - Low Persistence OLED Display

  - Built-In Latency Tester

  What’s included:
[...]


Apparently it supports the "Linux 12.04 LTS operating system"!


Good catch. I've let the web team know.


Tested.com got their hands on one at GDC 2014. They interviewed Nate Mitchell and did a short review in this video: https://www.youtube.com/watch?v=4d3Wli7s6KY


For those that have the first dev kit, how appropriate is this for a curious engineer who wants to play with Alpha titles? It is hard for me to imagine a scenario where I would actually have the time to develop something, but man, playing with it seems well worth $350.

Am I going to be disappointed in what is available?


I think you'll find a large proportion of people who have ordered the dev kit have done so just for a taste of what's to come. And I'm sure OculusVR don't mind at all because the sales generate additional revenue and PR without all of the responsibility that comes with shipping a consumer product.


Does the latest version make people sick after prolonged use?

How similar does it feel to looking at real far away objects? Is it easy to tell it's not 3D and the screen is flat?

If I tied a passed out "friend" up and put on the Oculus Rift glasses and played a movie of them being thrown out of a helicopter from 1st person, would they be able to tell that it was fake or is there a perceived feeling of it being real?


1. According to media reports, the latest version almost eliminates motion sickness. You can still get motion sickness from e.g. going on a virtual roller coaster that would make you sick in the real world, but you won't get sick just sitting in place (which is why the new demos do just that).

2. Far away objects look pixelated, and very close objects look incorrect because the depth of field is wrong. However, the 3D effect is much better than with any previous 3D technology, and there's an overwhelming sense of real scale that you don't get watching e.g. a 3D TV set. You feel placed in the 3D world and you can easily judge the size of things relative to yourself.

3. If your friend knew about the Rift they would easily be able to tell that it was VR; however there is also a perceived feeling of it being real that doesn't go away just from knowing that it's not. VR "presence" means that you'll dodge an object thrown at you even though you know it isn't there, and you'll be afraid to step out over a cliff even though you know you're on a flat floor in real life.


By (2), are you referring to the fact that you don't have to refocus your eyes when shifting your view between objects at different distances?

I've never tried one, but I'd assume the screen is projected at infinity or close to? This should only be an issue with objects that are closer than 1.5ft or something.


Yes, it's only a problem with objects that are unusually close to your face. It's a small concern and only really relevant for people designing VR content, because with properly designed content users should never notice.


I guess I won't be able to faithfully simulate having to put on my reading glasses to read the cockpit instruments in a flight simulator then... ;-)


I got the chance to play with it a few weeks ago, along with several others in our office.

1) It didn't make any of us sick, but I could see how it still would for some users, particularly if they didn't try to move their body in roughly the same direction as they would IRL. If you have ever sat in a vehicle and watched the outside via a mirror, it's like that.

2) For me, while it was immersive (we had several users scream at things and dance around), it is still nothing like real vision. It still gives you one real focal point (or lack of one at all), which immediately makes it different from real life vision - particularly over long distances.

3) Unlikely. It would probably severely freak them out. Perhaps the suddenness of it would cause enough confusion before they worked it out, but it is still obviously computer-generated graphics. See the focal issue above as an example.

Having said that, it was amazingly fun and I wish I had one now. I'd love to experiment with a 'Virtual Programming' environment. I'll leave what that means as an exercise for the reader :)


I would love a virtual programming environment. If they (eventually) get the resolution high enough, you could use it to replace all of your external monitors, and be able to use your full desktop environment anywhere.


Indeed, that would be awesome. It solves the how-many-monitors-do-you-need debate, and you're no longer restricted by available space or location. It would save money on power, etc. Seems like it could be a win for both the developer and the budget.


There's one significant catch, which is a limited ability to look around by moving your eyes rather than your head.


The screen spans your entire field of view. You can still look around with your eyes only. The only problem is the low resolution.


That's not the case: different headsets have different fields of view, and most, including the DK2, have a FOV that's considerably narrower than the Sensics dSight's. Likely the dSight doesn't cover the full range of eye movement either.

https://docs.google.com/spreadsheet/ccc?key=0ApTBEPEA_odkdHp...


I've been thinking about it for development as well, but I now think that resolution isn't as big a concern as I initially thought.

You'd be able to have a virtual desktop of (essentially) unlimited size. Your frontmost task could be high resolution, while ancillary tasks could be lower resolution, or high resolution but partially outside your field of vision. This wouldn't be opening an IDE to fill the entire screen - it would be a multi-window environment with at most a few primary tasks using up most of the screen real estate. Like Mission Control with real-time zooming in a 3D environment and no distinction between workspaces.


I would love to get over a fear of heights and be able to enjoy some new places on this earth.


Some video games give me that feeling of "death is imminent" when the 1st person character jumps off of a cliff or falls, but it only happens the first time, so there's a good chance that it's possible to help overcome the fear of heights with VR.

I'm horrified about being near the edges of buildings and cliffs, and in my mind it's perfectly justified (less chance of death away from the edge).

http://en.wikipedia.org/wiki/File:CCTV_Beijing_April_2008.jp...

This building also freaks me out when I imagine actually being in it. I'd much rather have a support column under me. If I worked for a company in that building and our offices were on those floors in the middle, I would probably be forced to quit. Oh, and it's in an area known for seismic activity. Nope.


This is an actual in-use therapy approach, and one that will probably become much more widespread now that VR hardware is getting affordable and accessible. (Also used for social anxieties, like a fear of public speaking.)


I wonder if it would be possible to detect from the user's eyes where they are focusing and adjust the focus point accordingly. Do you think that would add another level of immersion?


From what I've heard from the Oculus folks, it won't happen any time soon. Your eyes move way too fast for a VR system to react to them.

It will happen eventually, but it seems like we need cameras/computer vision systems/drawing pipelines with an order of magnitude lower latency for that to actually work.


It's pretty close to happening; they can already track your eyes fast enough to render at higher resolution at the fovea area as your eyes move around:

http://research.microsoft.com/en-us/um/redmond/projects/fove...

Also, your eyes are very slow to focus relative to your eye movements, so it is probably doable now. It still will not feel like real life, just sort of eye tracked depth-of-field. Your eyes will still be focused at infinity, but near things will blur when you look in the distance, and vice versa.
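A minimal sketch of what eye-tracked depth of field could look like in software (an assumed approach, not anything Oculus has announced: scale the blur by the dioptric difference between a pixel's depth and the tracked gaze depth):

  def blur_radius_px(pixel_depth_m, gaze_depth_m, strength=2.0):
      # Focus error is naturally measured in diopters (1/distance),
      # which is why near objects blur heavily when you gaze far away.
      defocus_diopters = abs(1.0 / pixel_depth_m - 1.0 / gaze_depth_m)
      return strength * defocus_diopters   # blur kernel radius, in pixels

  # Gaze at 0.5 m: the distant background blurs, the near object stays sharp.
  print(blur_radius_px(10.0, 0.5))   # ~3.8
  print(blur_radius_px(0.5, 0.5))    # 0.0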


I need this for my two monitor setup. When I look left the app on the left should take focus. That it doesn't has led to much confusion.


That would be an interesting research project: track eye movement and focus. I could see how it would greatly enhance the experience as well as introduce some fun new rendering performance problems to solve.


One alternate possibility is to use a lightfield display. The problem with these is they are currently extremely low resolution and require consistent rendering at all focal lengths.


Hrm. That's interesting, I've never thought about the focal point issue before. Do you know of anyone working on this problem or any general status on this in "the field"?


The good news about sim sickness is that most people get over it with practice. Not only that, but there are many reports from Oculus DK1 users that warming up to the Rift cured their motion sickness outside of VR! As in "I can read in the car for the first time!"

The bad news is that while better tech in the DK2 and better design in new software can significantly reduce motion sickness, I doubt we will ever completely eliminate it.

The really bad news is that nausea is funny. It's great for catchy headlines and quippy comments. I'm really afraid that the Rift will get a reputation as a "vomit helmet" and that will lead the masses to dismiss it. That would be a tragedy.

It's really important PR for VR to get the word out: If you feel sick, stop. Don't try to push through, you'll make it really bad. Try again much later. With practice, it will get better and better. When you get over sim sickness, all kinds of awesomeness awaits!


The motion sickness is not due to the product itself but to mixed sensory perception: the brain receives contradictory signals from different senses, e.g. visual movement perceived by the eyes vs. a lack of movement sensed by the inner ear. Therefore, a newer product version is unlikely to address this.

Regarding your helicopter question: if you were tied up and blindfolded and somebody moved you around, would you be able to feel it? In the same way, your friend would "see" that he is falling through the glasses, but he would "feel" that he is actually not.


> The motion sickness is not due to the product itself but to mixed sensory perception: the brain receives contradictory signals from different senses, e.g. visual movement perceived by the eyes vs. a lack of movement sensed by the inner ear. Therefore, a newer product version is unlikely to address this.

Citation?

There are several comments on this page that conflict with what you're claiming -- that motion sickness has not been reduced and, for some fundamental physiological reason, can't. First-hand accounts are suggesting otherwise.


I would argue that your brain probably believes what it sees over the sense of touch, at first at least. It would be confused about why the body didn't feel anything, but that could be explained away by being paralyzed or on some painkiller.


The new version has positional tracking, so it should reduce the eyes vs. ears problem.


Depends on how awake your soon-to-be-former-friend is, but the pressure of the device on his face would most likely give away that it's fake within a couple of seconds.


Well, in a movie he wouldn't be able to look around, defeating the purpose somewhat. So a game scene or a 360-degree movie would be needed.


Those are some sick thoughts you have there.


It's just a thought experiment to gather some information.

How would you better word that statement to get the same information back? One of the keys is that the person wearing the goggles doesn't know they are wearing goggles and is in a panicked, rushed scenario with so much going through their head that they don't have time to consider, "Wait, did someone put a virtual reality device on my head?".

Also, bumping into things, or the video moving while all your other senses say something different is happening, would be clues that you are possibly in a simulation. Being tied up takes away that freedom of movement (and is safer than drugs that paralyze), so I just made up a random scenario that was slightly humorous and that anyone could easily imagine.

I do hope that nobody actually tries this; it would be funny but could be considered torture. I've heard of the media willingly blacking things out because they don't want people copying them. It would be an interesting article if anyone wants to research that, though I'm not sure whether that's ethical.


Ordered! Being as this is a developer kit and I am a developer I guess I better develop something for it :-)

Just super happy to be getting my hands on one... I think this thing is going to change the world.


No doubt it's going to be fantastic.

I used my DK1 to death and tried every demo I could. Palmer is a pretty frequent poster on Reddit (they even got an early DK2-order heads-up, before the e-mail went out), and it became fairly obvious a big announcement was coming at GDC. On that basis, I offloaded my DK1 for $500 here in the UK just last week.

The positional tracking and the low-persistence display are what we've all been screaming for, and IMO their absence is what's made most people I've seen feel sick.


Maybe a stupid question, but: I'm not a game dev nor even a gamer, but I kind of want to buy a dev kit. How well will it work with a rMBP, and are there lots of fun demos already available?


Judging from my experience with the dev kit, even on my gaming rig there was a little bit of trouble running some of the games fast enough.

Your machine needs to be capable of running your game at 1080p 60fps. Not sure how the rMBP stacks up, but for most of the demos you should be able to play on low.

I found the best demos were more about an experience than games. The novelty wears off after a while, but I think when the real games come out, that's when things are going to get interesting.


There are tons of fantastic demos and experiments out there already, more than I ever thought would be at this point.

The retina MacBook will be passable but will have trouble maintaining 60 fps with many of them.


Do you guys think the consumer version will launch this year or in 2015?


All signs are pointing to 2015.


2015 would make sense. People who really want it can get it now, and regular consumers will have their socks blown off when/if gen 3 comes as a consumer version.


I hope so! I really want to get the dev kit but don't want to spend that much money if the consumer version is released just a couple of months later.


Yeah, it will definitely be better off being released well after it's had time to be polished and, you know, has games.


At SXSW interactive there was the suggestion of 18-20 months.


My guess is mid to late 2015.


Generally not an early adopter, but after backing, using and being blown away by the original kit this was an easy decision.

Interestingly, the checkout page reflected my Oculus Store Credit discount, but the confirmation email I received did not.

Can't wait.


How much was your discount?


Kickstarter purchasers got around a $30 discount because it was supposed to include Doom 3 BFG with VR support, which didn't end up being ready in time.


It's beyond commendable that Palmer et al. seem to have no problem at all employing people smarter than he is wherever they can be found and enticed away from what they are doing. I've gotta wonder, though: if he's looking up at everyone around him and seeing not only stellar talent but stellar ambition and ego, how will he stay on top of it?

I've got a DK1 and had a whole lot of fun with it despite its laughable resolution but as much as I want to play with the DK2 and follow application development (which is all I'm capable of) I'm going to await the store shelf version. I think. :-)


I don't know if you are praising Luckey or calling him just a figurehead. Don't underestimate his intelligence, he is a really smart guy.


Praising, for sure. I was afraid my comment might be taken as an underestimation. Not so, I'm just blown away by the extraordinarily accomplished people he has brought on board and know that, smart though he may be, many of them have accrued far more impressive technical track records.

His early work demonstrated potential with contemporary, off the shelf technology along with a visionary template. His growing team is taking it so much further. I consider that kind of talent acquisition and management a form of genius.

I just expect it to be a really tough herd to manage in a collective fashion and for it to get increasingly difficult as time makes room for the inevitable ego expansions and the elbowing for power, influence and credit. But then it also appears that he has placed some very alpha managers at the top of the executive team.


My dream device:

Take the Razer Edge - a little 10" gaming tablet. Create a keyboard peripheral that can be attached to the Edge at any angle - it can be used as a laptop or a tablet, or:

Put the keyboard next to the tablet, laid flat. Then emulate mouse input with the touchscreen - a full 10" mousepad-like area for your fingers to simulate a mouse: your middle finger is the mouse position, and you can l-click and r-click with your ring and index fingers, a-la Magic Trackpad. Boom, we've got a lapboard keyboard/mouse.

Then use the Rift with that. The ultimate portable PC gaming experience.
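For fun, here's roughly what that touch-to-mouse mapping could look like (a hedged sketch; the three-finger convention and the touch-event format are both made up for illustration):

  # Each touch is (x, y, tapping): position plus a hypothetical flag from
  # the touch driver saying the finger just tapped.
  def touches_to_mouse(touches):
      if len(touches) < 3:
          return None                        # need all three fingers down
      # Sort left to right; assumes a right hand: index, middle, ring.
      index, middle, ring = sorted(touches)[:3]
      return {
          "cursor": (middle[0], middle[1]),  # middle finger sets position
          "left_click": index[2],            # index finger = left button
          "right_click": ring[2],            # ring finger = right button
      }

  print(touches_to_mouse([(120, 80, True), (160, 75, False), (200, 90, False)]))
  # {'cursor': (160, 75), 'left_click': True, 'right_click': False}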


Dev Kit in the news: http://techwatching.com/page.php?i=22975

vs Sony: http://techwatching.com/page.php?i=22780

Oculus' focus is on latency; Sony's looks like it's on packaging. At least Sony has cut its lead time on cloning others' products, compared to its "Move" following the Wii.


sanitize GET variable!


This is one of those rare moments where reading Hacker News is not just worthwhile but makes me really, really happy. I ordered mine!


Does anybody have experience using the Rift with the Ibex window manager[1] (esp. on OS X)? Or another similar solution? I'm very interested in using it as a gigantic monitor!

[1] http://hwahba.com/ibex/


> "you now connect the headset to your computer using a single cable that includes an HDMI and USB connection"

My computer only has a mini-displayport port. Does anybody know if a minidp->hdmi adapter would be suitable for this HDMI/USB pairing?


I was able to get the DK1 working using an HDMI -> regular DisplayPort adapter for my ThinkPad X220 on Ubuntu 13.04.


I can't find anything that definitively states it, but from how it's worded it sounds like it's a single cable that just has both HDMI and USB connectors on the end, not some sort of weird custom connector that tries to send both over the same set of pins.


I assume this means there is one cable that splits into an HDMI and a USB connector. It will work with any HDMI adaptors.


I haven't had any problems connecting my DK1 to my MacBook with a mini DisplayPort -> HDMI adapter.


I don't know what the center of gravity is for that thing, but at a glance, what about the idea of less (or less tight) strap, more counter-weight on the back?

P.S. Although that would represent more mass to accelerate/decelerate with head movements...



> Fortunately, Gabe [Newell, Valve's managing director] just let us have it all. It wasn’t exactly easy to get it past the lawyers, but we were able to negotiate, free and clear, all the technology

That was really nice. Not many companies would do that.


Am I the only one who thinks the motion sickness is something good? Like, "it feels so real that you get motion sickness" and that the new version feels less realistic. I hope I'm wrong though!


I'm pretty sure the motion sickness is caused by a conflict between what you see and how your brain knows your head is moving.

Even a slight delay between your head moving and your eyes seeing a reaction can cause nausea.


Recently read a great working theory of motion sickness: when your brain can't reconcile your inner ear's sense of "down" or motion and your body orientation with your vision's perception of down or motion, it concludes that one or the other is hallucinating because of something you ate, and the nauseous reaction is there to expel the poison.


The vertigo isn't because it feels so real, but because it doesn't, and your brain knows the difference.


The motion sickness isn't just because it looks real; it's because of small mistakes, such as high persistence and latency.

Imagine that every time you move your head, the image not only takes time to appear but is also a little blurred.

It's kind of similar to being a little drunk while sober, so the effect is really awkward.


I don't get motion sickness moving through the real world; do you? If so, you should have that checked out. Therefore, motion sickness in a virtual world does not make it seem more real.


It sounds like you haven't used an Oculus. There's nothing fun about VR sickness; it just makes you aware that you're not really there.


Realism isn't always good. Thinking that motion sickness is good just because it's realistic is like thinking that the ultimate VR experience will cause debilitating pain whenever your in-game character gets shot. Some experiences are better left out.


This is the moment I have been waiting for, for more than a year!!!


Any info as to what the input lag on the latest dev kit is?


I guess a lot of people won't buy a VR system if they get forced to set up a camera for it.


I guess they won't "need" to set up the camera, but it is recommended, as it's there to help with head tracking. Your movements inside the 3D environment will feel more natural, with more degrees of freedom.


Too bad it's a camera by Facebook now. No, thanks.


Has anyone used it for coding?


The DK1's resolution is way too low for reading large amounts of text, and I'm somewhat skeptical that the DK2 is good enough to be used as a monitor replacement.

Different applications that place less emphasis on text may be a better fit for VR - think Photoshop and AutoCAD for a start.


This. There is an article by Michael Abrash that goes into good detail about this:

http://blogs.valvesoftware.com/abrash/when-it-comes-to-resol...

quoted:

"Given which, the obvious question is: how high does VR resolution need to go before it’s good enough? I don’t know what would be ideal, but getting to parity with monitors in terms of pixel density seems like a reasonable target. Given a 90-degree field of view in both directions, 4K-by-4K resolution would be close to achieving that, and 8K-by-8K would exceed it."


I was just thinking about this the other day. How weird would it be to have a technical interview for a job where you and the employer both hook up via OR, walk up to a virtual computer, and start coding at a computer inside the game?

Things are about to get crazy.


I'm interested in this too. My guess is you'd really need good, light haptic gloves to make this work... certainly you will need the same sort of precision and physical feedback you have now with a physical keyboard, even if you're not primarily using it for traditional typing.


why not just use a real keyboard?


Plus 55 USD shipping and about 80 USD VAT when shipping to Germany -- ouch.


Not too bad if you have any remotely serious plans of developing for it.

Personally, I'll be waiting for the second generation public release before seeing how my badly unbalanced far-sighted vision handles the experience. I'm happy to give them plenty of time to sort out all the little problems and upgrade the hardware.


I'm in the UK and I had to pay $35 shipping and $77 tax, bringing my total order to $462, or about £277.89.

I only have a hobbyist/tinkerer's interest in developing anything for this, but I figure that (in my opinion) the Rift is the most exciting piece of technology out right now, and I'd love to have a play about and see what I could achieve with it.

Also, I've never even so much as tried any VR before, so I reckon the price is cheap for the chance to experience it and see what it's all about. It might be a total letdown - and I'm ready for that - but at < £300 I figure it's worth a punt.


I'm being quoted $22 shipping within the US. Seems really excessive.


$55 USD for Ontario, Canada


What's this missing that the consumer version will have?


From what I've heard, at least a higher resolution, 90 Hz display.


They'd indicated that they're working on some kind of integrated audio as well.


Anyone successful in using paypal checkout? Failing for me.


Failed for me as well, decided to just put the credit card in.


There were some issues in the order processing system earlier this morning, but PayPal should be working now.


credit card is failing for me as well..


Any idea why they charge sales tax? (just curious)


Some companies cover sales tax for their customers or use some trick to avoid it; some don't. Some states require sales tax on online orders. See http://www.amazon.com/gp/help/customer/display.html?nodeId=4...


They're probably legally required to, if they have a nexus of business in the state you're buying from.


Is DK2 shipping in July or consumer?

Not entirely clear.


DK2 is expected to start shipping in July. No date has been set for the consumer release, but people seem to expect it either at the very end of this year or sometime in 2015. Any dates for the consumer release are pure speculation at this point, though.


Any particular languages you should be good at to deal with this dev kit? My dad codes a lot in C/C++; I want to buy him this as a surprise gift.


The Oculus SDK is C++, but I'm sure there are bindings for many languages. Probably the easiest way to get started is with C# in Unity. If anyone is feeling adventurous, you can even access it in JavaScript! https://github.com/DanAndersen/cupola


If I recall correctly, a devkit comes with a limited license key (1 year?) for Unity 3D. That was the case for DK1.


From the FAQ: https://support.oculusvr.com/hc/en-us/articles/201835987

Q: I received a Unity Pro Trial code with my purchase of the original Oculus Rift Development Kit. Will I receive another code after my purchase of Oculus Rift Development Kit 2?

A: There are no plans to offer a Unity Pro Trial code with Oculus Rift Development Kit 2 purchases at this time.


so how many % off is the sale?


I can see the confusion of "on sale" versus "for sale", but they're both commonly used terms to represent the ability to purchase.


So is selling stuff before it actually exists the new norm?

I get it when you have a Kickstarter, but when you're already established, doesn't it come off as a bit shady and scummy?

Are they low on money and desperately need to cash in ASAP? Are they worried their product will be poorly reviewed b/c they aren't confident in the quality, so they're trying to lock in buyers?

EDIT: I'm just putting out the question: "Why do they have a preorder system? What's the rationale behind it? Why not just announce it when it's available?"

Preorders are a way of locking in buyers. You can trick a few people into buying it who might otherwise not have bought it when it was actually released (maybe due to negative reviews, or a change in their financial position). Except... why would you ever want to do that with developers? So why are they trying to do that? I don't get it.


It's a dev kit. Things have always been done this way. Nintendo, Sony, and Microsoft do it too. It just typically costs thousands of dollars and you have to already have a business relationship with them. So you rarely hear about it. Most PC guys hate this model, hence this...

Example: http://www.retrotrader.com/catalog/product_info.php?products...

If consumers want to be dumb and ignore everything that states very clearly that this is a product for developers, that's their business.


Why would you bitch about it? Why couldn't you wait until it becomes available and then decide if you want to buy it?

And yes, selling things before they actually exist has been the norm for a while now.

I hope you are not too confused since you got out of your 1990s time capsule.

Edit: And as for why they would take preorders... Because they can, because people are willing to throw money at them. Last September the preorder queue was 50k people deep if you wanted to get in line. They would be mad not to take preorders, as it helps them keep costs down (expected inventory; it is easier to get a loan from a bank if you have N widgets already sold, ...).


When you're manufacturing things, preorders can also be helpful to calculate the size of your initial run. (I don't know if this is why Oculus is doing it, but they wouldn't be the first.)


They are only taking a $50 deposit, and they recently got a lot of money from a VC. You don't call Apple out on preordering the latest iPhone, and I see this as about the same.



