I think Microsoft re-branding Augmented reality as "holographic" is hugely impactful and, with the new wave of AR and CV products that are coming onto the market [1] might be the thing that makes people take it seriously.
It's just so much easier to explain to people what it is if you say Holographic vs Augmented Reality - even though it is technically wrong. Kudos to MSFT for making that leap.
I totally agree. AR made and makes ZERO sense to my non-technical spouse and family. Holographic makes total sense. It's a clever rebranding and it will help sell ALL AR platforms, IMHO.
Portable tape players were cool. Then suddenly Sony made one, and now we all (those of us old enough) remember fondly our first or favorite walkman, whether it was an actual Walkman or not.
Smart Phones were hot tech, but our non-technical families didn't really get the point until suddenly those "smart phones" were "iPhones".
Web searches were a mystical dark art, with all your Alta Vista voodoo, until there was a single box with an algorithm that did a pretty good job finding what you meant... and then many competitors later, someone called one of those boxes "Google" and now that's the verb we use.
Marketing, sales, adoption drive the names things come to be known by. Sure, in reality it's Augmented Reality, but if in 5 years everyone's talking about their sweet new Holographic ski goggles, it's still AR in the mainstream.
Hologram/holographic is already a word. It has a very specific meaning. It is not a newly conjured word like Walkman or iPhone, which were just one implementation of a portable tape player or smart phone.
What are you talking about? The walkman was the first portable tape player commercially available.
Also, what Google brought that Altavista didn't have was that it actually returned relevant results, not pages of spam where webmasters crammed in as many keywords as they could (including "Pamela Anderson", always).
I was talking portable players, not cassette tapes specifically. There were various options for tape, and 8-track and the like before the Walkman, all of which were portable and played music but none in the way that made the Walkman take off, obviously.
Sometimes people learn new words, other times keeping old words (usually modified) works better. To take your example, in Turkish the word for vacuum cleaner literally translates as "electric broom", and cell phones are commonly called "cep" which actually translates as "pocket". Or consider the British who still use electric torches (that's a flashlight for all you non-Brits).
All the companies are branding as "mobile." I don't think they really even sell household corded phones anymore, we don't have to be specific about them.
If I wanted a corded phone, I think I'd have to use that specific modifier to get one.
Anyway, back to the point--say "mobile phone" to Americans and they know it's a cellular phone.
Seconded. "cellphone" and "mobile phone" are basically interchangeable to me. Though, honestly I'm more likely to just refer to it as a "phone" and use a qualifier to refer to the non-mobile variety (e.g. "house phone" or "land-line")
Nah, "hoover" is just your dialect. American here, grew up in CA, and I've never heard or said "hoover".
But yeah, I wouldn't say "vacuum" as a method of action unless you really drilled down. (Something about "pump" and "sucking", but I'd have to think for a bit to get to the actual physical "vacuum" aspect of it.)
But the names "vacuum cleaner" and "cell phone" contain an explanation of what they are and do. "Augmented reality" doesn't. Holographic... it does a better job, at least for some people.
How does "cell phone" explain anything useful about that particular type of phone? Using the word "cell" for a specific geographic area covered by a particular radio tower is itself a rather vague and general analogy. You could use "cell" for any element of a larger structure or organism I guess, but it wouldn't explain anything specific.
"Mobile phone" could have made sense to someone even before the introduction of mobile phones. But I think the word "cell phone" only took on meaning after the introduction of the device itself.
The cell(ular) in cellphone refers to the handoff between cell towers as you move around.
In comparison, radio tends to have a single tower that sends radio waves and does not care about who is listening. The advantage for cellphones is that you can reuse the bandwidth from the same small set of frequencies across the country AND maintain a phone call during a handoff (where radio keeps forcing you to change stations on a long trip). The issue is that the phones need to rebroadcast their position constantly, which eats battery life. It's even worse when they fail to connect to a tower, as they just keep trying until the battery dies.
Granted, an end user might care, but MMX or SSE mean little to most Intel customers. 'with techron', 'dual turbo', 'LED TVs', 'tessellation', 'electrolytes'
I wasn't referring to "cell", I was referring to "phone". Just like with "vacuum" and "cleaner". You don't need to be an engineer to understand that the first one is used to make phone calls, and the second one to clean.
It makes no sense at all if you think of an actual hologram, but makes perfect sense if you think of a Star Trek hologram. This is like a wearable holodeck. Just add a haptic body suit (no doubt in the works)!
From their marketing material, they pretend they figured a way to fake depth. If true, that would be a huge step compared to classic stereoscopic technologies or other cumbersome devices.
However, I doubt this is the case, else they wouldn't just slap this announcement at the end of a boring Windows 10 presentation. I mean, all I could remember about this presentation was: "Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10 with HOLOGRAMS using Windows 10! Windows 10. Windows 10. Windows 10."
I don't think they are pretending they figured out a way to fake depth; it sounds like they have created a compact head-mounted light field display, which is absolutely a huge step forward. Unlike classic stereoscopic technologies, light field displays let your eyes refocus the image because the displays recreate the direction of the light from the object as well as its color and intensity. NVIDIA demoed a compact head-mounted light field display recently that explains the concept. See it here https://news.ycombinator.com/item?id=8451746
No need to fake it... if you can get the retinal projection accurate enough, with fast enough eye tracking, the depth is as real as anything else you'll see in real life...the goal would be to have it feel completely natural.
... and to do it, as well as motion tracking and environmental feature identification, in real-time with as little latency as possible, and do it on batteries.
I keep feeling like watching the Longhorn demo at PDC 2003...
They added a third chip, besides the CPU and GPU; they call it the HPU (holographic processing unit), which could speed things up considerably. If they hooked it directly to the CCD then they could really grok terabytes of data on battery.
Magic is not possible. Whatever the HPU does is still constrained by physics. They can't explain away the engineering problem with an invented new name for a device we know nothing about except that it defies the laws of physics WRT computation.
I really hope they get accurate tracking and depth, and get objects to "stick" where they belong in 3D space, without moving out of place or floating wrongly during quick head movements. If they can do that, most of the battle is won and it will be amazing.
Edit: although, of course they'll need some intelligence on the surroundings to identify surfaces and stuff. But imagine like re-decorating your work room, adding scifi textures or something, and maybe pipes or whatever ;p
But they showed footage "through the eyes of the wearer" and they let press have a hands on demonstration, so it's not like they can really fake anything.
I did see a tiny bit of judder in the footage that was supposed to be exactly what the person wearing the glasses would see, but it was hard to tell.
In case any readers here weren't aware of how Kinect works, it sends the developer a 2D depth image. Of course, as walod says, there's work to do to identify surfaces (as you can see in the image below, background elements are excluded).
Pretty much everything in this space is incorrect on some level. People casually call the Oculus "VR" when it's just an HMD. The VR is going to be the software that works with the HMD. This is like calling a joystick a game.
Personally, I like the hologram branding. It's like the Star Trek holodeck, which invokes a really neat sense of futurism.
It's turned into shorthand for "seeing objects in space that aren't really there". They could have also gone with "compugraphic hallucinations" or "pink elephant computing".
Visidraft might be useful for an idea at our company. The video looks good but the "Get Visdraft Now" button doesn't do anything. Is it available yet? Pricing?
I tuned out when I read this: "Sensors flood the device with terabytes of data every second, all managed with an onboard CPU, GPU and first-of-its-kind HPU (holographic processing unit)."
I'm sorry, but there is no wearable device which can handle terabytes of data per second. Heck, my brand new Haswell has a peak memory bandwidth of 17GB/s; even to the L1 cache, its theoretical max is 700GB/s.
They probably confused that with GB. Not sure which sensors would even put out terabytes per second. Let's assume they have a brand-new high-dynamic-range RGB-D camera with 32 bits per channel at 60 Hz. To reach 1 TB/s that camera would have to have approx. 1 gigapixel. --> Not very likely.
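The comment's gigapixel figure checks out. Here is the back-of-the-envelope arithmetic as a small Python sketch (the sensor specs are the comment's hypothetical, not a real product):

```python
# Raw, uncompressed data rate of an image sensor (no real device assumed).
def sensor_rate_bytes_per_sec(width, height, channels, bits_per_channel, fps):
    return width * height * channels * bits_per_channel // 8 * fps

# An RGB-D camera with 4 channels at 32 bits each, running at 60 Hz,
# moves 960 bytes per pixel per second...
per_pixel = sensor_rate_bytes_per_sec(1, 1, 4, 32, 60)

# ...so hitting 1 TB/s would take about a billion pixels:
pixels_for_1TBps = 10**12 // per_pixel
print(per_pixel, pixels_for_1TBps)  # 960 bytes/pixel/s, ~1.04 gigapixels
```

A gigapixel sensor at 60 Hz with 128-bit pixels is indeed far beyond anything shipping, which supports the "they meant GB" reading.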
Completely understandable, given the fact that Wired Magazine is after all, pretty new to all this computing and technology business ... /s
I can't imagine any (tech-savvy) editor proofreading this not going "wait what, terabytes, really??" -- which is then presumably their job to doublecheck.
I understood it as saying the sensors are reading terabytes of information every second. The data would get selectively loaded into the device based upon the task at hand. I don't see that as far-fetched -- I can hook 100 digital cameras up to my computer and make the same claim.
They didn't mention a timeframe for the amount of data collected. The exact quote is "[...] all by processing terabytes of information from all of these sensors, all in real time."
I got the impression that it was a write up of the demo video, not actually hands on... It's a confusing article and there's no real-time video or images of anyone actually interacting with it. I'm actually quite confused as to what this is. Bad PR piece -- more like a mockumentary on the Discovery Channel...
> I got the impression that it was a write up of the demo video, not actually hands on...
That's not really an excuse to dumbly repeat something that sounds so amazingly far-fetched (see the other comment elsewhere: what sensors even produce TBs of data per second?). It verges on the physically impossible, yet they report it without even blinking an eye -- no "yes, you read that right, we think it's hard to believe too", or preferably some explanation of how it could even be possible. How did the reporter not go "Wait, what -- terabytes per second?!"
A current-generation Haswell chip can easily manage terabytes per second in on-chip cache and by manipulating registers.
Eight or more virtual cores plus SIMD operations that can smash against large chunks of data per cycle adds up awfully fast on a 4GHz chip.
They're also probably counting the fact that data flows from one system to another in sequence, but adding up each sequence. Eight streams of 150 gigabytes per second for example.
This doesn't count information that's captured but discarded at the source, processed away before it's transmitted downstream.
It's probably more along the lines of the same data being processed within the same chip, not actual memory bandwidth. This HPU they are talking about is processing depth information and eye lines (probably quite a bit more often than 60 Hz); it's quite possible they are processing the same data multiple times over, achieving theoretical bandwidth in the TBs. It's disingenuous, sure, but it's like saying a network with 20 100GB routers can handle terabytes of data.
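To see how that kind of accounting reaches "terabytes", here is a rough illustration (every number below is made up for the sake of the arithmetic, not taken from Microsoft):

```python
# Summing every pass over the same frames inflates "data processed"
# far beyond the raw sensor input.
sensors = 4                          # assumed camera count
frame_bytes = 3840 * 2160 * 4 * 2    # one 4K RGBD frame, 16 bits/channel
fps = 60
passes = 60                          # depth, SLAM, gaze, render passes...

raw_input = sensors * frame_bytes * fps     # bytes/s off the sensors
touched = raw_input * passes                # bytes/s "processed"
print(raw_input / 1e9, touched / 1e12)      # ~16 GB/s raw, ~0.96 TB/s touched
```

So roughly 16 GB/s of actual input, counted once per processing pass, gets you to a headline number just under a terabyte per second.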
A single 4K2K RGBD sensor at a high enough refresh could generate something in the 100Gbps range. The device has at least 2 (possibly 4?) forward facing sensors. It's presumably also doing inward facing gaze tracking, audio and IMU.
As a point of reference the Leap Motion Dragonfly has 2 x 3K sensors w/ 225fps color and 720fps tracking.
Presumably the "HPU" is an ASIC that bakes in some sort of SLAM/positional tracking, skeletal tracking, and gaze tracking.
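The ~100 Gbps per-sensor figure is plausible under reasonable assumptions. A quick sanity check, assuming "4K2K" means 3840x2160, four channels (color plus depth) of 16 bits, and a ~200 Hz refresh (none of these specs are confirmed):

```python
# Raw bit rate of one hypothetical sensor, in gigabits per second.
def sensor_gbps(width, height, channels, bits_per_channel, fps):
    return width * height * channels * bits_per_channel * fps / 1e9

print(sensor_gbps(3840, 2160, 4, 16, 200))  # ~106 Gbps raw
```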
I think this type of AR / holographic technology has many, many more potential real-world applications than VR. With VR, you're shutting yourself off from the outside world. Here, you're enhancing the outside world with technology. You still get to interact with others. Using HoloLens doesn't stop you from doing almost anything.
What I'm curious to find out is whether HoloLens will run into the same core problem as Glass. People are afraid of people wearing Glass. They're scared that they're being filmed, or worse. Unless HoloLens can avoid making you stand out - by looking like regular glasses, or even contact lenses - I'd guess that HoloLens will end up suffering the same fate as Glass.
It's hard to understand how AR / holographic technology could help people in their day-to-day life. There are a zillion potential uses, but all of them seem extremely complicated and hard to pull off.
Take the example of fixing your car. For example, performing your own oil change, or replacing your alternator. That seems like a perfect use case for holo, right? The goggles would tell you what needs to be done and what the next step is.
But that would involve so many technical challenges that it seems very difficult. You, as the creator of the FixYourCar holo app, would need to detect what type of car the user is looking at, what part of the car they're looking at, render an overlay with the correct orientation, and so on. And at the end of all of that, it's not entirely clear that your app is more helpful to them than if they'd just look up a list of steps for fixing their car using their mobile phone.
I guess what I'm asking is, what do you think holo's killer app would be?
I don't know about the consumer market, but I can think of numerous commercial applications.
Sony currently sell an HMD for surgical use, allowing for comfortable and convenient viewing of video from endoscopic cameras. A practical translucent HMD would be extremely valuable in surgical procedures guided by x-ray or ultrasonic imagery.
To give a trivial and easily-implementable example, I would have bought Google Glass without hesitation if it integrated with my electronics test equipment. Being able to view data from an oscilloscope or logic analyser without taking my eyes off the PCB would be a boon. PCBs are designed with fiducial markers as a necessary part of manufacture, and machine vision is already used extensively in many aspects of electronics manufacture and repair; it would be relatively straightforward to overlay all sorts of data that would be enormously useful to technicians and engineers.
Stereoscopic and volumetric displays are used extensively in petroleum and mining geophysics; this equipment is currently relatively niche due to extremely high cost, but could be used in a much greater range of geoinformation applications if costs fell.
Imagine a car mechanic remotely helping you to do what's needed to be done. This use case is showcased in one of the videos and it is probably not as complicated as building an AI engine to help a user repair his car.
This is actually a relatively silly use case for this. A lot of the actual difficult things that a mechanic can do for you usually involve more strength than you have. Or just experience in actually working with things that they can't see.
Now, a mechanic using this to "see" things that are actually in control of a remote robot? Pretty cool. Showing you the thing that is right in front of you? Cute, but ultimately silly.
Yeah, I'm just not sure that I buy the idea that there's a big market for an expert coaching you through doing repairs via AR goggles.
How exactly does this work?
You still need to pay for an expert's time -- in fact, you probably need to pay more for it, because the expert is probably faster to do a thing than to explain the thing to you and then you do it. Also, the expert now needs to be someone with these additional skills of coaching someone through an operation.
Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?
I mean, maybe! Especially if you can locate the expert in some place where labor prices are much lower (so: India). But then you also need the person to buy the AR goggles. And how often does this use case come up? Is this like 3D printers where people try to sell me on the concept that I could pay hundreds or thousands of dollars for something that can make me things that cost less than $20 and which I need three of every year?
I agree this is highly unlikely to be a common use case. I might use it, because I try to do most things myself, and I could often use some expert advice. And I know the people who would help me, but it'd be inconvenient to have them come all the way out here. But overall, this is a one-in-a-thousand use case.
But:
> Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?
Almost all repairs on appliances, cars, houses, and so on can be done with tools you have laying around the house; they don't require anything more than a hammer, drill, screwdrivers, wrench sets, etc. The only thing you're typically not going to have on hand is replacement parts, which are usually not too difficult to get your hands on and which you would have to pay for anyway.
When people do this, it's not going to be an "expert", it's going to be something along the lines of "hey dad, look at this real quick."
> But then you also need the person to buy the AR goggles.
The Ars reporting showed someone using a Surface to view and annotate the HoloLens user's view, not another set of HoloLens. So the barrier is much lower; any Windows 10 PC should be good enough.
Have you ever worked on a car? Tons of specialized tools can be needed for some tasks. Do you have a set of triple-square bits? A general-purpose puller? A bearing extraction tool? How about half-inch-drive Torx sockets? A 200 ft-lb torque wrench? When you work on a car for fun, you find that your collection of tools balloons just for all the things on the car that require one specific tool. If you think you can do everything on the car with just a simple socket set, you'll wind up stuck and having to buy a new tool for every task.
Yes, although granted when I said "almost all repairs" I did not have in mind major automotive work, but more along the lines of general maintenance and little things going wrong. Of course if you're doing something like rebuilding the transmission you'll need more tools than the average bear.
Fair enough, but my point was that for things where you'd benefit from a mechanic walking you through a task, you'll probably need some specialty tools to do the job.
None of those things require the augmented reality aspect of this: you don't need to place math and physics lessons into your local environment. They'd do just as well with VR, and indeed it doesn't sound to me like they'd do MUCH worse with just a plain old screen. What is it that you're imagining we couldn't do with a tablet that has swipe gestures to rotate the demo around all axes?
An electronics teaching kit might not work on a tablet (but would in VR), and note that any kind of really fluid manipulation of a virtual environment is going to involve a whole additional technology that gives precise locations of your hands (at least). The HoloLens allows a few simple gestures, not the ability to handle virtual objects in many degrees of freedom.
I'd approach this problem differently. Not from a service industry angle but from a product vendor angle. Imagine a world where AR-glasses are widespread.
I'd be pretty interested in buying the kitchen sink that comes with an AR repair guide or the furniture that comes with AR assembly instructions.
So I think the interesting market is in building the infrastructure/app that makes it easy for vendors to create the content and ship it as a (free) addon for their products.
You could draw on a much bigger pool than professionals. There are plenty of people who know a skill and don't professionally sell their services but could spare a few minutes occasionally to help someone with a problem. If you combine skill tracking with instant global availability of services, you have a lot of room for development.
I don't think that 'strength' is the issue; better tools and a lot more experience are the core parts. I remember watching a mechanic change the light in my car: what took me ~15 minutes (I'm not kidding) took him ~30 seconds.
And that's like riding a bike: you cannot really tell someone how to do it...
But the hologram need not be a person! At least, it won't be once this use case makes sense. The hologram will be an AI hologram. Just like the light-switch installation in TFA, it would be silly to have a human expert show you how to install a switch, or swap out a component in your car, once an AI expert will do.
If there's a market for this, why doesn't it already exist? You could take your smartphone under your car, and video chat with a mechanic anywhere in the world. The mechanic could even draw arrows or highlight areas on your video in real-time as it's looped back to your display.
That adblock thing would probably be great. Also, I often wish there were an easy way to compare all those specials in the store. Usually when you work out that 2-for-x offer, you see they gave a generous 10% discount that made you buy an entire extra thing for no real saving.
I noticed that the promo videos all show you doing things indoors, in more or less private settings. Your living room, your kitchen, your workspace. Contrasted with the initial Google Glass video (skydiving, jogging, meeting for lunch, etc.), I think it's safe to say Microsoft has learned from Google's mistakes.
I noticed this too. It avoids a whole class of problems interacting with other people, and seems like a good idea marketing-wise.
I was also thinking about battery life. If it's not designed for outside, then presumably you'll be near a charger, so you're less likely to run out of charge when you need it.
From the videos, HoloLens looks like an actually well-thought-out product, unlike Glass.
Many people would find tools like these incredibly useful at work, in the car or at home. But not in the street, at the beach or in restaurants whilst talking to other people. That's just socially awkward/insensitive.
I definitely want HoloLens to be real, too, but to avoid heartbreak I'll temper my hopes until more reports come in. Or, even better, a firsthand experience.
I've never used a Kinect; how does the promo video live up to reality? It looks almost identical to what I still assume Kinect is like, minus perhaps some of the highest-fidelity parts like the skateboarding and soccer, which I imagine have been attempted but turned out too clunky to be worthwhile. Am I wrong?
No. The problem is Google kept trying to force Glass as a consumer product for use in public.
And given that most normal people would know that it was socially awkward to use it in public only "glassholes" remained. This meant that buying/wearing Glass tarnished you with that label and associated you with that group.
This has become forgotten as the public perception of Glass became dominated by the whole "glassholes" phenomenon, but the Explorer Program was supposed to demonstrate that people could think up these kind of amazing life-altering apps that proved the utility of bothering to wear Glass.
They didn't. Years later, the reason to wear Glass remained "take pictures/videos hands free and shave 3 seconds off the time it takes you to check your text messages."
The MS product seems pretty clearly to be more broadly capable hardware, but I do still wonder if it will have actual applications.
The main application that sells it is likely to be less specialised than the cool demos, which are always a bit niche (modelling industrial design for motorbikes etc).
I wonder if its "killer app" might just be that now a virtually big screen takes up little physical space/weight.
Clear the big monitor off your desk, now your 11" laptop (or smaller) can effectively have a 40" screen, etc.
Unlike Oculus, you can still see the real world. Unlike Google Glass, it's a big display and not an awkward eye movement.
There's still the barriers of
- showing other people stuff
- social awkwardness of sitting with a keyboard seeming (to others) to be staring into empty space while working
- it might feel like wearing a hat
- what's the effective pixel density like?
This is the clear winner for me. A portable, wireless keyboard + hololens = the biggest virtual desktop in the world that also doesn't shut you out from reality / coworkers / your desk / etc. Whether or not the more ambitious use-cases ever materialize, I'd be happy to trade in my macbook for this.
The focal point is a problem. It is advised to keep your screen at >65cm so your eye doesn't have to accommodate (coincidentally, the length of your arms). A big problem with Google Glass is that the focal point is a few cm away, and it is known to give headaches. The closer the screen is, the more myopic you become.
It is absolutely possible to use a lens system to move the focal point to the distance, but hasn't been done yet, probably because you can't do it on 120x120 degrees.
I wouldn't work on a virtual screen for long hours until there's an answer to that. But once it is solved, I can see how we'll all become Holographic addicts ;)
Surely they must have sorted out the focus issue for HoloLens -- otherwise that Minecraft demo where the castle is on the table would have felt very trippy for the journalist (if you consider where the castle touches the table, you'd have a joint that is both several feet and a couple of centimetres from your eye).
I had an idea to do something similar once for a Uni dissertation, involving a rift mounted with two cameras to do a very hacky and cheap prototype version of what you've described.
My supervisor shot it down because "Google glass will do that" :(
I think it's hard to say whether nobody used it because there were no killer apps, or whether there were no killer apps because no one wanted to use it.
Ya, the screen wasn't that great and was awkward to look at. Seems like the photo taking is the best part of it -- for that you don't really need all the rest of the complexity. And too bad you also look like a douchebag wearing it, especially in SF where tech is stigmatized enough. Reminds me of a joke a comic told last night at an SF standup spot: "so I was on google last night... Do you guys know Google? It's this company making people homeless in SF"
"7. On that note, don't give one to Robert Scoble"
It's not that Google Glass intrinsically makes you look like a narcissistic douchebag, it's that the first people to show them off were narcissistic douchebags, posing with their smug self important "look at me I want your attention" expressions, who crystallized the image of the "glasshole" in everyone's minds.
What it reminds me of is Google Project Tango, which also has NASA's JPL listed as a partner[1]. Also worth mentioning is Johnny Lee, who worked on Microsoft Kinect and is now working at Google on Project Tango.
First thing, it crashes, a lot. We're talking 2-5 minutes active 3d scanning before the structure sensor driver bites the big one. Requires killing and restarting service along with all programs associated.
Also had hard freezes as well.
It's "google quality", in other words: crap. It might get better. It probably won't, given their track record regarding consumer devices in "google beta" (read as alpha).
As far as I can tell, a big difference is that Google's concept video looks very little like actually using it, whereas Microsoft's is clearly just a better version of their live demo. The live demo was amazing.
It's interesting how they go out of their way to describe this as NOT augmented reality when... that's exactly what it is. The only time the term appears on the product page is here:
"Microsoft HoloLens goes beyond augmented reality and virtual reality by enabling you to interact with three-dimensional holograms blended with your real world. "
I understand the marketing reasons for this, but contrast this with the fact that Oculus embraces the term "virtual reality" despite the baggage that comes with it and the fact that they can't trademark it. I guess AR never caught the public imagination like VR did.
AR is traditionally a 2D projection on 3D space. This is a 3D projection that you interact with. Sure, it's still AR on some level, but I think differentiating the product makes a lot of sense. My idea of AR is a boring HUD-like system that fits in with things like flying fighter jets. This holographic projection is different, and notably so.
MS could find the middle ground between lush 3D VR-like environments and the real world. I find things like the Oculus and other HMDs to be terribly claustrophobic and dizzying. Not to mention really asocial. I don't want to mount a tissue-box-sized thing to my face that removes the real world. I'd prefer having the real world still here with the digital world tied to it. There just seems to be something wrong with giving software my entire field of view. I don't want to stare into the same Unity3D-generated environments. I want to augment my real-world life, not replace it.
>MS could find the middle ground between lush 3D VR-like environments and the real world.
Back around 2000 a Slashdot article reported an attempt to create a human-sized hamster ball constructed of a semi-opaque projection-friendly surface. The ball would sit on some sort of roller mount. Five projection screens surrounding the ball would project a virtual environment over the "port", "starboard", "fore", "aft", and "north" surfaces. A human occupant would enter the ball, and, based on his movement detected through the roller base, be presented with a continually updating holodeck-like virtual environment.
Perhaps something like this is still in development somewhere.
When we say virtual reality, there is this implicit expectation (at least in my mind) that it's an always-ON experience. The video here did show some of that too, but I think scoping it to specific tasks, at least initially, would be very powerful. So you don't wear these bulky, dorky glasses/headsets all day long, but only when you need to do specific things. And then you return to your normal life.
In the early days of computers, the usage was like that... very task-oriented. When you were done, you went back to your non-digital life. It's only when the technology and public perception change that you start carrying PCs in your pocket all day long like we are doing today.
Right - but let's say they get less dorky and more comfortable - and really do have the visual quality we want: objects look solid and real.
That seems much more useful than a VR that you have to unplug from the real world and immerse yourself in - at least for collaboration with others on real world things... like the example with the motorbike design.
Makes me think of something I read about from CES.
Basically a helmet of sorts where you dropped a smartphone into a slot at the top, and some semi-transparent lenses in front of wearer's eyes then made the phone screen appear to float in front of said wearer.
With the camera, it's easy to understand how they make the objects feel solid - they just overlay them on the video feed. How that would work on the glasses, I have no idea. Does it have LCD shutters that block incoming light where they want to add a "solid" object?
If you pause at 9 seconds in and check out the rig they're using to film the feed where the virtual objects are visible, it looks as though they are using the same lenses that are on the goggles to display the virtual objects in front of what the camera is recording. So I would bet that what we are seeing in the video is exactly what the user is seeing and not something added on the fly by other means.
That is amazing. So you have two (or more) people that hook into the same "scene", with the kick being that they see it from a different angle. Wow, very nice.
This was something that I think will be a bigger deal. They can communicate with one another, and potentially split the processing load. I'd love to play an RTS where my device processes my pieces, I see their backs and my opponent sees their fronts.
Of course, the caveat there would be that my command console would be invisible or hidden still from my opponent. Providing selective vision would make for interesting game possibilities.
Nope. Those lenses are above the camera's main lenses. Whatever those circles do, they are not generating the images for the video. Plus, only one would be needed for that - the camera has only one "eye".
I would like to see video recorded direct from the "eyeball's eye view", because I agree - the video looks generated. As you say, you can see areas where darker superimposed objects overlay lighter areas of reality in the visual field. Can this technology really do that? If so- wow. If not, still wow but just... less wow.
I'm so excited to see this. I cannot wait to try one of these on for the first time. I'm going to bet writing apps for this kit will be a lot of fun.
And all the C# .NET developers can say again, thank you Microsoft since they have committed to one platform, one store, all device types. I think it will be a while yet before there are thousands of developers working on this, but it will grow exponentially for a while.
If this is comfortable for extended wear, it's just going to further increase the value of remote workers. You can't beat time zones, but for everything else, there's holograms.
I really, really want to understand how high fidelity this is.
One point is we aren't seeing it used for video conferencing between two people each wearing a band. Probably because face-on you look pretty silly in it. So it's not quite a natural way to meet people. Just yet. I think it has world changing potential.
But in 2016 or whenever, this will not be selling for $499 or $899; it's more like $3299, I would guess. And really, that's more the price level we expect for very high quality gear. Actually, you could realistically go upward of $10k if the quality truly reflects that price.
So of course the next thing I did was check MSFT stock price. $382 billion market cap. This is a $100 billion idea, might be a good time to get back in. I guarantee you, the market has not fully priced in holograms. Just saying it, you know it's true. For now I will choose to believe the hype, because eventually, absolutely, this is all possible.
The goofy look is really just an image processing problem. If it has a camera watching the face (I know it has one watching the eyes), it wouldn't be impossible to reconstruct what the face looks like without the goggles.
But imagine picking up an original iPhone today and comparing it to v6. Now imagine HoloLens going through that refinement process.
If the platform is as powerful as it sounds, and you can openly develop software which effectively leverages that platform, how is that not awesome?
I mean, it's an entirely new hardware form factor that we all get to hack on and play with, and it's not a sure thing, but if it goes well, it could become mainstream and open an entirely new era of computing.
That's the vision anyway... like I said, I'll choose to believe the hype because it's more fun that way.
Thinking about the design a bit, it's interesting all the compute is on the band, and not broken out to a separate box communicating over wireless. They mention fans blowing hot air away from your head, and then add in the battery too... how long can it run?
I think their use cases are a little weak. There are much more impressive things you could do with this kit.
It looks really, really impressive! It's a see-through pair of glasses, so not like Oculus, and they said they invented a new technology: the HPU. The glasses will have their own GPU, CPU and HPU and will be wireless.
HPU isn't really much of a "new technology". It's just a custom coprocessor, not all that different from the motion coprocessor in the iPhone (except designed for video processing, so presumably a lot more powerful).
Any information on whether it truly is holographic? For me this is only satisfied if it has: 1) binocular disparity, 2) accommodation, i.e. it reproduces a light field like a real hologram does.
The only thing I've seen on that so far is from the article:
> To create Project HoloLens’ images, light particles bounce around millions of times in the so-called light engine of the device. Then the photons enter the goggles’ two lenses, where they ricochet between layers of blue, green and red glass before they reach the back of your eye. “When you get the light to be at the exact angle,” Kipman tells me, “that’s where all the magic comes in.”
They could be doing real holographic images with that description, but who knows.
I believe that the TB reference comes from listening to Alex Kipman at the Microsoft announcement event.
In speaking about the so-called "HPU" (around 01:50 in http://www.theverge.com/2015/1/21/7867593/microsoft-announce..., second video) Kipman mentions "processing terabytes of information from all of these sensors". This is straight from the proverbial horse's mouth, and while it seems hype-ish it should be looked into.
I'm pretty sure he didn't say that the HPU was processing it either. He said something along the lines of "when we look around a room our brains process terabytes of data".
If you have enough pins, a custom ASIC can do just about whatever you want. The data flowing into the HPU is likely huge, but it is processed down into something the CPU can deal with.
Yeah, well, 4x DDR4 DIMMs have 4x288 = 1152 pins. If you want to be two orders of magnitude faster than that, you're talking on the order of 100,000 pins, which is just absurd.
Terabytes is out there, terabits not so much... I'm developing a single chip with 3Tbit (384GBytes/s) aggregate external (chip to chip) bandwidth, and with 8TByte/s aggregate internal (core to core) bandwidth.
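To put rough numbers on that pin argument, here's a quick back-of-envelope sketch in Python. The 19.2 GB/s per DDR4-2400 DIMM figure is an assumption, and pin count obviously doesn't scale perfectly linearly with bandwidth; this is only the shape of the argument:

```python
# Back-of-envelope: pins vs. bandwidth for commodity DRAM.
dimm_pins = 288            # pins on one DDR4 DIMM
channels = 4
total_pins = channels * dimm_pins          # 1152 pins, as above

bw_per_dimm_gbs = 19.2     # GB/s for DDR4-2400 (assumed figure)
total_bw_gbs = channels * bw_per_dimm_gbs  # ~77 GB/s

# "Two orders of magnitude faster" means ~100x the bandwidth; if pin
# count scaled linearly with bandwidth, that would need:
pins_needed = total_pins * 100             # ~115,000 pins

print(total_pins, total_bw_gbs, pins_needed)
```

Which is why the "processed down to something the CPU can deal with" reading makes more sense than literal terabytes per second off-chip.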
So many questions. I can understand light reflecting into the eye from a microdisplay (using the same principle as a car HUD), but are they actually creating opaque imagery as well? How is that physically possible?
Then there's 3D spatial interaction: In my experience with Leap Motion, Kinect, and other competing tech, the level of accuracy still limits interactions to broad gestures. If they've made a big enough leap in this domain to enable precise object manipulation, that's a major achievement on its own.
The WIRED article seems to imply that this device has outwardly-facing cameras that use Kinect-like technology to track the operator's hands, which is probably how they are able to let you interact with the projections without using some kind of wand or controller.
I could see some "high-precision" gloves being an optional accessory for this that would include some kind of tracker markings to allow even more precise controls (maybe for medical applications or something)
I would almost imagine gloves like this would be a requirement for precise control. The Kinect had seemingly pretty decent limb sensing from the few minutes I played with it (it was able to accurately model the bones in my fingers moving). However, it had an advantage in that it was a few feet away, viewing you straight-on.
This device will be looking nearly straight down instead, and seems to me that your limbs & fingers will often occlude what is behind them. I doubt the twin cameras used to sense depth would be far enough apart to always see your fingers behind your other arm, for example.
If you want to see similar technology that's being used now, check out the Leap Motion being used with the Oculus Rift. They mount it on the front. I've personally only got experience with the rift (I have a DK2), but I've heard good things.
I think Oculus has it right, though, in that any HMD that does positional tracking needs super low latency to feel natural. Should be a little less problematic since the whole world wouldn't lag, but I'd be disappointed if the virtual overlay had perceptible lag after using some of the better experiences on the rift.
As I understand it, the light is not merely being projected or reflected, but dispersed on a coordinate system, so the display actually illuminates where it was transparent before. That would interfere with natural light coming through (it's tinted as well) allowing the 'hologram' to obscure the real world. I could be wrong, though.
Unless there is some curious property of light I don't understand (and given my perplexity at radial polarization, there may well be), there's no way that external light coming into the glasses can be diminished by internal light emitted by the glasses.
For instance, if you're looking at a white wall in the real world, there's no way to render a black shape in front of it. You can only add luminance to it, in the same way a video projector can only add luminance to the screen it's projecting an image onto.
If the projection surface in front of the eye is also an LCD, it's possible to block out part of the background at the same time you project something onto it. I don't know what happens in this particular product, but from the Mars demo, it sounds like they can do some display of dark objects:
"The sun shines brightly over the rover, creating short black shadows on the ground beneath its legs."
It's possible that they are just getting that effect by making the Mars surface bright, but they could also be actively blocking light from the shadow regions. We'll have to wait for more details.
Certainly you can only add luminance, but consider that the goggles themselves are tinted, and the brightness of a display an inch from your eye will likely be far higher than the light bouncing off the wall. So while you can't "render" black, you should be able to simulate the darker part of the spectrum using negative space. That's not ideal, of course, but it's something.
LCD panels work by filtering out light emitted by a near-white backlight. If they are bouncing around the incoming light, they could be running it through such a panel to dynamically reduce light by color. They could then selectively add light using the existing backlighting setup. It's at least theoretically possible, and I'm hoping that they've actually accomplished something like it.
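A toy per-pixel model of the difference being discussed here (hypothetical numbers, purely illustrative of the attenuate-then-emit idea, not how HoloLens actually works):

```python
# Luminance values in 0.0-1.0.
def additive_only(background, emitted):
    # A bare projector/waveguide can only ADD light to what passes through.
    return min(1.0, background + emitted)

def mask_then_add(background, mask, emitted):
    # An LCD layer first attenuates the incoming light, then light is added.
    return min(1.0, background * mask + emitted)

white_wall = 0.9
# Additive-only: drawing "black" over a bright wall does nothing.
print(round(additive_only(white_wall, 0.0), 2))       # 0.9 - still the wall
# With a blocking layer: attenuate the wall, emit nothing -> dark pixel.
print(round(mask_then_add(white_wall, 0.1, 0.0), 2))  # 0.09 - a usable shadow
```

So the Mars-shadow question really does hinge on whether there's an attenuating layer in the light path.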
This is correct. That's a big part of what Magic Leap is supposedly working on, being able to black out the background so that objects don't have the ghostly hologram look.
Yet the demo videos show some virtual objects that are darker than their background. That may be vaporware. So far, there seem to be no images on-line actually taken through the device. Has anyone seen any?
This matters. If it can only brighten things, it can only overlay bright things on top of the real world, which is what Google Glass did. Fine detail won't show up unless the background is very dark or very uniform.
If you look carefully at Microsoft's pictures, the backgrounds are subdued gray, black or brown, and free of glare. The press was forbidden to take pictures of or through the device, and their cameras and phones were confiscated for the demos. Microsoft used custom-built rooms for the demos, giving them total control over the contrast and lighting situation.
It could still work, but it's probably not going to look as good in the real world as it does in the demos.
If they're making it opaque, I imagine they're doing it the way I've wanted to for a transparent monitor: a second pass-through/reflective LCD placed just after the microdisplay.
One like the ones on the pebble watch would allow you to selectively let light through from the outside or let it be reflective and show the microdisplay instead.
In the picture of the marscape in the article, you can faintly see an ordinary room overlaid on the marscape (most visible in the upper-left part of the graphic). I took that to possibly mean that the holograms were transparent, not opaque.
Or maybe they're just trying to indicate the unreality of the marscape.
The demo video below specifically shows translucent as well as opaque. Watch when the woman is interacting with the real motorcycle and extends its height:
Now it makes sense that Google just announced that they're pulling Glass from retail. They did not want to have their device compared to the new product from MS. Actually, I see this new device as more prone to success than Glass. These devices are competing for the same markets: healthcare, education and entertainment. And here Glass is somewhat underpowered.
I met a guy in 1998 who had done his Ph.D building full color direct to retina projection. He was doing VR with it back then. That's how long this technology has been in development.
He did it at MIT and if I understood correctly, the US military bought everything (he had no say in the matter).
It's possible.. they dropped their hugely incomplete Google Wave rotten egg on the public only a day before Microsoft announced MSN's Bing rebranding. Various project domains (waveprotocol.org) were only registered 2 weeks prior. Wave seemed like a weird and frivolous product until you look at it from this angle, at which point distraction seems like it could have been the only reason it was released.
There have been things in the past that indicate large tech companies are somewhat aware of big announcements before they happen. Look at Google & Amazon timing their cloud pricing announcements within 24 hours of each other, for example.
What are your thoughts about this? I'll tell you mine.
I just watched Google slowly, painfully realize that Google Glass isn't commercially viable (in its current form and to the general public). I can't help but feel that this is a larger, albeit more immersive, version of Google Glass.
I only make this point in regard to any plans for a sci-fi, everyday wearable HUD. There is obviously a great demand for this kind of immersion within the gaming community (although I would argue that Oculus will have market control for the foreseeable future).
My opinion is that the kind of augmented reality that we all dream of, that sort of matrix-like constant data download, won't become a reality until someone figures out how to take it (visibly) out of human interaction, i.e. with smart contacts, etc. The current tech is just too intrusive in normal human interaction. My understanding is that this kind of tech is still a long way off.
Google Glass was to be used in personal interactions and that put a social barrier in the way of what was some cool technology. It seems that it was way too awkward to be 'glassing' in public no matter how useful or cool the technology was.
HoloLens is pitched to be used in the home and in the workplace, where you can comfortably immerse yourself in that experience without the social implications. This allows the technology to be judged on its own without mixing in the social implications.
I have high hopes for this; at least it's a very different experience than what most others are doing (though it's obviously taking cues from VR, augmented reality/Google Glass, etc.)
I agree for this to get 'really big' the form factor has to be a lot more portable, but this is a great stop-gap.
I think this product hits the sweet spot between the Oculus which covers/masks your eyes completely and Google Glass which had a relatively "small" and non immersive screen.
The glass front allows these holograms to appear more naturally to the user all the while still being aware of your surroundings.
The real challenge is not the hardware, but the software and "experiences".
I actually like the fact that it looks a little clunky. No one's going to want to wear this in a bar or restaurant, so the odds of a "Glasshole"-like backlash seem unlikely.
What I find interesting is Microsoft launching this technology just after Google dropped it. I think the focus on business is the right way to go in this case, not everything can start from the consumer, e.g. look at how PC started from a geeky thing for scientists and is now in our pockets and wrists.
I really hope the pinnacle of AR is not as intrusive and uncomfortable (for some) as placing plastic on my eyeball (this seems to be the go-to for many people). Either equip the environment, or give me eyewear (goggles, then lightweight glasses) and eventually (it may be a while) augmentation implants.
I think you're exactly right. Smart contacts are going to be the next huge computing revolution (punch-cards/no screen -> screen in front of you/keyboard -> small screen in your hand - > virtual big screen on your contacts).
But we're just not there yet tech-wise. And there have to be some intermediate steps along the way (i.e. big ugly glasses/goggles) because it's just going to be too hard to make it commercially viable to go directly to smart contacts.
What Google has shown is that the public won't accept this until it's _awesome_ (and Glass was not awesome). So perhaps the gaming route (Oculus/MSFT) is the path we'll have to take.
I think you're right. If I'm home alone, I don't mind wearing something on my head that covers my eyes. I don't want that if I'm out and about with other people.
Well, I'm impressed. I wasn't initially impressed by the stock promo videos, but once she put the actual device on and you could see the quality of the hologram, the spatial tracking, etc., it was pretty impressive. We're also not getting a feel for the sound system built into it. They mentioned in the demo that the sound is also projected virtually from where the hologram is located.
I'm curious what the price tag will be? Battery life?
Apparently there will be both consumer and enterprise pricing, whatever that means. I'm guessing $500 to start at consumer level with lower resolution, less battery life, etc. Twice that for Holo Pro.
Sound is easy enough - a pair of decent headphones can produce fairly decent 3D sound, and combined with the illusion of depth provided to your visual system, I suspect it will be extremely convincing.
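For the direction cue alone, a classic approximation is Woodworth's interaural-time-difference formula. A tiny sketch (the head radius is an assumed average, and real spatial audio layers level differences and HRTF filtering on top of this):

```python
import math

HEAD_RADIUS_M = 0.0875     # assumed average head radius, meters
SPEED_OF_SOUND = 343.0     # m/s in air

def itd_seconds(azimuth_deg):
    """Woodworth's approximation: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead gives no delay; one 90 degrees to the side
# arrives roughly 0.66 ms earlier at the near ear.
print(itd_seconds(0), round(itd_seconds(90) * 1000, 2))
```

The delay alone already gives a strong left/right cue, which is why headphone 3D sound works as well as it does.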
I'd assume for sound that it doesn't use headphones so much as bone conduction so that it doesn't get in the way of natural sounds. From the design, it seems that there's some kind of device-skull contact most of the way around.
The tech is cool. The UI as described is for campy, unergonomic noobs only.
"he is trying to see Project HoloLens as if for the first time" - this is a rather significant problem when talking about the real world, as per the article. Consider the electrician discussion in the article. Real electricians want / need / expect real tools, not noob-friendly Fisher-Price toys.
It's a tool for camp. As Wikipedia says, camp is "based on deliberate and self-acknowledged theatricality." And camp doesn't appeal to everyone, all the time. If they were making a UI based on camp in a cultural era of camp ascendance, let's say the late 60s and early 70s in the USA, then this would be a win; it would be "groovy", it would be "boss". But... it isn't.
Another way to describe it is ergonomic problems. I'm used to expressing myself, however poorly, with staccato finger gestures at 104 keys at a desk at 100+ WPM. Any failure is a failure of my own creativity, not the user interface of my keyboard, which seems fairly capable in better hands... Now I must downshift and do interpretive dance, or gang hand signs, to communicate. No, I think not. That's aside from gorilla-arm problems limiting duration and comfort (no 12-hour shifts at the computer, for better or worse). And the speech interface limits it to quiet home use, while alone.
The optical technology sounds incredibly impressive; I'd love to play Minecraft wearing it. Or Forza, or a zillion other games. It's just the UI that sounds truly awful.
I'd imagine that the target is a replacement of the PC. You could (theoretically) have the same utility as a PC with a wireless keyboard/mouse paired with the headset. On top of that is all the holographic applications that will be figured out.
One general HUD use case I think would be beneficial is a driving aid: pedestrian and vehicle detection (or in rural areas animal detection). Whether the HUD is goggles or projected onto the windshield the extra data would be useful when driving. Automatic braking systems are great and all, but if you can see a deer on the side of the road in the distance via a HUD at night you can slow down well in advance (whereas the automatic braking system would engage when the deer crosses the road).
I would want to see more than a couple of independent tests verifying this is actually safer before it were allowed on roads. I can see the potential benefits but there are enough terrible drivers already without introducing more distractions.
Think about the holodeck from star trek. From an entertainment standpoint, you could immerse yourself in another world and solve a mystery. You could also learn all sorts of stuff with live instructions while working with physical tools when doing things like woodworking or electrical work etc.
Both portable cell phones and PDAs existed 10 years ago. The smartphone was a logical extension of those paradigms. AR Glasses (for lack of a better term) are still very young.
VR is going to be a real game-changer for entertainment, but it's AR that's going to change how we work and live our everyday lives. The key to making VR work is reducing lag, but this is even more important for AR, which carries additional complexity in sensing and blending the virtual with the real. The difficulty of sensing and correctly modelling the real world in real-time is immense. There aren't many companies I'd believe could make the leap directly to functional, useful AR, but MS's experience in gaming, with the Kinect, gives them a huge head-start. This could be the real deal!
>VR is going to be a real game-changer for entertainment
I think the jury is still very much out on this. Personally, I hate VR. It's claustrophobic, asocial, dizzying, and I don't like the idea of giving software my entire field of view either. Cheap HMDs are going to be nice, but I don't know how well they'll do outside the hardcore gamer demographic. I can't imagine watching a movie with friends with each of us wearing these things, or even playing a console game with the other players in the room.
AR gaming, on the other hand, isn't something that gets much press, but I can really see some novel uses. Imagine playing something like "Gone Home" but in your home. Clues hidden in your real-life closets, drawers, etc. Or peering into the mirror in your bathroom and seeing the main character's face instead of yours. Or something in the background that's not really there.
I wouldn't watch a movie in the same room as friends with a HMD, but I could watch a movie with someone in a "virtual theater" when they're 1000 miles away.
I don't think we should write it off so quickly; it's like saying "Text messages are pointless, they don't carry inflections and emotion like phone calls. I can't see myself texting anyone." There are things you can do in real life but not in VR and there are things you can do with VR that you can't do in real life.
Depending on the resolution, it could replace your monitors at work (assuming the headset is light, comfortable, etc). Using a wireless keyboard and mouse, you'd have your computer with you anywhere in the home or office.
As a developer, having a wall of holographic desktop screen space in front of me would be amazing. My home office would look much better too!
I have no idea how far away this version of HoloLens is from that reality though.
I think the 'depending on the resolution' is key. It's not really a matter of software to me; it's whether or not the hardware can deliver the necessary resolution to replace a monitor that's typically 1.5 ft from your face. If you can break the dam by executing on the hardware, the software will flow. (I wonder how HoloLens compares to Magic Leap's technology.)
I think it would be very cool to just have 3 tripods on your desk (a ball on a stick) that represent 3 screen spaces. You could reach out and move them around, telescope them up and down in the physical world, and the headset could use them for triangulation to render an accurate monitor on each. You'd still have a keyboard and mouse in the first iteration and then slowly over time give way to other forms of input.
Thinking about how our eyes really work - you don't need the same resolution everywhere. You only need proper eye tracking and the right amount of resolution in the center of your vision (or wherever you're looking). You only need it to render what you are looking at. I'm sitting about 2 feet away from a 27-inch screen... I'm never looking at the entire thing in such a way that I need all that detail at every point. Sure, I need it to be there when my eyes dart around... but as long as that's handled, it will look just as real.
Given something more adaptive, there's no reason you couldn't have a ginormous holographic wraparound workspace... or whatever your imagination can come up with.
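That eye-tracking idea can be sketched roughly in code. The 60 pixels/degree foveal acuity figure and the inverse falloff with eccentricity are assumed numbers, and this is a 1D toy, but it shows the scale of the savings:

```python
def required_ppd(ecc_deg, foveal_ppd=60.0, falloff_start=5.0):
    # Render at full acuity near the gaze point; let resolution fall off
    # roughly inversely with angular distance (eccentricity) past 5 degrees.
    if ecc_deg <= falloff_start:
        return foveal_ppd
    return foveal_ppd * falloff_start / ecc_deg

FIELD_DEG = 100  # hypothetical horizontal field of view, gaze at the center
uniform = 60.0 * FIELD_DEG                                  # 6000 pixel columns
foveated = sum(required_ppd(abs(d - FIELD_DEG / 2)) for d in range(FIELD_DEG))

# Foveated rendering needs roughly a third of the pixels in this toy model.
print(uniform, round(foveated))
```

The catch, of course, is that the eye tracker and renderer have to keep up with saccades, or the trick falls apart.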
Very much this - being able to have 6 or 8 virtual displays projected on a wall would be great.
I know that this tech could do so much more (one giant display the size of the wall), but the separate displays are nice because they allow me to silo things into different categories.
Many years ago (close to 20) I saw a cool demo of a palm-able chording keyboard, and looking around it seems that a few brave souls have actually attempted to bring the idea into production.
A good keyboard only UI (I don't trust small scale pointer input), and you could move the entire traditional PC interface over into a very discreet package.
The web would be the one place where changing UI paradigms would be hard, the web pretty much insists on fine motor skills clicking one of hundreds of visible links on a page.
It's what I first thought of when I tried the Youtube app on Google Cardboard. The app has you in a theater with a main viewing screen, but then littered 360 degrees around you are other videos you can watch or cycle through.
All I could think was how awesome it would be to have half a dozen virtual screens (that I could manipulate) for programming. It would well justify the cost of a good device because six monitors would be probably just as expensive.
After seeing that Autodesk is finally iterating on some of their software to support the rapid prototyping/3D printing community in major ways, I'd like to see what kinds of tools Autodesk could come up with. I see Holo Studio as the MS Paint of "Holographic computing" or whatever we're calling it now.
Is that a killer app? I guess not, but if this HoloLens device is really running Windows 10, and the APIs are baked into the OS, that puts it MILES ahead of where Google Glass was, and developers will be able to do something more interesting than take photos and share them on Google+.
Can you expand and maybe source that comment? "Initially funded" to me suggests that the R&D was funded by people/companies for the purpose of carrying pornography.
"Popularised by" is often suggested, or even "first exploited commercially for" - either of these seems far more likely. I just can't see Daguerre, or whoever (Wedgwood, Fox Talbot, ...), getting paychecks from people/companies that want to publish porn?
I'm not saying, yet, that you're wrong - history is often surprising.
By wire recordings you mean the audio recordings on wires that predate reel-to-reel and such? Are there existing audio porn recordings from the 1890s? It seems strange, given the cost of the tech, that anyone would even want a recording when those who could afford it, given the massive difference in income and the availability of prostitutes, could order a live rendition. Stranger things have happened, though.
Telegraph? Morse or semaphore porn, ... I'd have thought that was really an exception to Rule 34!?
Where I work we sell shipping containers filled with products of various kinds. It would be incredible to fit a customer with these and walk them around the containers, open the doors, and interact with what's inside.
Normally you'd need a large open space (and a forklift) to demonstrate the product but with this? You could do it in a large room.
I think this is actually the killer app for anyone too.
Imagine shopping at Amazon and being able to manipulate a product in your hands before you buy. Want to buy studio monitors for the PC but don't know where to place them? Whip out your HoloLens and put them wherever you want!
How would that piece of art look on your wall? What about the other wall over there?
IKEA? Hmm... which couch looks best in my living room?
Imagine seeing a marker shooting into the sky, like an old movie-premiere spotlight, that marked the locations of your friends and family. Or just about anything.
At an amusement park and don't know where the nearest bathroom is? Follow the blue arrows on the ground and you'll find it.
Star Trek's Holodeck. Enter a large laser tag room with these on. Now you can project anything in the room. Unlike Oculus you can actually run around and play in the environment because you can see where you are going.
I argue it is going to be in the field of CAD editing and display. The whole field is totally set up for these "displays" (which, at bottom, is what they are), and the bulk of 3D models are produced in that field and are ready to integrate immediately.
A greenhorn tradesperson way out in the middle of nowhere looking at some broken machinery and having no idea why it's failed, calling up their boss with 40 years of experience, and having the boss, sitting on a comfy couch back at the office, look at the problem, explain it, and draw diagrams of how to fix it in 3D space in front of the greenhorn.
It's "augmented reality", so multiple people can walk around and see the display from all angles at the same time. So it's beyond stereoscopic, but it's not a traditional hologram. (It's much better than the Tupac "hologram" which was a single 2D image!)
It works by stereoscopy. There may be something "holographic" in the math used to compute what to show each eye, but there is no projection of light into space to form holographic images that people can walk around.
Nor is there with holograms. Holography requires creation of light fields from a flat surface - each point on the surface reflects a different amount of light depending on what angle it is viewed from, exactly mimicking the way light would pass through that plane if an object were there. No 'projection of light into space' is involved.
Since images are formed on your retina by focusing real lightfields, a true holographic display which produced a complete lightfield would be much more realistic and comfortable to view than a flat stereoscopic image is.
I don't know enough about holography to agree or disagree; I was under the impression that the "lightfield" has a 3D structure that e.g. the light coming from a movie screen doesn't.
In any event I don't think that the "holographic" goggles are actually projecting a complete lightfield. I'm pretty sure they just shine two more-or-less normal images into your eyes although the math to compute those images might, uh, be holographic.
You can capture a lightfield using a 2D sensor (https://www.lytro.com/). It's like the way your eye can re-focus on different distances without moving. That's something you can't do with MS's new tech - everything in the image will be in focus at the same distance, even if your eyes are getting different images.
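The refocusing trick a lightfield capture enables can be sketched with the classic "shift-and-add" method: average the sub-aperture views, each shifted in proportion to its viewpoint offset, and objects at the chosen depth line up while everything else blurs. This is a minimal illustrative sketch, not Lytro's actual pipeline; the data layout (a dict of grayscale sub-aperture images keyed by viewpoint) is an assumption made for clarity.

```python
# Minimal "shift-and-add" synthetic refocusing sketch. Assumption: the
# lightfield is a grid of sub-aperture images indexed by (u, v) viewpoint
# offsets; each image is a 2D list of grayscale values. All names here are
# invented for illustration, not any real API.

def refocus(subapertures, alpha):
    """Average sub-aperture images, each shifted by alpha * its (u, v) offset.

    alpha picks the virtual focal plane: 0 keeps the nominal focus, other
    values refocus nearer or farther.
    """
    first = next(iter(subapertures.values()))
    h, w = len(first), len(first[0])
    out = [[0.0] * w for _ in range(h)]
    counts = [[0] * w for _ in range(h)]
    for (u, v), img in subapertures.items():
        du, dv = round(alpha * u), round(alpha * v)
        for y in range(h):
            for x in range(w):
                sy, sx = y + dv, x + du
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] += img[sy][sx]
                    counts[y][x] += 1
    # Normalize by how many views actually contributed to each pixel.
    return [[out[y][x] / max(counts[y][x], 1) for x in range(w)]
            for y in range(h)]
```

With the right alpha, a point that shifts across viewpoints snaps into focus (full brightness at one pixel); with the wrong alpha it smears into a blur, which is exactly the depth-dependent refocusing a single fixed-focus stereo image cannot do.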
I doubt it actually has a holographic display; I assume the processing power of those glasses isn't enough, and current displays don't have high enough resolution yet. But then again, the following description could be of a holographic display (or it could be a description of an antireflective coating -- those dumbed-down explanations are sometimes worse than useless):
> To create Project HoloLens’ images, light particles bounce around millions of times in the so-called light engine of the device. Then the photons enter the goggles’ two lenses, where they ricochet between layers of blue, green and red glass before they reach the back of your eye. “When you get the light to be at the exact angle,” Kipman tells me, “that’s where all the magic comes in.”
Yes, that description certainly -sounds- like they're doing something more than just suspending two stereographic LCD displays in front of your face. Talk of angles suggests they may be doing something to stimulate the correct lightfield passing through the pupil of your eye... but really, need more technical reviews to know more.
I just finished reading Vernor Vinge's Rainbows End [1], and then this comes out!! I want it now!!! This is so the way to go. If you are a SciFi fan, you have probably heard of Vernor's books; if not, I really recommend them. They have actually given me a brighter outlook on the future of humanity :) Apart from Rainbows End, check out Zones of Thought [2] if you want an outlook on what space travel could become in the future.
Back to the AR subject, I much prefer AR over true matrix style VR. Let us stay in the real world, move about the real world, extend it with the virtual.
I think they have potential, but they are very clunky right now. Still, they have most of the same features as the Microsoft ones. It was really cool to "touch" a 3D object.
Yeah, I'd guess that someone leaked to Wired accidentally or on purpose (but the person wasn't cleared to leak), and to keep any info from getting out they gave Wired an exclusive on the condition that Wired keep its mouth shut until today.
Skeuomorphic UI design makes a comeback ;) Ironic that after they pushed for flat UI, this device goes back to 3-D UI (obviously 3-D is the whole point).
It might look bad if light sources of the rendered objects are not consistent with your surroundings.
It makes the Minecraft purchase make a whole lot more sense now.
I mean, it made immediate sense as a cash cow, but long-term Minecraft is perfect: it's simple, easy, forgiving, and fun - a perfect entrance into VR/AR.
Why do I get the feeling this is more akin to the trailer for a movie? One that I really, really want to like, and that, with decent editing to fit a small demo, looks bloody awesome - but on arrival will mostly just be boring.
I also feel that the videos give a very misleading sense of what it will look like to see someone using something like this. Unless, I suppose, they have worked out what the "shared" experience would be like. (That is, two of these in the same room.)
More close-up pictures of the device in that link. There's some sort of camera/light emitter above the left and right eyes. Two panes, and some sort of smaller HUD in front of the right (and left?) eyes, but it actually might be part of the second glass pane.
Wow. This looks seriously impressive and based on everything we know so far, it looks far better than Oculus Rift and Sony's Project Morpheus. I can't really afford such purchases at the moment because of an upcoming wedding, but I am definitely going to get one of these when they become available. Seriously, look at that Minecraft demo, impressive.
Looks like Microsoft has just upped the ante in the virtual reality goggles race. My mind is racing with excitement over all of the applications this could serve. The tech behind how these glasses actually work is also pretty clever. I don't entirely understand it, but it seems to be more than just an OLED display, lenses, and a driver like existing solutions. I legitimately feel more excited for this than I have been for Oculus and Morpheus.
Another clever little thing Microsoft have done here is calling it a holographic headset, not a VR headset. The different wording not only separates Microsoft from competitors referring to their headsets as virtual reality headsets, but it also makes much more sense (from a technical and branding perspective).
Yes, and Oculus Rift is now less relevant as well (good timing for selling it to Facebook). Why buy a dedicated heavy wired non-see-through helmet? To shit yourself when someone is tapping your shoulder?
To get fully immersed in another world. For example, playing games or watching a movie. I go to an IMAX theatre because it engulfs my senses, audio all around me, most of my field of view watching the screen. If the screen were translucent it simply wouldn't be the same.
This will no doubt have some incredible applications, most of which augment reality. Not quite the same goal as Occulus.
Correct me if I'm wrong, but couldn't Microsoft just release (or ship this product with) a sort of blinder to put around these glasses, so that everything you see comes from the glasses themselves while everything else is darkened?
Certainly possible. It'll be interesting to see how feasible it is for these goggles to render entire 3d worlds in this scenario, since normally they'd be rendering a small fraction of the space you're in.
If the speculation about creating opaque holograms is correct, you can just use a virtual blinder as well - a black hologram that obscures or creates a virtual stage.
Exactly. MSFT and Oculus are not at all targeting the same thing. The comparison is natural, but ill-fitting.
I bought an oculus and am excited by VR because teleportation IS FREAKING AWESOME. AR is cool, but a far cry from the sense of presence that we talk all day about in VR land.
Although they can easily replace it entirely. See the Mars Rover demo in the article, I believe it said that the entire image was replaced with Mars, and it was realistic enough that his legs weren't believing what they were stepping on.
Is the plan that you develop apps for this thing using C# (or any .Net language) plus special libraries? Or will the somewhat real-time nature of 3D imagery integrated with the real world using a lightweight mobile device require something closer to the metal?
"Developers can target all these device types, with one platform and one store. And stay tuned later while our device types expand." They will not abandon that mission, they just committed to it! (video 3:00)
Someday there could be gloves that allow you to feel these "holograms" using cables that prevent your fingers from moving up/down, etc.
I also think this could be a short-term way to introduce full-body physical constraints in full-on virtual reality. (In the more distant future, we will probably know how to stimulate the brain directly to produce these sensations.) I envision a full-body suit (fitted to the user with near perfection) with a bunch of cables going in every direction. So if you try to push a virtual wall, the proper cables will be set to resist. Doesn't seem very practical, but the glove version might be.
Here is a fresh "back-to-reality" report from an Engadget journalist who just tried a working HoloLens prototype (at the same event where it was announced):
---------------
"Does it work? Yes, it works. Is it any good? That's a much harder question to answer."
"I say this in the nicest way possible: Using Microsoft HoloLens kinda stinks. In its current form, it feels like someone is tightening your head into a vice. The model being shown today on Microsoft's Redmond, Washington, campus isn't what you saw onstage, but a development kit. The demos begin by lowering a tethered, relatively small, rectangular computer over your head, which hangs around your neck by sling."
"You can literally feel the heat coming off the computer's fans, which face upward. It feels like you're wearing a computer around your neck, because you are."
There doesn't seem to be full hand tracking (as suggested by the demo on stage). Instead it uses something they call AirTap which uses gaze for pointing and hand only for clicking:
"By looking at any of them and using "AirTap" (hold up your hand in front of your eyes, tap with your pointer finger), I could select any contact to call."
"While the effects of interaction were impressive, the actual interaction was less so. Rather than picking up a sheep with my hand by literally just grabbing it with my actual hand, my only means of interaction were voice (pickaxe! redstone torch! etc.) and the aforementioned "AirTap."
Overall impression is kinda mixed (especially compared to how well even very crude early Oculus Rift prototypes were received):
"HoloLens is clearly very early, and kinda sucks right now. It's uncomfortable. It's cumbersome. It looks and feels like a piece of hardware that's far from final."
"Is it bad? No. Lord no. Stop it. It's very impressive, but it's a brand new entry in a market that basically doesn't exist yet."
Gizmodo's report was more enthusiastic, especially about the display's ability to hide the real view, though it noted the "tiny" field of view:
"It's one of the most amazing and tantalizing experiences I've ever had with a piece of technology."
"It's not like the Oculus Rift, where you're totally immersed in a virtual world practically anywhere you look. The current Hololens field of view is TINY! I wasn't even impressed at first. All that weight for this? But that's when I noticed that I wasn't just looking at some ghostly transparent representation of Mars superimposed on my vision. I was standing in a room filled with objects. Posters covering the walls. And yet somehow—without blocking my vision—the Hololens was making those objects almost totally invisible."
"... you look down at the coffee table and there's a castle sitting right on the damn thing. It's not shimmery, but it's not quite real, either. It's just sitting there, perfectly flat on the table, reacting in space to your head movements. It's nearly as lifelike as the actual table, and there's no lag at all. The castle is there. It's simply magic."
>There doesn't seem to be full hand tracking (as suggested by the demo on stage)
The demo on stage clearly mentioned that you used your gaze to select and then tapped with your finger. Which is also exactly what we saw the lady on stage doing.
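The gaze-to-point, tap-to-click model described above can be sketched as a tiny input loop: the cursor follows gaze, and the air tap carries no position of its own - it simply "clicks" whatever the gaze currently targets. This is a toy illustration under those assumptions; the class, names, and 2D hit test are invented and are not HoloLens API code.

```python
# Toy sketch of gaze-pointing plus "AirTap" clicking, as the reports
# describe it. All names and structure are invented for illustration.

class GazeTapInput:
    def __init__(self, targets):
        # targets: dict mapping a target name to its (x, y) position
        self.targets = targets
        self.gaze = (0.0, 0.0)

    def update_gaze(self, x, y):
        # Head tracking continuously moves the gaze cursor.
        self.gaze = (x, y)

    def _hit_test(self, radius=1.0):
        # Return the nearest target within `radius` of the gaze point, if any.
        gx, gy = self.gaze
        best, best_d = None, radius
        for name, (tx, ty) in self.targets.items():
            d = ((gx - tx) ** 2 + (gy - ty) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = name, d
        return best

    def air_tap(self):
        # The tap gesture itself is positionless: it acts on the gaze
        # target, which is why gaze does all the pointing.
        return self._hit_test()
```

The design consequence the reviewers noticed falls out directly: since `air_tap()` only reads the gaze target, you can't grab a sheep with your hand - your hand is just the button.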
Promotional videos look impressive. The demo, though, not so much: gesturing seemed awkward, and clicking in the air is not an interface. I guess this could easily be improved using haptic feedback.
Of course, the naysayer in me is just thinking this is a PR move to tell investors and stakeholders everything is alright and that MS got their backs covered by stepping into the future. Which is not that unusual, Google and Amazon are doing it too.
It was a bit irritating to see the kid running up to the model of the rocket ship with excitement and no goggles on. Why lie and make it seem like you'll be able to see these "holograms" without the goggles on? Everyone walking around with these big ski goggles strapped to their head seems worse than Google Glass, which is awkward enough as it is.
This has the potential to make being rich kind of... obsolete.
Think of it like this: can't afford an iPhone? There's a holographic iPhone you can download. Can't afford a fancy big-screen TV? There are thousands out there you can download, etc.
This will all depend on the quality / ease of use of Holographic Windows, but I can already see it's the future of computing.
The last few major purchases I made with my relatively plentiful disposable income: a pair of Doc Martens, two plane tickets to Kenya, the yearly membership fee to my concierge medical clinic. None of these could be replaced by this technology. Arguably, I could make it look like I'm wearing new shoes when I look down, and I could load up "Kenya" mode on my holographic goggles -- but nothing will replace the feel and protection of good shoes, or replicate the smells, tastes, and adventure of actual travel.
Also, I still own an iPhone 5 and don't have a particularly fancy TV...
Your same argument has been made (to a greater or lesser degree) with the advent of the industrial revolution, the Internet, cheap processors, 3D printers. There is no technology that will make wealth obsolete.
I wonder why they didn't demo an outdoor use case.
I'm curious if this technology is closer to oculus rift than to google glass.
I'm thinking maybe they use a front-facing camera to capture the scene and render the 3D stuff on top of the camera view? A user simply sees the composite scene via a display similar to Oculus Rift.
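The video-see-through compositing speculated about here boils down to the standard alpha "over" operator: blend a rendered overlay onto each camera frame, weighting by the overlay's alpha. A minimal sketch, assuming frames as 2D lists of (r, g, b) tuples and an overlay carrying (r, g, b, a) with alpha in [0, 1]; this is purely illustrative, not how HoloLens actually works (its display is optically see-through).

```python
# Sketch of compositing a rendered overlay onto a camera frame using the
# overlay's alpha channel ("over" operator). Data layout is assumed:
# camera pixels are (r, g, b), overlay pixels are (r, g, b, a).

def composite(camera_frame, overlay):
    out = []
    for cam_row, ov_row in zip(camera_frame, overlay):
        row = []
        for (cr, cg, cb), (vr, vg, vb, a) in zip(cam_row, ov_row):
            # Overlay weighted by its alpha, camera by what alpha
            # leaves uncovered.
            row.append((round(vr * a + cr * (1 - a)),
                        round(vg * a + cg * (1 - a)),
                        round(vb * a + cb * (1 - a))))
        out.append(row)
    return out
```

Fully opaque overlay pixels replace the camera view (virtual objects), fully transparent ones pass it through unchanged, which is the whole video-see-through trick.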
Maybe the tracking of hands for the controls does not work outside. At least with the Kinect, the IR projection it depends on to map 3D space will not work well (or at all) in direct sunlight. When I was working with robots that had Kinects mounted for 3D vision, we always had to put the blinds down when the sun was shining into the office.
I watched the live demo and was really excited about this, and Mary Jo Foley and Paul Thurrott got to use it and are really excited about it too. They also mentioned that it will be released this year. They talked about it on TWiT. Paul said that it looks as good as the pictures on the holo website.
The HoloLens description reminds me of the VR technology depicted in Arthur Clarke's "Light of Other Days". It's interesting to see how many of the things he predicted of the future we have achieved, through different means than what he imagined.
The technology looks very impressive, but I do not see how this will be widely useful. The necessity of holding your arm out as far as it will go will only cause gorilla arm, and is the reason touchscreen desktop PCs are always abysmal failures. You need to move your arm far more than with a keyboard and mouse, so you'll never be productive. I can't see it being useful unless it's for holographic weather reports, where excessive gesticulation is apparently mandatory. 3D modelling with that would be exceptionally painful. Perhaps gaming or something, but I wouldn't want to sit in my armchair waving my arms around like I'm suffering some sort of uncontrollable seizure (no offence intended btw, just the best way of describing my actions to onlookers).
Might allow them to reinvent their "Windows" theme when people start interacting with their environment through some sort of lens. It will act as a 'window' to the world.
I don't understand why there is no comment about probably the most important aspect: the image quality. Is it low resolution, or really high-def such that you can't see individual pixels?!
Hey! We're having our first Meta (YC S13) AR Hackathon in SF Feb 20th-21st. Come and hack on our Meta Glasses and have a hands-on AR experience! goo.gl/b6BIWN
I know this is the future, and I feel somewhat weird in saying this but I am not quite happy. I feel like technology is coming to replace the physical world instead of augment and improve on our actual experiences. Either way it's some awesome work that was done.
This is 100% designed to augment our actual experiences. That's why they said "this is not VR". It's putting digital objects in the real world. That's augmenting actual experiences.
This discussion is happening quite often in the VR world. Essentially the accepted opinion is that 1) AR is harder than VR and 2) solving the problems that VR has will get us closer to usable AR
I might be ignoring something, but this appears to be the first ahead-of-the-pack innovation from Microsoft that I've seen in at least 20 years. Congrats to them.
Kinect was pretty surprising too, especially at that price point. EDIT: Granted, they didn't develop 100% of the tech themselves, but still: depth perception, human skeleton recognition, in real time, on a crappy 360, for $150? That was impressive.
[1] Shameless plug for (http://www.visidraft.com), my AR CAD company.