Hacker News
Hands-On with Microsoft's New Holographic Goggles (wired.com)
946 points by vesinisa on Jan 21, 2015 | 372 comments



I think Microsoft re-branding Augmented Reality as "holographic" is hugely impactful and, with the new wave of AR and CV products coming onto the market [1], might be the thing that makes people take it seriously.

It's just so much easier to explain to people what it is if you say Holographic vs Augmented Reality - even though it is technically wrong. Kudos to MSFT for making that leap.

[1] Shameless plug for (http://www.visidraft.com), my AR CAD company.


I totally agree. AR made and makes ZERO sense to my non-technical spouse and family. Holographic makes total sense. It's a clever rebranding and it will help sell ALL AR platforms, IMHO.


There are various times in history when vacuum cleaners, cell phones and browsing the web would have made ZERO sense to your non-technical family.

People will learn new words that define new things and then suddenly it will seem the most natural thing in the world to call it by that name.


Portable tape players were cool. Then suddenly Sony made one, and we now all (those of us old enough) remember fondly our first or favorite walkman, whether it was an actual Walkman or not.

Smart Phones were hot tech, but our non-technical families didn't really get the point until suddenly those "smart phones" were "iPhones".

Web searches were a mystical dark art, with all your Alta Vista voodoo, until there was a single box with an algorithm that did a pretty good job finding what you meant... and then many competitors later, someone called one of those boxes "Google" and now that's the verb we use.

Marketing, sales, adoption drive the names things come to be known by. Sure, in reality it's Augmented Reality, but if in 5 years everyone's talking about their sweet new Holographic ski goggles, it's still AR in the mainstream.

A rose by any other name


Hologram/holographic is already a word. It has a very specific meaning. It is not a new conjured word like Walkman or iPhone, which were just one implementation of a portable tape player or smart phone.


It also has a rather general meaning for the majority of non-experts.

We re-appropriate words all the time in English, and it's generally a fine thing to do, specialists' consternation notwithstanding.


My mp3 player is still an actual Walkman.


What are you talking about? The walkman was the first portable tape player commercially available.

Also, what Google brought that Altavista didn't have was that it actually returned relevant results, not pages of spam where webmasters crammed in as many keywords as they could (including "Pamela Anderson", always).


I was talking portable players, not cassette tapes specifically. There were various options for tape, and 8-track and the like before the Walkman, all of which were portable and played music but none in the way that made the Walkman take off, obviously.


Sometimes people learn new words, other times keeping old words (usually modified) works better. To take your example, in Turkish the word for vacuum cleaner literally translates as "electric broom", and cell phones are commonly called "cep" which actually translates as "pocket". Or consider the British who still use electric torches (that's a flashlight for all you non-Brits).


We Brits also call a cell phone a "mobile phone" which describes perfectly what it is.


In the US, a "mobile phone" is a cordless phone with a base.


As an American, I don't agree.

All the companies are branding as "mobile." I don't think they really even sell household corded phones anymore, we don't have to be specific about them.

If I wanted a corded phone, I think I'd have to use that specific modifier to get one.

Anyway, back to the point--say "mobile phone" to Americans and they know it's a cellular phone.


Seconded. "cellphone" and "mobile phone" are basically interchangeable to me. Though, honestly I'm more likely to just refer to it as a "phone" and use a qualifier to refer to the non-mobile variety (e.g. "house phone" or "land-line")


we just say "torch"


and just about everyone says "hoover" not vacuum cleaner

I bet it would take a long time for most people, if asked "how does a vacuum cleaner work?", to use the word vacuum, apart from as a name for the object.


Nah, "hoover" is just your dialect. American here, grew up in CA, and I've never heard or said "hoover".

But yeah, I wouldn't say "vacuum" as a method of action unless you really drilled down. (Something about "pump" and "sucking", but I'd have to think for a bit to get to the actual physical "vacuum" aspect of it.)


You're right, I meant to type "in the UK" somewhere but it seems I forgot


In Indiana, a lot of people said "sweeper" and they "sweeped" the room with it.


Can confirm, Hoosier here.

Also 'clicker' for remote.


True, but that word is pretty clearly gonna be Hologram. Augmented Reality isn't as fun to say as Hologram or Kleenex.


Browsing the web - yes

But "vacuum cleaner" and "cell phone" contain in the name the explanation of what they are and do. "Augmented reality" doesn't. Holographic... it does a better job, at least for some people.


How does "cell phone" explain anything useful about that particular type of phone? Using the word "cell" for a specific geographic area covered by a particular radio tower is itself a rather vague and general analogy. You could use "cell" for any element of a larger structure or organism I guess, but it wouldn't explain anything specific.

"Mobile phone" could have made sense to someone even before the introduction of mobile phones. But I think the word "cell phone" only took on meaning after the introduction of the device itself.


The cell(ular) in cellphone refers to the handoff between cell towers as you move around.

In comparison, radio tends to have a single tower that sends radio waves and does not care about who is listening. The advantage for cellphones is you can reuse the bandwidth from the same small set of frequencies across the country AND maintain a phone call during a handoff (where radio keeps forcing you to change stations on a long trip). The issue is that the phones need to rebroadcast their position constantly, which eats battery life. It's even worse when they fail to connect to a tower, as they just keep trying until the battery fails.

Granted, an end user might not care; MMX or SSE mean little to most Intel customers. 'with techron', 'dual turbo', 'LED TVs', 'tessellation', 'electrolytes'


I wasn't referring to "cell", I was referring to "phone". Just like with "vacuum" and "cleaner". You don't need to be an engineer to understand that the first one is used to make phone calls, and the second one to clean.

Augmented reality, on the other hand...


It makes no sense at all if you think of an actual hologram, but makes perfect sense if you think of a Star Trek hologram. This is like a wearable holodeck. Just add a haptic body suit (no doubt in the works)!


Note to self: Add to list of things I should make.


Hacker news comments probably aren't the best place to store personal notes.


Because "Make Holodeck" is such a personal and valuable idea


From their marketing material, they claim to have figured out a way to fake depth. If true, that would be a huge step compared to classic stereoscopic technologies or other cumbersome devices.

However, I doubt this is the case, else they wouldn't just slap this announcement at the end of a boring Windows 10 presentation. I mean, all I could remember about this presentation was: "Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10. Windows 10 with HOLOGRAMS using Windows 10! Windows 10. Windows 10. Windows 10."


I don't think they are pretending they figured out a way to fake depth; it sounds like they have created a compact head-mounted light field display, which is absolutely a huge step forward. Unlike classic stereoscopic technologies, light field displays let your eyes refocus the image, because the displays recreate the direction of the light from the object as well as the color and intensity. NVIDIA demoed a compact head-mounted light field display recently that explains the concept. See it here https://news.ycombinator.com/item?id=8451746


No need to fake it... if you can get the retinal projection accurate enough, with fast enough eye tracking, the depth is as real as anything else you'll see in real life...the goal would be to have it feel completely natural.


... and to do it, as well as motion tracking and environmental feature identification, in real-time with as little latency as possible, and do it on batteries.

I keep feeling like I'm watching the Longhorn demo at PDC 2003...


They added a third chip, besides the CPU and GPU; they call it the HPU (holographic processing unit), which could speed things up considerably. If they hooked it directly to the CCD then they could really grok terabytes of data on battery.


Magic is not possible. Whatever the HPU does is still constrained by physics. They can't explain away the engineering problem with an invented new name for a device we know nothing about except that it defies the laws of physics WRT computation.


What we do know is that fixed-function ASICs can be 10s to 100s of times more power efficient than general purpose (von Neumann) computing.

So nothing they have described defies the laws of physics.


So, where would the terabytes per second come from? ... on a head-mounted device?


I really hope they get the tracking and depth accurate, with objects "sticking" where they belong in 3D space, without moving out of place or floating the wrong way during quick head movement. If they can do that, most of the battle is won and it will be amazing.

Edit: although, of course they'll need some intelligence on the surroundings to identify surfaces and stuff. But imagine like re-decorating your work room, adding scifi textures or something, and maybe pipes or whatever ;p


They didn't, otherwise they would show an eye view instead of a third-person impression of what it's supposed to look like to the user.

Most likely it suffers from the same shaky, laggy, snap-into-place tracking as every other AR setup.


But they showed footage "through the eyes of the wearer" and they let press have a hands on demonstration, so it's not like they can really fake anything.

I did see a tiny bit of judder in the footage that was supposed to be exactly what the person wearing the glasses would see, but it was hard to tell.


In the video I saw at the conference presentation, the "holograms" were always in front of the person's appendages, obscuring things: https://www.youtube.com/watch?v=b6sL_5Wgvrg&spfreload=10


Peter Bright said it didn't suffer from that - see the Minecraft section of his review: http://arstechnica.com/gadgets/2015/01/hands-on-with-hololen...


> intelligence on the surroundings

In case any readers here weren't aware of how Kinect works, it sends the developer a 2D depth image. Of course, as walod says, there's work to do to identify surfaces (as you can see in the image below, background elements are excluded).

http://www.gadgetguy.com.au/hands-on-with-the-xbox-one/micro...


You still need to blur objects that should not be in focus or you're going to get mixed depth information.

EX: http://www.photographyblogger.net/wp-content/uploads/2011/06... Now picture an in-focus image behind the blurry pens in the background.


>even though it is technically wrong.

Pretty much everything in this space is incorrect on some level. People casually call the Oculus "VR" when it's just an HMD. The VR is going to be the software that works with the HMD. This is like calling a joystick a game.

Personally, I like the hologram branding. It's like the Star Trek holodeck, which evokes a really neat sense of futurism.


Oculus VR is the name of the company. Oculus Rift is the name of the product.


> People casually call the Oculus "VR"...

What people? Things I've never heard anybody say:

"Can you hand me my Oculus air quotes VR?"

"Dude, I spilled deer urine all over my Oculus virtual reality!"

I can imagine a northern european saying it in english, but those guys kind of sound like lolcats anyway:

"Sven, can I has Oculus virtual reality?"

Personally, I dislike the hologram branding - yet another corruption to spread confusion. I understand the motive, I just don't like it.


It's turned into shorthand for "seeing objects in space that aren't really there". They could have also gone with "compugraphic hallucinations" or "pink elephant computing".


Visidraft might be useful for an idea at our company. The video looks good but the "Get Visdraft Now" button doesn't do anything. Is it available yet? Pricing?


Sorry, send me an e-mail: Andrew@visidraft.com and we can talk.


All they had to say is: you can play Minecraft on this. Sold!


Don't let my kids see this :)


Well, "stuff superimposed in front of a live video" got re-branded as AR, so AR which was intended to be this kind of crap needs a new marketing name.


It will work. Until somebody comes up with an actual holographic device. I hope the name rights won't be gone by then...


MS just has to purchase that company and say they just made the next generation, problem solved.


You're probably right.

So sad...


I tuned out when I read this: "Sensors flood the device with terabytes of data every second, all managed with an onboard CPU, GPU and first-of-its-kind HPU (holographic processing unit)."

I'm sorry, but there is no wearable device which can handle terabytes of data per second. Heck, my brand new Haswell has a peak memory bandwidth of 17GB/s; even to the L1 cache, its theoretical max is 700GB/s.

This sounds like a puff PR piece.


It's Wired. The only reason we care about them is that sometimes they report badly on something others don't report on at all.


They probably confused that with GB. Not sure which sensors would even put out Terabytes per second. Let's assume they have a brand new high dynamic range RGB-D camera that has 32 bits per channel at 60 Hz. To reach 1 TB/s that camera would have to have approx. 1 Gigapixel. --> Not very likely.
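The arithmetic behind that estimate checks out. A quick sketch, using the comment's own assumptions (32 bits per channel, 4 channels for RGB-D, 60 Hz):

```python
# How many pixels would an RGB-D camera need to produce 1 TB/s,
# assuming 32 bits per channel, 4 channels (R, G, B, depth), 60 Hz?
bits_per_channel = 32
channels = 4
fps = 60
bytes_per_pixel_per_second = bits_per_channel * channels / 8 * fps  # 960 B/s
pixels = 1e12 / bytes_per_pixel_per_second
print(f"{pixels / 1e9:.2f} gigapixels")  # ≈ 1.04, i.e. roughly 1 gigapixel
```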


> They probably confused that with GB.

Completely understandable, given the fact that Wired Magazine is after all, pretty new to all this computing and technology business ... /s

I can't imagine any (tech-savvy) editor proofreading this not going "wait what, terabytes, really??" -- which is then presumably their job to doublecheck.


I understood it as saying the sensors are reading terabytes of information every second. The data would get selectively loaded into the device based upon the task at hand. I don't see that as far-fetched -- I can hook 100 digital cameras up to my computer and make the same claim.


The terabytes of data every second was also mentioned in the live stream of the announcement, when talking about the custom HPU chip.


They didn't mention a timeframe for the amount of data collected. The exact quote is "[...] all by processing terabytes of information from all of these sensors, all in real time."

1:48:45 in the official video: http://news.microsoft.com/windows10story/


I got the impression that it was a write-up of the demo video, not actually hands-on... It's a confusing article and there's no real-time video or images of anyone actually interacting with it. I'm actually quite confused as to what this is; bad PR piece - more like a mockumentary on the Discovery Channel...

I'm confused, Wired - please clarify


> I got the impression that it was a write up of the demo video, not actually hands on...

That's not really an excuse to dumbly repeat something that sounds so amazingly far-fetched it verges on the physically impossible (see the other comment elsewhere: what sensors even produce TBs of data per second?), reporting it without even blinking an eye, without so much as a "yes, you read that right, we think it's hard to believe too", or preferably some explanation of how it could even be possible. How did the reporter not go "Wait, what? Terabytes per second?!"


A current generation Haswell chip can easily manage terabytes per second in on-chip cache and by manipulating registers.

Eight or more virtual cores plus SIMD operations that can smash against large chunks of data per cycle adds up awfully fast on a 4GHz chip.

They're also probably counting the fact that data flows from one system to another in sequence, but adding up each sequence. Eight streams of 150 gigabytes per second for example.

This doesn't count information that's captured but discarded at the source, processed away before it's transmitted downstream.


Maybe the article is, but the product isn't puff. https://www.youtube.com/watch?v=RCCXZ8ErVag


It's probably more along the lines of the same data being processed within the same chip, not actual memory bandwidth. This HPU they are talking about is processing depth information and eye lines (probably quite a bit more often than 60Hz); it's quite possible they are processing the same data multiple times over, achieving theoretical bandwidth in the TBs. It's disingenuous, sure, but it's like saying a network with twenty 100 Gb routers can handle terabytes of data.


For that matter, what sensor can produce terabytes of data per second?


The CERN Large Hadron Collider sensors[0]

(not head-mounted!)

[0] http://en.wikipedia.org/wiki/Large_Hadron_Collider


Thanks for the costume idea!


Nope, that's 25 petabytes per year - under 1 gigabyte per second.
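For anyone who wants to check the conversion:

```python
# 25 PB/year expressed as a sustained per-second rate.
seconds_per_year = 365 * 24 * 3600        # ≈ 3.15e7 s
rate_bytes = 25e15 / seconds_per_year
print(f"{rate_bytes / 1e9:.2f} GB/s")     # ≈ 0.79, well under 1 GB/s
```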


It's not constantly producing data though.


A single 4K2K RGBD sensor at a high enough refresh could generate something in the 100Gbps range. The device has at least 2 (possibly 4?) forward facing sensors. It's presumably also doing inward facing gaze tracking, audio and IMU.

As a point of reference the Leap Motion Dragonfly has 2 x 3K sensors w/ 225fps color and 720fps tracking.

Presumably the "HPU" is an ASIC that bakes in some sort of SLAM/positional tracking, skeletal tracking, and gaze tracking.
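The 100 Gbps ballpark is plausible under some assumed parameters. A rough sketch; the 8 bits per channel and ~350 fps figures are my guesses, since the comment only specifies the 4K2K resolution:

```python
# Rough data rate for a single 4K2K RGB-D sensor at high refresh.
width, height = 4096, 2160
bits_per_channel = 8
channels = 4               # R, G, B, depth
fps = 350                  # assumed high-refresh rate
bits_per_second = width * height * bits_per_channel * channels * fps
print(f"{bits_per_second / 1e9:.0f} Gbps")  # ≈ 99
```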


Video of it in action here (?): https://www.youtube.com/watch?v=WIiTfdqCUIY


Here is the product website with three promo videos on it:

http://www.microsoft.com/microsoft-hololens/en-us


I think this type of AR / holographic technology has many, many more potential real-world applications than VR. With VR, you're shutting yourself off from the outside world. Here, you're enhancing the outside world with technology. You still get to interact with others. Using HoloLens doesn't stop you from doing almost anything.

What I'm curious to find out is whether HoloLens will run into the same core problem as Glass. People are afraid of people wearing Glass. They're scared that they're being filmed, or worse. Unless HoloLens can avoid making you stand out - by looking like regular glasses, or even contact lenses - I'd guess that HoloLens will end up suffering the same fate as Glass.


It's hard to understand how AR / holographic technology could help people in their day-to-day life. There are a zillion potential uses, but all of them seem extremely complicated and hard to pull off.

Take the example of fixing your car: say, performing your own oil change, or replacing your alternator. That seems like a perfect use case for holo, right? The goggles would tell you what needs to be done and what the next step is.

But that would involve so many technical challenges that it seems very difficult. You, as the creator of the FixYourCar holo app, would need to detect what type of car the user is looking at, what part of the car they're looking at, render an overlay with the correct orientation, and so on. And at the end of all that, it's not entirely clear that your app is more helpful to them than if they'd just looked up a list of steps for fixing their car on their mobile phone.

I guess what I'm asking is, what do you think holo's killer app would be?


I don't know about the consumer market, but I can think of numerous commercial applications.

Sony currently sell an HMD for surgical use, allowing for comfortable and convenient viewing of video from endoscopic cameras. A practical translucent HMD would be extremely valuable in surgical procedures guided by x-ray or ultrasonic imagery.

To give a trivial and easily-implementable example, I would have bought Google Glass without hesitation if it integrated with my electronics test equipment. Being able to view data from an oscilloscope or logic analyser without taking my eyes off the PCB would be a boon. PCBs are designed with fiducial markers as a necessary part of manufacture, and machine vision is already used extensively in many aspects of electronics manufacture and repair; It would be relatively straightforward to overlay all sorts of data that would be enormously useful to technicians and engineers.

Stereoscopic and volumetric displays are used extensively in petroleum and mining geophysics; This equipment is currently relatively niche due to extremely high cost, but could be used in a much greater range of geoinformation applications if costs fell.


Imagine a car mechanic remotely helping you do what needs to be done. This use case is showcased in one of the videos, and it is probably not as complicated as building an AI engine to help a user repair their car.

This thing is really crazy!!


This is actually a relatively silly use case for this. A lot of the actual difficult things that a mechanic can do for you usually involve more strength than you have. Or just experience in actually working with things that they can't see.

Now, a mechanic using this to "see" things that are actually in control of a remote robot? Pretty cool. Showing you the thing that is right in front of you? Cute, but ultimately silly.


Yeah, I'm just not sure that I buy the idea that there's a big market for an expert coaching you through doing repairs via AR goggles.

How exactly does this work?

You still need to pay for an expert's time -- in fact, you probably need to pay more for it, because the expert is probably faster to do a thing than to explain the thing to you and then you do it. Also, the expert now needs to be someone with these additional skills of coaching someone through an operation.

Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?

I mean, maybe! Especially if you can locate the expert in some place where labor prices are much lower (so: India). But then you also need the person to buy the AR goggles. And how often does this use case come up? Is this like 3D printers where people try to sell me on the concept that I could pay hundreds or thousands of dollars for something that can make me things that cost less than $20 and which I need three of every year?


I agree this is highly unlikely to be a common use case. I might use it, because I try to do most things myself, and I could often use some expert advice. And I know the people who would help me, but it'd be inconvenient to have them come all the way out here. But overall, this is a one-in-a-thousand use case.

But:

> Tools are still needed -- is there actually a big market for the kind of repairs you can do with the tools that everyone has lying around at home but which is complicated enough to need hand-coaching by an expert?

Almost all repairs on appliances, cars, houses, and so on can be done with tools you have laying around the house; they don't require anything more than a hammer, drill, screwdrivers, wrench sets, etc. The only thing you're typically not going to have on hand is replacement parts, which are usually not too difficult to get your hands on and which you would have to pay for anyway.

When people do this, it's not going to be an "expert", it's going to be something along the lines of "hey dad, look at this real quick."

> But then you also need the person to buy the AR goggles.

The Ars reporting showed someone using a Surface to view and annotate the HoloLens user's view, not another HoloLens. So the barrier is much lower; any Windows 10 PC should be good enough.


Have you ever worked on a car? Tons of specialized tools can be needed for some tasks. Do you have a set of triple square bits? A general purpose puller? A bearing extraction tool? How about half-inch drive Torx sockets? A 200 ft-lb torque wrench? When you work on a car for fun, you find that your collection of tools balloons just for all the things on the car that require one specific tool. If you think you can do everything on the car with just a simple socket set, you'll wind up stuck and having to buy a new tool for every task.


Yes, although granted when I said "almost all repairs" I did not have in mind major automotive work, but more along the lines of general maintenance and little things going wrong. Of course if you're doing something like rebuilding the transmission you'll need more tools than the average bear.


Fair enough, but my point was that for things where you'd benefit from a mechanic walking you through a task, you'll probably need some specialty tools to do the job.

Also, apologies for my brusqueness.


Training. Emergency assistance. Military applications. Don't trap yourself by thinking that the item needs to be the be-all for the Joneses.


Yeah, this could make putting Ikea furniture together a lot easier.


Like that's ever going to happen.

Online training though - a whole other thing. Imagine math and physics with interactive 3D visualisations.

Or for kids, a virtual version of one of those 30-in-1 electronics teaching kits.

And so on.

If this works it could be a game changer, and it could also create a whole new app industry.

I just hope the technology is non-crappy, and it doesn't get managementised into uselessness.


None of those things require the augmented reality aspect of this: you don't need to place math and physics lessons into your local environment. They'd do just as well with VR, and indeed it doesn't sound to me like they'd do MUCH worse with just a plain old screen. What is it that you're imagining we couldn't do with a tablet that has swipe gestures to rotate the demo around all axes?

An electronics teaching kit might not work on a tablet (but would in VR), and note that any kind of really fluid manipulation of a virtual environment is going to involve a whole additional technology that gives precise locations of your hands (at least). The HoloLens allows a few simple gestures, not the ability to handle virtual objects in many degrees of freedom.


I'd approach this problem differently. Not from a service industry angle but from a product vendor angle. Imagine a world where AR-glasses are widespread. I'd be pretty interested in buying the kitchen sink that comes with an AR repair guide or the furniture that comes with AR assembly instructions.

So I think the interesting market is in building the infrastructure/app that makes it easy for vendors to create the content and ship it as a (free) addon for their products.


You could draw on a much bigger pool than professionals. There are plenty of people who know a skill and don't professionally sell their services but could spare a few minutes occasionally to help someone with a problem. If you combine skill tracking with instant global availability of services, you have a lot of room for development.


At any given time there are probably thousands of car mechanic experts sitting idle


I don't think that 'strength' is the issue, but better tools and a lot more experience are the core parts: I remember watching a mechanic changing the light in my car: what took me ~15 minutes (I'm not kidding) took him ~30 seconds.

And that's like riding a bike: you cannot really tell someone how to do it..

It can still be very interesting in many ways!


But the hologram need not be a person! At least, it won't be once this use case makes sense. The hologram will be an AI hologram. Just like the light switch installation in TFA, it would be silly to have a human expert show you how to install a switch, or swap out a component in your car, once an AI expert will do.


Don't forget actually having the right tools to do the job.


Ah, yeah.. That's a good point. And the bandwidth probably exists to transmit images from the goggles to someone else in near-realtime.

Now I'm really excited about this.


If there's a market for this, why doesn't it already exist? You could take your smartphone under your car, and video chat with a mechanic anywhere in the world. The mechanic could even draw arrows or highlight areas on your video in real-time as it's looped back to your display.


https://www.youtube.com/watch?v=4LE_IocFnL0

http://www.bmw.com/com/en/owners/service/augmented_reality_w...

It's not like car manufacturers don't have CAD models of their own cars or anything, right?


That adblock thing would probably be great. Also, I often wish there was an easy way to compare all those specials in the store. Usually when you work out that 2-for-x offer, you see they gave a generous 10% discount that made you buy an entire extra thing for no real saving.


Or allow them to select from a list?


Making it easier to find your car in a large parking lot would be a start.


I noticed that the promo videos all show you doing things indoors, in more or less private settings: your living room, your kitchen, your workspace. Contrasted with the initial Google Glass video (skydiving, jogging, meeting for lunch, etc.), I think it's safe to say Microsoft has learned from Google's mistakes.


I noticed this too. It avoids a whole class of problems interacting with other people, and seems like a good idea marketing-wise.

I was also thinking about battery life. If it's not designed for outside, then presumably you'll be near a charger, so you're less likely to run out of charge when you need it.


From the videos HoloLens looks like being an actually well thought out product unlike Glass.

Many people would find tools like these incredibly useful at work, in the car or at home. But not in the street, at the beach or in restaurants whilst talking to other people. That's just socially awkward/insensitive.


Reminds me of Google Glass's concept video:

https://www.youtube.com/watch?v=5R1snVxGNVs

I think Google over-promised initially, which led to many being underwhelmed with Glass. I hope Microsoft isn't making the same mistake here.


A better comparison would be Microsoft's demos for Project Natal, which eventually became Kinect: https://www.youtube.com/watch?v=j5__fZ3GsW8

There was also the other Kinect demo that featured Milo, but Molyneux probably deserves the blame for that infamous piece of hype: https://www.youtube.com/watch?v=yDvHlwNvXaM#t=10

I definitely want HoloLens to be real, too, but to avoid heartbreak I'll temper my hopes until more reports come in. Or, even better, a firsthand experience.


I've never used a Kinect; how does the promo video live up to reality? It looks almost identical to what I still assume Kinect is like, minus perhaps some of the highest-fidelity parts like the skateboarding and soccer, which I imagine have been attempted but turned out too clunky to be worthwhile. Am I wrong?


No. The problem is Google kept trying to force Glass as a consumer product for use in public.

And given that most normal people would know that it was socially awkward to use it in public only "glassholes" remained. This meant that buying/wearing Glass tarnished you with that label and associated you with that group.


It also just didn't work that well. Poor battery life, uninspiring apps.


This has become forgotten as the public perception of Glass became dominated by the whole "glassholes" phenomenon, but the Explorer Program was supposed to demonstrate that people could think up these kind of amazing life-altering apps that proved the utility of bothering to wear Glass.

They didn't. Years later, the reason to wear Glass remained "take pictures/videos hands free and shave 3 seconds off the time it takes you to check your text messages."

The MS product seems pretty clearly to be more broadly capable hardware, but I do still wonder if it will have actual applications.


The main application that sells it is likely to be less specialised than the cool demos, which are always a bit niche (modelling industrial design for motorbikes etc).

I wonder if its "killer app" might just be that a virtual big screen now takes up almost no physical space or weight.

Clear the big monitor off your desk, now your 11" laptop (or smaller) can effectively have a 40" screen, etc.

Unlike Oculus, you can still see the real world. Unlike Google Glass, it's a big display and not an awkward eye movement.

There are still barriers:
- showing other people stuff
- the social awkwardness of sitting with a keyboard, seeming (to others) to be staring into empty space while working
- it might feel like wearing a hat
- what's the effective pixel density like?


This is the clear winner for me. A portable, wireless keyboard + hololens = the biggest virtual desktop in the world that also doesn't shut you out from reality / coworkers / your desk / etc. Whether or not the more ambitious use-cases ever materialize, I'd be happy to trade in my macbook for this.


The focal point is a problem. It is advised to keep your screen at least 65 cm away so your eye doesn't have to accommodate (coincidentally, about the length of your arm). A big problem with Google Glass is that the focal point is a few centimetres away, and it is known to give headaches. The closer the screen is, the more myopic you become.

It is absolutely possible to use a lens system to move the focal point out to a distance, but that hasn't been done yet, probably because you can't do it across a 120x120-degree field of view.
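The focal-distance argument can be put in numbers: accommodation demand is just the reciprocal of the viewing distance in metres (dioptres). A quick sketch, with illustrative distances rather than measured product specs:

```python
# Accommodation demand (in dioptres) is the reciprocal of the viewing
# distance in metres. Distances below are illustrative, not product specs.

def dioptres(distance_m: float) -> float:
    return 1.0 / distance_m

desk_monitor = dioptres(0.65)  # the ">65 cm" guideline: ~1.5 D
near_display = dioptres(0.05)  # a screen focused a few cm away: 20 D
relaxed_gaze = dioptres(6.0)   # "optical infinity" for practical purposes

print(f"desk monitor: {desk_monitor:.2f} D")
print(f"near display: {near_display:.2f} D")
print(f"relaxed gaze: {relaxed_gaze:.2f} D")
```

A young adult can accommodate roughly 10 D at most, so a display whose focal plane truly sat a few centimetres from the eye would be unviewable without relay optics; a lens system like the one mentioned above would move the virtual image out toward the relaxed-gaze end of this range.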

I wouldn't work on a virtual screen for long hours until there's an answer to that. But once it is solved, I can see how we'll all become Holographic addicts ;)


Surely they must have sorted out the focus issue for HoloLens -- otherwise that Minecraft demo where the castle is on the table would have felt very trippy for the journalist (if you consider where the castle touches the table, you'd have a joint that is both several feet and a couple of centimetres from your eye).


I had an idea to do something similar once for a Uni dissertation, involving a rift mounted with two cameras to do a very hacky and cheap prototype version of what you've described.

My supervisor shot it down because "Google glass will do that" :(


I think it's hard to say whether nobody used it because there were no killer apps, or whether there were no killer apps because no one wanted to use it.


Yeah, the screen wasn't that great and was awkward to look at. The photo-taking seems like the best part of it -- and for that you don't really need all the rest of the complexity. Too bad you also look like a douchebag wearing it, especially in SF where tech is stigmatized enough. Reminds me of a joke a comic told last night at an SF standup spot: "so I was on Google last night... Do you guys know Google? It's this company making people homeless in SF"


The biggest mistake Google made was giving Google Glass to Robert Scoble.

http://www.ibtimes.co.uk/google-glass-2-0-eight-things-addre...

"7. On that note, don't give one to Robert Scoble"

It's not that Google Glass intrinsically makes you look like a narcissistic douchebag, it's that the first people to show them off were narcissistic douchebags, posing with their smug self important "look at me I want your attention" expressions, who crystallized the image of the "glasshole" in everyone's minds.

http://whitemenwearinggoogleglass.tumblr.com/


What it reminds me of is Google's Project Tango, which also has NASA's JPL listed as a partner [1]. Also worth mentioning is Johnny Lee, who worked on Microsoft's Kinect and is now working at Google on Project Tango.

[1]https://www.google.com/atap/projecttango/#partners


We have a Tango tablet.

First thing: it crashes, a lot. We're talking 2-5 minutes of active 3D scanning before the structure sensor driver bites the big one, requiring killing and restarting the service along with all associated programs.

Also had hard freezes as well.

It's "Google quality", in other words: crap. It might get better. It probably won't, given their track record with consumer devices in "Google beta" (read: alpha).


As far as I can tell, a big difference is that Google's concept video looks very little like actually using it, whereas Microsoft's is clearly just a better version of their live demo. The live demo was amazing.


Project Natal videos were also amazing.


Project Natal was amazing. The Kinect dialled most of the cool stuff down for cost reasons.


It's interesting how they go out of their way to describe this as NOT augmented reality when... that's exactly what it is. The only time the term appears on the product page is here:

"Microsoft HoloLens goes beyond augmented reality and virtual reality by enabling you to interact with three-dimensional holograms blended with your real world. "

I understand the marketing reasons for this, but contrast this with the fact that Oculus embraces the term "virtual reality" despite the baggage that comes with it and the fact that they can't trademark it. I guess AR never caught the public imagination like VR did.


AR is traditionally a 2D projection onto 3D space. This is a 3D projection that you interact with. Sure, it's still AR on some level, but I think differentiating the product makes a lot of sense. My idea of AR is a boring HUD-like system that fits with things like flying fighter jets. This holographic projection is different, and notably so.

MS could find the middle ground between lush 3D VR-like environments and the real world. I find things like the Oculus and other HMDs to be terribly claustrophobic and dizzying. Not to mention really asocial. I don't want to mount a tissue-box-sized thing to my face that removes the real world. I'd prefer having the real world still here with the digital world tied to it. There just seems to be something wrong with giving software my entire field of view. I don't want to stare into the same Unity3D-generated environments. I want to augment my real-world life, not replace it.


>MS could find the middle ground between lush 3D VR-like environments and the real world.

Back around 2000 a Slashdot article reported an attempt to create a human-sized hamster ball constructed of a semi-opaque projection-friendly surface. The ball would sit on some sort of roller mount. Five projection screens surrounding the ball would project a virtual environment over the "port", "starboard", "fore", "aft", and "north" surfaces. A human occupant would enter the ball, and, based on his movement detected through the roller base, be presented with a continually updating holodeck-like virtual environment.

Perhaps something like this is still in development somewhere.


Search around for omnidirectional treadmills. There are quite a few different models in development.


When we say virtual reality, there is an implicit expectation (at least in my mind) that it's an always-on experience. The video here did show some of that too, but I think scoping it to specific tasks, at least initially, would be very powerful. So you don't wear these bulky, dorky glasses/headsets all day long, but only when you need to do specific things. And then you return to your normal life.

In the early days of computers, usage was like that... very task-oriented. When you were done, you went back to your non-digital life. It's only when technology and public perception change that you start carrying PCs in your pocket all day long, like we do today.


Right - but let's say they get less dorky and more comfortable - and really do have the visual quality we really want - objects look solid and real.

That seems much more useful than a VR you have to unplug from the real world to immerse yourself in - at least for collaborating with others on real-world things... like the example with the motorbike design.


Makes me think of something I read about from CES.

Basically a helmet of sorts where you dropped a smartphone into a slot at the top, and some semi-transparent lenses in front of the wearer's eyes then made the phone screen appear to float in front of said wearer.

Edit: found an article about it: http://www.pocket-lint.com/news/132280-seer-the-augmented-re...


In the NASA promotional video, the project lead says they plan to implement the tech in July.


With the camera, it's easy to understand how they make the objects feel solid - they just overlay them on the video feed. How that would work on the glasses, I have no idea. Does it have LCD shutters that block incoming light where they want to add a "solid" object?


If you pause at 9 seconds in and check out the rig they're using to film the feed where the virtual objects are visible, it looks as though they are using the same lenses that are on the goggles to display the virtual objects in front of what the camera is recording. So I would bet that what we are seeing in the video is exactly what the user is seeing and not something added on the fly by other means.


That is amazing. So you have two (or more) people that hook into the same "scene", with the kick being that they see it from a different angle. Wow, very nice.


This was something that I think will be a bigger deal. They can communicate with one another, and potentially split the processing load. I'd love to play an RTS where my device processes my pieces, I see their backs and my opponent sees their fronts.

Of course, the caveat there would be that my command console would be invisible or hidden still from my opponent. Providing selective vision would make for interesting game possibilities.



Nope. Those lenses are above the camera's main lenses. Whatever those circles do, they are not generating the images for the video. Plus, only one would be needed for that - the camera has only one "eye".


I would like to see video recorded direct from the "eyeball's eye view", because I agree - the video looks generated. As you say, you can see areas where darker superimposed objects overlay lighter areas of reality in the visual field. Can this technology really do that? If so- wow. If not, still wow but just... less wow.


I'm so excited to see this. I cannot wait to try one of these on for the first time. I'm going to bet writing apps for this kit will be a lot of fun.

And all the C# .NET developers can say it again: thank you, Microsoft, for committing to one platform, one store, all device types. I think it will be a while yet before there are thousands of developers working on this, but it will grow exponentially for a while.

If this is comfortable for extended wear, it's just going to further increase the value of remote workers. You can't beat time zones, but for everything else, there's holograms.

I really, really want to understand how high fidelity this is.

One point is we aren't seeing it used for video conferencing between two people each wearing a band. Probably because face-on you look pretty silly in it. So it's not quite a natural way to meet people. Just yet. I think it has world changing potential.

But in 2016 or whenever, this will not be selling for $499 or $899, it's more like $3299 I would guess. And really, that's more like the price level we expect for a very high quality gear. Actually, you could realistically go upward of $10k if the quality truly reflects that price.

So of course the next thing I did was check MSFT stock price. $382 billion market cap. This is a $100 billion idea, might be a good time to get back in. I guarantee you, the market has not fully priced in holograms. Just saying it, you know it's true. For now I will choose to believe the hype, because eventually, absolutely, this is all possible.


The goofy look is really just an image processing problem. If it has a camera watching the face (I know it has one watching the eyes), it wouldn't be impossible to reconstruct what the face looks like without the goggles.


"the market has not fully priced in holograms"

Did you factor in the potential disruption by Magic Leap? Retinal projection could be a better bet long-term.


"Developer, developers, developers."

But imagine picking up an original iPhone today and comparing it to an iPhone 6. Now imagine HoloLens going through that refinement process.

If the platform is as powerful as it sounds, and you can openly develop software which effectively leverages that platform, how is that not awesome?

I mean, it's an entirely new hardware form factor that we all get to hack on and play with, and it's not a sure thing, but if it goes well, it could become mainstream and open an entirely new era of computing.

That's the vision anyway... like I said, I'll choose to believe the hype because it's more fun that way.

Thinking about the design a bit, it's interesting that all the compute is on the band rather than broken out to a separate box communicating over wireless. They mention fans blowing hot air away from your head, and then add in the battery too... how long can it run?

I think their use cases are a little weak. There are much more impressive things you could do with this kit.

Also, what does it look like in a dark room?


This is a $100 billion idea? Maybe, sure. But how is this, as opposed to Oculus or Glass or Magic Leap or or or, worth $100B?


It was demoed just now, here: http://news.microsoft.com/windows10story/

It looks really, really impressive! It's a see-through pair of glasses, so not like Oculus, and they said they invented a new technology: the HPU. The glasses will have their own GPU, CPU and HPU, and will be wireless.


HPU isn't really much of a "new technology". It's just a custom coprocessor, not all that different from the motion coprocessor in the iPhone (except designed for video processing, so presumably a lot more powerful).


Except Apple's motion coprocessor is just an ARM Cortex-M3 microcontroller...


Any information on whether it truly is holographic? For me this is only satisfied if it has: 1) binocular disparity, 2) accommodation, i.e. it reproduces a light field like a real hologram does.


The only thing I've seen on that so far is from the article:

> To create Project HoloLens’ images, light particles bounce around millions of times in the so-called light engine of the device. Then the photons enter the goggles’ two lenses, where they ricochet between layers of blue, green and red glass before they reach the back of your eye. “When you get the light to be at the exact angle,” Kipman tells me, “that’s where all the magic comes in.”

They could be doing real holographic images with that description, but who knows.


It starts around 1:37. The live demo starts around 1:53.


In which video, I looked at both on that page - http://news.microsoft.com/windows10story/ - and neither had a holo product demo at that time mark.


The live event webcast. I meant 1 hour 37 and 1 hour 53 minutes.


It looks impressive, but it would help to be able to tell the hype from reality:

"Sensors flood the device with terabytes of data every second" ... somehow I doubt the aggregate bandwidth of the device is > 1TB\s

That makes it harder to know how accurate the rest of the 'explanations' are.


I believe that the TB reference comes from listening to Alex Kipman at the Microsoft announcement event.

In speaking about the so-called "HPU" (around 01:50 in http://www.theverge.com/2015/1/21/7867593/microsoft-announce..., second video) Kipman mentions "processing terabytes of information from all of these sensors". This is straight from the proverbial horse's mouth, and while it seems hype-ish it should be looked into.

Anyone in here know about this HPU?


He didn't say "per second" though, so the OP's quote is just a piece of bad reporting.


I'm pretty sure he didn't say that the HPU was processing it either. He said something along the lines of "when we look around a room our brains process terabytes of data".


Could the sensors be indeed flooding the device with terabytes of data, but the device can only sample that data at a more reasonable rate?


Well, from that perspective, an analog temperature sensor is flooding your ADC with infinite GB/s.

For further comparison: the fastest CPUs you can get nowadays have an aggregate memory bandwidth of ~90 GB/s using four memory channels.


If you have enough pins, a custom ASIC can do just about whatever you want. The data flowing into the HPU is likely huge, but it is processed down into something the CPU can deal with.


Yeah, well, 4x DDR4 DIMMs have 4x288 = 1152 pins. If you want to be two orders of magnitude faster than that, you're talking on the order of 100 000 pins, which is just absurd.


To give an example, a raw 4K stream at 12 bit is about 500MB/s, so unless it has 2000 4K cameras, unlikely.


I think you forgot to multiply by a frame rate, otherwise you don't have a "per second" unit. At 60fps, it's 29.66 GiB/s.


Original Kinect data rates, measured empirically by a third party:

Colour: 10.37 Mb/s
Depth: 29.1 Mb/s
Skeleton: 0.49 Mb/s

So roughly 40Mb/s (4 * 10^7)

Does this device produce > 1Tb/s (1 * 10^12) or 25,000 times as much data? I'd be surprised.
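As a rough sanity check on the "1 Tb/s" figure, the comparison above can be redone in a couple of lines of arithmetic. The sensor parameters below are deliberately generous guesses for illustration, not actual HoloLens specs:

```python
def raw_stream_bits_per_sec(width, height, bits_per_pixel, fps):
    """Raw, uncompressed data rate of a single video stream in bits/s."""
    return width * height * bits_per_pixel * fps

# Hypothetical depth camera: 640x480, 16 bits/pixel, 30 fps.
depth = raw_stream_bits_per_sec(640, 480, 16, 30)     # ~147 Mb/s

# Hypothetical 4K colour camera: 3840x2160, 36 bits/pixel, 60 fps.
colour = raw_stream_bits_per_sec(3840, 2160, 36, 60)  # ~17.9 Gb/s

total = depth + colour
print(f"depth:  {depth / 1e6:.1f} Mb/s")
print(f"colour: {colour / 1e9:.2f} Gb/s")
print(f"share of 1 Tb/s: {total / 1e12:.2%}")
```

Even with generous assumptions, a handful of raw camera streams lands around 2% of a terabit per second, which supports the skepticism above about "terabytes every second".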


Maybe we're counting photons now.


Terabytes is out there, terabits not so much... I'm developing a single chip with 3Tbit (384GBytes/s) aggregate external (chip to chip) bandwidth, and with 8TByte/s aggregate internal (core to core) bandwidth.


I feel like you're the kind of guy (or girl) who could explain really complex nerdy things to me on a regular basis and I'd be ok with that


So many questions. I can understand light reflecting into the eye from a microdisplay (using the same principle as a car HUD), but are they actually creating opaque imagery as well? How is that physically possible?

Then there's 3D spatial interaction: In my experience with Leap Motion, Kinect, and other competing tech, the level of accuracy still limits interactions to broad gestures. If they've made a big enough leap in this domain to enable precise object manipulation, that's a major achievement on its own.


Maybe it is just a two-layer panel: first layer hiding the background ...

http://gd3.alicdn.com/imgextra/i3/1069821249/TB2UIWkbXXXXXXZ...

... and the second layer glowing to show objects.

http://img1.mydrivers.com/img/20140710/c72872d7ae4242b5965e1...


I'm not sure they're using that second layer -- is there space to put optics behind that layer to focus it into the eye?


The WIRED article seems to imply that this device has outwardly-facing cameras that use Kinect-like technology to track the operator's hands, which is probably how they are able to let you interact with the projections without using some kind of wand or controller.

I could see some "high-precision" gloves being an optional accessory, with some kind of tracker markings to allow even more precise control (maybe for medical applications or something).


Not necessarily; they are likely using the glove-less Handpose technology that Microsoft Research developed.

http://research.microsoft.com/apps/video/default.aspx?id=230...


I would almost imagine gloves like this would be a requirement for precise control. The Kinect had seemingly pretty decent limb sensing from the few minutes I played with it (it was able to accurately model the bones in my fingers moving). However, it had an advantage in that it was a few feet away, viewing you straight-on.

This device will be looking nearly straight down instead, and it seems to me that your limbs and fingers will often occlude what is behind them. I doubt the twin cameras used to sense depth would be far enough apart to always see your fingers behind your other arm, for example.


If you want to see similar technology that's being used now, check out the Leap Motion being used with the Oculus Rift. They mount it on the front. I've personally only got experience with the rift (I have a DK2), but I've heard good things.

I think Oculus has it right, though, in that any HMD that does positional tracking needs super low latency to feel natural. Should be a little less problematic since the whole world wouldn't lag, but I'd be disappointed if the virtual overlay had perceptible lag after using some of the better experiences on the rift.


As I understand it, the light is not merely being projected or reflected, but dispersed on a coordinate system, so the display actually illuminates where it was transparent before. That would interfere with natural light coming through (it's tinted as well) allowing the 'hologram' to obscure the real world. I could be wrong, though.


Unless there is some curious property of light I don't understand (and given my perplexity at radial polarization, there may well be), there's no way that external light coming into the glasses can be diminished by internal light emitted by the glasses.

For instance, if you're looking at a white wall in the real world, there's no way to render a black shape in front of it. You can only add luminance to it, in the same way a video projector can only add luminance to the screen it's projecting an image onto.
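The additive-only constraint being described can be sketched as simple per-pixel compositing. The LCD-style mask in the second function is the hypothetical blocking layer discussed elsewhere in the thread, not a confirmed HoloLens feature:

```python
# Per-pixel luminance compositing, values in [0, 1].

def additive(background: float, overlay: float) -> float:
    """See-through additive display: the eye receives background light
    plus whatever the display emits; nothing can be subtracted."""
    return min(background + overlay, 1.0)

def masked(background: float, overlay: float, mask: float) -> float:
    """Hypothetical shutter layer: attenuate the background first
    (mask=0.0 blocks it fully), then add the display's own light."""
    return min(background * mask + overlay, 1.0)

white_wall = 0.9   # bright real-world background
black_shape = 0.0  # virtual object that should appear black

print(additive(white_wall, black_shape))     # 0.9 -- wall shines straight through
print(masked(white_wall, black_shape, 0.1))  # ~0.09 -- shape can actually look dark
```

Without some kind of mask, the best an additive display can do against a bright background is outshine it, which is why demo rooms with subdued lighting matter so much.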


If the projection surface in front of the eye is also an LCD, it's possible to block out part of the background at the same time you project something onto it. I don't know what happens in this particular product, but from the Mars demo, it sounds like they can do some display of dark objects:

"The sun shines brightly over the rover, creating short black shadows on the ground beneath its legs."

It's possible that they are just getting that effect by making the Mars surface bright, but they could also be actively blocking light from the shadow regions. We'll have to wait for more details.


Certainly you can only add luminance, but consider that the goggles themselves are tinted, and the brightness of a display an inch from your eye will likely be far higher than the light bouncing off the wall. So while you can't "render" black, you should be able to simulate the darker part of the spectrum using negative space. That's not ideal, of course, but it's something.


LCD panels work by filtering out light emitted by a near-white backlight. If they are bouncing the incoming light around, they could be running it through such a panel to dynamically reduce light by colour. They could then selectively add light using an existing backlighting setup. It's at least theoretically possible, and I'm hoping that they've actually accomplished something like it.


This is correct. That's a big part of what Magic Leap is supposedly working on, being able to black out the background so that objects don't have the ghostly hologram look.


Yet the demo videos show some virtual objects that are darker than their background. That may be vaporware. So far, there seem to be no images on-line actually taken through the device. Has anyone seen any?

This matters. If it can only brighten things, it can only overlay bright things on top of the real world, which is what Google Glass did. Fine detail won't show up unless the background is very dark or very uniform.

If you look carefully at Microsoft's pictures, the backgrounds are subdued gray, black or brown, and free of glare. The press was forbidden to take pictures of or through the device, and their cameras and phones were confiscated for the demos. Microsoft used custom-built rooms for the demos, giving them total control over the contrast and lighting situation.

It could still work, but it's probably not going to look as good in the real world as it does in the demos.


If they're making it opaque, I imagine they're doing it the way I've wanted to for a transparent monitor: a second pass-through/reflective LCD just after the microdisplay.

One like the ones on the Pebble watch would allow you to selectively let light through from the outside, or let it be reflective and show the microdisplay instead.


In the picture of the marscape in the article, you can faintly see an ordinary room overlaid on the marscape (most visible in the upper-left part of the graphic). I took that to possibly mean that the holograms were transparent, not opaque.

Or maybe they're just trying to indicate the unreality of the marscape.


It's marketing - an artistic rendition, a mockup of what it's supposed to look like in 3 years. Just like Project Natal had no lag and super-high resolution.

Or do you believe they somehow manage to project full HD per eye with perfect tracking?


It may be set up like this mock up http://i.imgur.com/wD9b189.png

The OLED display facing away from the eyes, bouncing back from a secondary surface.


The demo video below specifically shows translucent as well as opaque. Watch when the woman is interacting with the real motorcycle and extends its height:

http://www.microsoft.com/microsoft-hololens/en-us

I'm skeptical we'll have anything remotely this usable or practical in our hands in the next 2 years.


Now it makes sense that Google just announced they're stopping retail sales of Glass. They did not want their device compared to the new product from MS. Actually, I see this new device as more likely to succeed than Glass. These devices are competing for the same markets: healthcare, education and entertainment, and there Glass is somewhat underpowered.


Google's response would presumably be their partnership with Magic Leap.


"holographic", "cinematic reality" - all just words used to describe augmented reality.


1 year to product announcement, but the whole retina-projection thing might be game changing.


I met a guy in 1998 who had done his Ph.D. building full-color direct-to-retina projection. He was doing VR with it back then. That's how long this technology has been in development.

He did it at MIT and if I understood correctly, the US military bought everything (he had no say in the matter).


Do any of the Google Ventures investments ever have partnership terms, or cross licensing?


Google's investment wasn't through Ventures.


One is sci-fi reality, the other is putting a screen right in front of your eyeball.


Do you mean that there are spies informing Google about Microsoft's new product?


It's possible... they dropped their hugely incomplete Google Wave rotten egg on the public only a day before Microsoft announced MSN's Bing rebranding. Various project domains (waveprotocol.org) were only registered 2 weeks prior. Wave seemed like a weird and frivolous product until you look at it from this angle, at which point distraction seems like it could have been the only reason it was released.


There have been things in the past indicating that large tech companies are somewhat aware of big announcements before they happen. Look at Google and Amazon timing their cloud pricing announcements within 24 hours of each other, for example.


What are your thoughts about this? I'll tell you mine.

I just watched Google slowly, painfully realize that Google Glass isn't commercially viable (in its current form and to the general public). I can't help but feel that this is a larger, albeit more immersive, version of Google Glass.

I only make this point in regards to any plans for a sci-fi, everyday wearable HUD. There is obviously great demand for this kind of immersion within the gaming community (although I would argue that Oculus will have market control for the foreseeable future).

My opinion is that the kind of augmented reality we all dream of, that sort of Matrix-like constant data download, won't become a reality until someone figures out how to take it (visibly) out of human interaction, i.e. with smart contacts, etc. The current tech is just too intrusive in normal human interaction. My understanding is that this kind of tech is still a long way off.

I'd love to be wrong though!


I think this is all about how it's showcased.

Google Glass was meant to be used in personal interactions, and that put a social barrier in the way of what was some cool technology. It seems it was just too awkward to be 'glassing' in public, no matter how useful or cool the technology was.

HoloLens is pitched to be used in the home and in the workplace, where you can comfortably immerse yourself in that experience without the social implications. This allows the technology to be judged on its own without mixing in the social implications.

I have high hopes for this; at least it's a very different experience than what most others are doing (though it's obviously taking cues from VR, augmented reality/Google Glass, etc.).

I agree that for this to get 'really big' the form factor has to be a lot more portable, but this is a great stop-gap.


I think this product hits the sweet spot between the Oculus, which covers/masks your eyes completely, and Google Glass, which had a relatively "small", non-immersive screen.

The glass front allows these holograms to appear more naturally to the user, all while keeping you aware of your surroundings.

The real challenge is not the hardware, but the software and "experiences".


I actually like the fact that it looks a little clunky. No one's going to want to wear this in a bar or restaurant, so the odds of a "Glasshole"-like backlash seem unlikely.


What I find interesting is Microsoft launching this technology just after Google dropped theirs. I think the focus on business is the right way to go in this case; not everything can start from the consumer. Look at how the PC started as a geeky thing for scientists and is now in our pockets and on our wrists.


I didn't know Google Glass combined a 3D Camera with full augmented reality vision.


I really hope the pinnacle of AR is not as intrusive and uncomfortable (for some) as placing plastic on my eyeball (which seems to be the go-to for many people). Either equip the environment, give me eyewear (goggles, then lightweight glasses), and eventually (may be awhile) augmentation implants.


> eventually (may be awhile) augmentation implants

My sci-fi holy grail for this type of thing is a brain-machine interface in the form of a simple hat.

Early versions may look like this: http://i.imgur.com/y3v6hxB.jpg


I think you're exactly right. Smart contacts are going to be the next huge computing revolution (punch cards/no screen -> screen in front of you/keyboard -> small screen in your hand -> virtual big screen on your contacts).

But we're just not there yet tech-wise. And there have to be some intermediate steps along the way (i.e. big ugly glasses/goggles), because it's just going to be too hard to make going directly to smart contacts commercially viable.

What Google has shown is that the public won't accept this until it's _awesome_ (and Glass was not awesome). So perhaps the gaming route (Oculus/MSFT) is the path we'll have to take.


I think you're right. If I'm home alone, I don't mind wearing something on my head that covers my eyes. I don't want that if I'm out and about with other people.


They have Minecraft -- I wonder if they will be able to use it to introduce the product to the huge young generation who grew up on the game.


Well, I'm impressed. I wasn't during the initial stock promo videos, but once she put the actual device on and you could see the quality of the hologram, the spatial tracking, etc., pretty impressive. We're also not getting a feel for the sound system built into it. They mentioned in the demo that the sound is also projected virtually from where the hologram is located.

I'm curious what the price tag will be. And battery life?


Apparently there will be both consumer and enterprise pricing, whatever that means. I'm guessing $500 to start at consumer level with lower resolution, less battery life, etc. Twice that for Holo Pro.

Sound is easy enough - a pair of decent headphones can produce fairly good 3D sound, and combined with the illusion of depth provided to your visual system, I suspect it will be extremely convincing.


I'd assume for sound that it doesn't use headphones so much as bone conduction so that it doesn't get in the way of natural sounds. From the design, it seems that there's some kind of device-skull contact most of the way around.


Should cost at least as much as Kinect and Oculus combined.


From the Q&A session that followed:

   Q: Will HoloLens be priced as a consumer product?
   Nadella: It'll be priced for both enterprise and consumers to use it.


What sort of use case is this for? The demos talked about in the article seem rather specific. I don't see how this will appeal to mass consumers.


The tech is cool. The UI as described is for campy, unergonomic noobs only.

"he is trying to see Project HoloLens as if for the first time" - this is a rather significant problem when talking about the real world, as per the article. Consider the electrician discussion in the article. Real electricians want / need / expect real tools, not noob-friendly Fisher-Price toys.

It's a tool for camp. As Wikipedia says, camp is "based on deliberate and self-acknowledged theatricality." And camp doesn't appeal to everyone, all the time. If they were making a UI based on camp in a cultural era of camp ascendance, let's say the late 60s/early 70s in the USA, then this would be a win; it would be "groovy", it would be "boss". But... it isn't.

Another way to describe it is ergonomic problems. I'm used to expressing myself, however poorly, via staccato finger gestures at 104 keys at a desk at 100+ WPM. Any failure is a failure of my own creativity, not the user interface of my keyboard, which seems fairly capable in better hands. Now I must downshift and do interpretive dance, or gang hand signs, to communicate. No, I think not. That's aside from gorilla-arm problems limiting duration and comfort (no 12-hour shifts at the computer, for better or worse). And the speech interface limits it to quiet home use, while alone.

The optical technology sounds incredibly impressive; I'd love to play Minecraft wearing it. Or Forza, or a zillion other games. It's just the UI that sounds truly awful.


Think long-term... better voice interfaces, brain-computer interface.

This isn't the end-game. It's going to be slightly weird, but the potential is there.


I'd imagine that the target is a replacement of the PC. You could (theoretically) have the same utility as a PC with a wireless keyboard/mouse paired with the headset. On top of that is all the holographic applications that will be figured out.


Virtual, not wireless ;)

Think this - a size-adjustable holo keyboard, ergonomic and natural of course.


For any serious work (coding), I'd still greatly rather use a laptop with a real keyboard.


Why the laptop? With the AR overlay you just need a keyboard.


Couple things that spring to mind:

-working in restricted spaces like planes

-shared gaming worlds

-CAD/architecture/3d sculpting at 'real' size

-virtual help (Wired gives the example of remote help with electrics)

-home improvement (virtual paint/furniture)

-weirdo ghost in the shell style window management


Remember the cat-petting game that was the first demo for the Kinect?


One general HUD use case I think would be beneficial is a driving aid: pedestrian and vehicle detection (or in rural areas animal detection). Whether the HUD is goggles or projected onto the windshield the extra data would be useful when driving. Automatic braking systems are great and all, but if you can see a deer on the side of the road in the distance via a HUD at night you can slow down well in advance (whereas the automatic braking system would engage when the deer crosses the road).
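To put rough numbers on the "slow down well in advance" point: stopping distance grows with the square of speed, so even a modest increase in detection range buys a lot of margin. A sketch using standard kinematics (the reaction time, deceleration, and headlight-range figures are illustrative values, not measured data):

```python
def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_ms2=7.0):
    """Total distance to stop: reaction distance + braking distance.

    Braking distance comes from v^2 / (2*a); the 1.5 s reaction time and
    7 m/s^2 deceleration are typical illustrative values, not measured data.
    """
    v = speed_kmh / 3.6              # convert km/h to m/s
    reaction = v * reaction_s        # distance covered before braking starts
    braking = v * v / (2 * decel_ms2)
    return reaction + braking

# At 100 km/h: ~41.7 m of reaction distance plus ~55.1 m of braking.
# So a HUD that flags a deer 150 m out leaves comfortable margin, while
# low-beam headlights at night (illustratively ~60 m of useful range) may not.
print(round(stopping_distance_m(100), 1))
```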


I would want to see more than a couple of independent tests verifying this is actually safer before it were allowed on roads. I can see the potential benefits but there are enough terrible drivers already without introducing more distractions.


Think about the holodeck from star trek. From an entertainment standpoint, you could immerse yourself in another world and solve a mystery. You could also learn all sorts of stuff with live instructions while working with physical tools when doing things like woodworking or electrical work etc.


Porn. The first use case is always porn.

And it seems like a pretty good use case here.


Tactile feedback is crucial here.


"Holographic" live shows may be where it shines.


Ten years ago, I would have said the same thing about a consumer smartphone.


Both portable cell phones and PDAs existed 10 years ago. The smartphone was a logical extension of those paradigms. AR Glasses (for lack of a better term) are still very young.


Ten years ago, HTC had phones with touchscreen, WiFi, camera, installable apps and miniSD support.


VR is going to be a real game-changer for entertainment, but it's AR that's going to change how we work and live our everyday lives. The key to making VR work is reducing lag, but this is even more important for AR, which carries additional complexity in sensing and blending the virtual with the real. The difficulty of sensing and correctly modelling the real world in real-time is immense. There aren't many companies I'd believe could make the leap directly to functional, useful AR, but MS's experience in gaming, with the Kinect, gives them a huge head-start. This could be the real deal!


>VR is going to be a real game-changer for entertainment

I think the jury is still very much out on this. Personally, I hate VR. It's claustrophobic, asocial, dizzying, and I don't like the idea of giving software my entire field of view either. Cheap HMDs are going to be nice, but I don't know how well they'll do outside the hardcore gamer demographic. I can't imagine watching a movie with friends with each of us wearing these things, or even playing a console game with the other players in the room.

AR gaming, on the other hand, isn't something that gets much press, but I can really see some novel uses. Imagine playing something like "Gone Home" but in your home. Clues hidden in your real-life closets, drawers, etc. Or peering into the mirror in your bathroom and seeing the main character's face instead of yours. Or something in the background that's not really there.


I wouldn't watch a movie in the same room as friends with an HMD, but I could watch a movie with someone in a "virtual theater" when they're 1000 miles away.

I don't think we should write it off so quickly; it's like saying "Text messages are pointless, they don't carry inflections and emotion like phone calls. I can't see myself texting anyone." There are things you can do in real life but not in VR and there are things you can do with VR that you can't do in real life.


What is the killer app that will get people to go out and buy these? I don't think Holo Studio is it.

Where is your imagination racing to?


Depending on the resolution, it could replace your monitors at work (assuming the headset is light, comfortable, etc). Using a wireless keyboard and mouse, you'd have your computer with you anywhere in the home or office.

As a developer, having a wall of holographic desktop screen space in front of me would be amazing. My home office would look much better too!

I have no idea how far away this version of HoloLens is from that reality though.


I think the 'depending on the resolution' is key. It's not really a matter of software to me; it's whether or not the hardware can deliver the necessary resolution to replace a monitor that's typically 1.5 ft from your face. If you can break the dam by executing on the hardware, the software will flow. (I wonder how HoloLens compares to Magic Leap's technology.)

I think it would be very cool to just have 3 tripods on your desk (a ball on a stick) that represent 3 screen spaces. You could reach out and move them around, telescope them up and down in the physical world, and the headset could use them for triangulation to render an accurate monitor on each. You'd still have a keyboard and mouse in the first iteration, and then slowly over time give way to other forms of input.
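The tripod idea above amounts to anchoring a virtual quad to tracked fiducials: from three marker positions you can derive an origin and axes for the rendered monitor. A minimal pure-Python sketch (the function names and marker convention are mine, not any HoloLens API):

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    m = math.sqrt(sum(c * c for c in v))
    return (v[0] / m, v[1] / m, v[2] / m)

def screen_frame(p0, p1, p2):
    """Build a virtual-monitor frame from three tracked marker positions.

    p0 = bottom-left, p1 = bottom-right, p2 = top-left (convention is mine).
    Returns (origin, right, up, normal) - unit axes for rendering the quad.
    """
    right = norm(sub(p1, p0))        # along the bottom edge
    up = norm(sub(p2, p0))           # along the left edge
    normal = norm(cross(right, up))  # faces the viewer
    return p0, right, up, normal

# Example: three markers flat on a desk, one metre apart
origin, right, up, normal = screen_frame((0, 0, 0), (1, 0, 0), (0, 1, 0))
```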


Thinking about how our eyes really work - you don't need the same resolution. You only need proper eye tracking and the right amount of resolution in the center of your vision (or wherever). You only need it to render what you are looking at - I'm sitting about 2 feet away from a 27 inch screen.... I'm never looking at the entire thing in such a way that I need all that detail at every point. Sure, I need it to be there when my eyes dart around... but as long as that's done, it will look just as real.

Given something more adaptive, there's no reason you couldn't have a ginormous holographic wraparound workspace... or whatever your imagination can come up with.
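The scheme described above is foveated rendering: spend full shading work only near the gaze point and degrade gracefully toward the periphery. A toy sketch of the per-pixel decision (pure Python; `shading_rate`, the eccentricity thresholds, and the `pixels_per_degree` value are all illustrative, not from any shipping headset):

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, pixels_per_degree=40):
    """Pick a shading rate for a pixel based on angular distance from gaze.

    Returns 1 for full resolution near the fovea, then coarser rates
    (one shaded sample per 2x2 or 4x4 block) in the periphery.
    The eccentricity thresholds, in degrees, are illustrative.
    """
    dist_px = math.hypot(px - gaze_x, py - gaze_y)
    eccentricity = dist_px / pixels_per_degree  # rough angular distance
    if eccentricity < 5.0:       # foveal region: full detail
        return 1
    elif eccentricity < 15.0:    # near periphery: quarter the shading work
        return 2
    else:                        # far periphery: sixteenth
        return 4

# A pixel right under the gaze point is shaded at full rate;
# a screen corner far from the gaze gets the coarsest rate.
```

Real implementations hinge on the eye tracker updating faster than the eye can complete a saccade, which is the hard part raised further down the thread.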


Great point - MS has been working on foveated rendering for a long time.


I hadn't heard of that - here's a Microsoft Research video: http://research.microsoft.com/apps/video/dl.aspx?id=173013

Sadly the video player is horrible, and I didn't see this video on YouTube.


The problem with eye tracking is saccades, which

'can respond at frequencies up to 150 Hz or higher in response to individual action potential pulses of less than 3 milliseconds'


Very much this, being able to have 6 or 8 virtual displays projected on a wall would be great.

I know that this tech could do so much more (one giant display the size of the wall), but having different displays is nice because it allows me to silo things into different categories.


Many years ago (close to 20) I saw a cool demo of a palm-able chording keyboard, and looking around it seems that a few brave souls have actually attempted to bring the idea into production.

A good keyboard-only UI (I don't trust small-scale pointer input), and you could move the entire traditional PC interface over into a very discreet package.

The web would be the one place where changing UI paradigms would be hard; the web pretty much insists on the fine motor skill of clicking one of hundreds of visible links on a page.


3D workspace is it!

It's what I first thought of when I tried the Youtube app on Google Cardboard. The app has you in a theater with a main viewing screen, but then littered 360 degrees around you are other videos you can watch or cycle through.

All I could think was how awesome it would be to have half a dozen virtual screens (that I could manipulate) for programming. It would well justify the cost of a good device because six monitors would be probably just as expensive.


>I have no idea how far away this version of HoloLens is from that reality though.

If the Oculus Rift is any indication, this is still a long way away, unfortunately.


After seeing that Autodesk is finally iterating on some of their software to support the rapid prototyping/3D printing community in major ways, I'd like to see what kinds of tools Autodesk could come up with. I see Holo Studio as the MS Paint of "holographic computing" or whatever we're calling it now.

Is that a killer app? I guess not, but if this HoloLens device is really running Windows 10, and the APIs are baked into the OS, that puts it MILES ahead of where Google Glass was, and developers will be able to do something more interesting than take photos and share them on Google+.


Sculpting in Virtual Reality :

https://www.youtube.com/watch?v=jnqFdSa5p7w


"Adult entertainment" and, no, I am not joking.

That would be a major market for this technology.


Most new technologies were initially funded by pornography. Photography. Wire recordings. Videotape. Maybe not telegraph but who knows.


>initially funded by pornography //

Can you expand on and maybe source that comment? "Initially funded" to me suggests that the R&D was funded by people/companies for the purpose of carrying pornography.

"Popularised by" is often suggested, or even "first exploited commercially for" - either of these seems far more likely. I just can't see Daguerre, or whoever (Wedgwood, Fox Talbot, ...), getting pay-checks from people/companies that wanted to publish porn?

I'm not saying, yet, that you're wrong - history is often surprising.

By wire recordings, do you mean the audio recordings on wires that predate reel-to-reel and such? Are there existing audio porn recordings from the 1890s? It seems strange, given the cost of the tech, that anyone would even want it as a recording when the people who could afford it - given the massive difference in income and the availability of prostitutes - could order a live rendition. Stranger things have happened though.

Telegraph? Morse or semaphore porn, ... I'd have thought that was really an exception to Rule 34!?

Pray tell, more details.


  THAT FEELS GOOD STOP NO DONT STOP STOP


Some of us remember "X-ray glasses" advertised in the backs of comic books. 'nuf said.



Where I work we sell shipping containers filled with products of various kinds. It would be incredible to fit a customer with these and walk them around the containers, open the doors, and interact with what's inside.

Normally you'd need a large open space (and a forklift) to demonstrate the product but with this? You could do it in a large room.

I think this is actually the killer app for anyone too.

Imagine shopping at Amazon and being able to manipulate a product in your hands before you buy? Want to buy studio monitors for the PC but don't know where to place them? Whip out your HoloLens and put them wherever you want!

How would that piece of art look on your wall? What about the other wall over there?

IKEA? Hmm... which couch looks best in my living room?

Imagine seeing a marker shooting into the sky, like an old movie premier spotlight, that marked the locations of your friends and family? Or just anything.

At an amusement park and don't know where the nearest bathroom is? Follow the blue arrows on the ground and you'll find it.

The possibilities are endless.


Star Trek's Holodeck. Enter a large laser tag room with these on. Now you can project anything in the room. Unlike Oculus you can actually run around and play in the environment because you can see where you are going.

Virtuix Omni is a hack to make Oculus physically immersive: http://www.theverge.com/2013/6/11/4419832/virtuix-omni-vr-ha...


They bought it a few months ago. Minecraft.


"I sculpt a virtual toy (a fluorescent green snowman) that I can then produce with a 3-D printer."

This sounds amazing! She didn't go into a ton of details on this, so I wonder if it's as cool as I imagine.

If done well it could be a game changer for 3D printing (and a "killer app" for many families).


They designed and printed a working quad copter. So there's that.


I argue it is going to be in the field of CAD editing and display. The whole field is totally set up for these "displays" (which, at bottom, is what they are), and the bulk of 3D models are being produced in that field and are ready to integrate immediately.


Minecraft


Hmm... HoloLens + Minecraft = Oasis Console? [1]

[1] http://en.wikipedia.org/wiki/Ready_Player_One


A greenhorn tradesperson way out in the middle of nowhere looking at some broken machinery and having no idea why it's failed, calling up their boss with 40 years of experience, and having the boss, sitting on a comfy couch back at the office, look at the problem, explain it, and draw diagrams of how to fix it in 3D space in front of the greenhorn.


Virtual hangout--do things with friends even if they're all over the world. It would also be extremely useful for remote work.


You could see their faces, if they're in front of webcams, but they couldn't see your face (because you'll be wearing these.)


Virtual avatars? They seemed popular in the past, but maybe the general public wouldn't want them today.


Gaming. One of the better things compared to Oculus is that it can interact with real objects.


Virtual holographic Cortana assistant :)


The 3D modelling is already done!


Is it really "holographic", or just stereoscopic?


It's "augmented reality", so multiple people can walk around and see the display from all angles at the same time. So it's beyond stereoscopic, but it's not a traditional hologram. (It's much better than the Tupac "hologram" which was a single 2D image!)


It works by stereoscopy. There may be something "holographic" in the math used to compute what to show each eye, but there is no projection of light into space to form holographic images that people can walk around.


Nor is there with holograms. Holography requires creation of light fields from a flat surface - each point on the surface reflects a different amount of light depending on what angle it is viewed from, exactly mimicking the way light would pass through that plane if an object were there. No 'projection of light into space' is involved.

Since images are formed on your retina by focusing real lightfields, a true holographic display which produced a complete lightfield would be much more realistic and comfortable to view than a flat stereoscopic image is.
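The parent's distinction can be made concrete (the notation below is mine, not from the thread):

```latex
% A holographic surface reproduces the 4D light field leaving its plane:
% radiance as a function of position (x, y) and outgoing direction (\theta, \phi).
L(x, y, \theta, \phi)
% A stereoscopic HMD instead fixes one viewing direction per eye and
% collapses the angular dimensions to a single sample each:
I_L(x, y) = L(x, y, \theta_L, \phi_L), \qquad
I_R(x, y) = L(x, y, \theta_R, \phi_R)
% With the full angular structure missing, the eye cannot refocus at
% different depths: every virtual object shares one focal distance.
```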


I don't know enough about holography to agree or disagree, I was under the impression that the "lightfield" has a 3D structure that e.g. the light coming from a movie screen doesn't.

In any event I don't think that the "holographic" goggles are actually projecting a complete lightfield. I'm pretty sure they just shine two more-or-less normal images into your eyes although the math to compute those images might, uh, be holographic.


You can capture a lightfield using a 2d sensor https://www.lytro.com/ It's like the way your eye can re-focus on different distances without moving. That's something you can't do with MS's new tech - everything in the image will be in focus at the same distance, even if your eyes are getting different images.


I doubt it actually has a holographic display; I assume the processing power of those glasses isn't enough, and current displays don't have a high enough resolution yet. But then again, the following description could be of a holographic display (or it could be a description of an antireflective coating -- those dumbed-down explanations are sometimes worse than useless):

> To create Project HoloLens’ images, light particles bounce around millions of times in the so-called light engine of the device. Then the photons enter the goggles’ two lenses, where they ricochet between layers of blue, green and red glass before they reach the back of your eye. “When you get the light to be at the exact angle,” Kipman tells me, “that’s where all the magic comes in.”


Yes, that description certainly -sounds- like they're doing something more than just suspending two stereographic LCD displays in front of your face. Talk of angles suggests they may be doing something to stimulate the correct lightfield passing through the pupil of your eye... but really, we need more technical reviews to know more.


the demo looked very holographic


mostly marketing


I just finished reading Vernor Vinge's Rainbows End [1], and then this comes out!! I want it now!!! This is so the way to go. If you are a sci-fi fan, you have probably heard of Vernor's books; if not, I really recommend them. They have actually given me a brighter outlook on the future of humanity :) Apart from Rainbows End, check out Zones of Thought [2] if you want a glimpse of what space travel could become in the future.

Back to the AR subject, I much prefer AR over true matrix style VR. Let us stay in the real world, move about the real world, extend it with the virtual.

[1] http://www.amazon.com/Rainbows-End-Vernor-Vinge/dp/081253636... [2] http://www.amazon.com/A-Deepness-Sky-Zones-Thought/dp/081253...

[Edit: Grammar]


How is this different than the Meta AR glasses? I have a pair, and I think they did a great job with the display and interaction.


Hey, did those end up being any good?

I wanted to buy some to develop on, but they didn't really seem to have their act together.


I tried some out at a hackathon and they look cool, but the technology still has a long way to go. The resolution and display quality are pretty shitty.


I think they have potential; they are very clunky right now, but they have most of the same features as the Microsoft ones. It was really cool to "touch" a 3D object.


Hey! We're having our first Meta AR Hackathon in SF Feb 20th-21st. Come and hack on our Meta Glasses! Register here - goo.gl/b6BIWN


They were the first thing I thought of when I saw this. Seems a direct competitor with Meta more than it is with Google Glass.


The craziest part about this is that they were embargoed for four months.


Yeah, I'd guess that someone leaked to Wired accidentally or on purpose (but the person wasn't cleared to leak), and to keep any info from getting out they gave Wired an exclusive on condition that Wired keep its mouth shut until today.


It was almost certainly just an embargo in exchange for exclusivity until launch.


Skeuomorphic UI design makes a comeback ;) It's ironic that after they pushed for flat UI, this device is going back to 3D UI (obviously 3D is the whole point). It might look bad if the light sources of the rendered objects are not consistent with your surroundings.


It makes the Minecraft purchase make a whole lot more sense now.

I mean, it made immediate sense as a cash cow, but long-term Minecraft is perfect: it's simple, easy, forgiving and fun - a perfect entrance into VR/AR.


Why do I get the feeling this is more akin to the trailer for a movie? One that I really, really want to like. And with decent editing to fit into a small demo, it looks bloody awesome. But on arrival it will mostly just be boring.

I also feel that the videos give a very misleading sense of what it will look like to see someone using something like this. Unless, I suppose, they have worked out what the "shared" experience would be like. (That is, two of these in the same room.)


http://qz.com/330921/this-is-the-difference-between-microsof...

More close-up pictures of the device in that link. There's some sort of camera/light emitter above the left and right eyes. Two panes, and some sort of smaller HUD in front of the right (and left?) eye, though it might actually be part of the second glass pane.


Wow. This looks seriously impressive and based on everything we know so far, it looks far better than Oculus Rift and Sony's Project Morpheus. I can't really afford such purchases at the moment because of an upcoming wedding, but I am definitely going to get one of these when they become available. Seriously, look at that Minecraft demo, impressive.

Looks like Microsoft has just upped the ante in the virtual-reality goggles race. My mind is racing with excitement over all of the applications this could serve. The tech behind how these glasses actually work is also pretty clever; I don't entirely understand it, but it seems to be more than just an OLED display, lenses and a driver like existing solutions. I legitimately feel more excited for this than I have been for Oculus and Morpheus.

Another clever little thing Microsoft has done here is calling it a holographic headset, not a VR headset. The different wording not only separates Microsoft from the competitors referring to their headsets as virtual-reality headsets, but it also makes much more sense (from a technical and branding perspective).


This makes me want to get a dev kit and create a prank exploit that puts a 3d clippy in the corner of your view that you can't get rid of. :D


Key feature: see-through glasses. The eye strain that was a major problem with Google Glass isn't present here.


Yes, and Oculus Rift is now less relevant as well (good timing for selling it to Facebook). Why buy a dedicated heavy wired non-see-through helmet? To shit yourself when someone taps your shoulder?


To get fully immersed in another world. For example, playing games or watching a movie. I go to an IMAX theatre because it engulfs my senses, audio all around me, most of my field of view watching the screen. If the screen were translucent it simply wouldn't be the same.

This will no doubt have some incredible applications, most of which augment reality. Not quite the same goal as Oculus.


Correct me if I'm wrong, but couldn't Microsoft just release (or ship this product with) a sort of blinder to put around these glasses, so that all the light comes from the glasses themselves while everything else is darkened?


Certainly possible. It'll be interesting to see how feasible it is for these goggles to render entire 3d worlds in this scenario, since normally they'd be rendering a small fraction of the space you're in.


If the speculation about creating opaque holograms is correct, you can just use a virtual blinder as well - a black hologram that obscures or creates a virtual stage.


That is an awesome hack that basically allows really low cost immediate context switching.


Or just turn out the lights.


Exactly. MSFT and Oculus are not at all targeting the same thing. The comparison is natural, but ill-fitting.

I bought an oculus and am excited by VR because teleportation IS FREAKING AWESOME. AR is cool, but a far cry from the sense of presence that we talk all day about in VR land.


I don't think Oculus wants to be the IMAX of virtual experiences.


There will still be room for full VR and its applications with a mature AR device like HoloLens.


This thing is AR, Oculus Rift is VR. Oculus' vision is Star Trek's holodeck.


Seems to me this is about adding onto reality rather than replacing it entirely.


Although it could easily replace it entirely. See the Mars rover demo in the article; I believe it said the entire view was replaced with Mars, and it was realistic enough that his legs weren't believing what they were stepping on.


Well, you could add motion sensing and a camera to the device to help with that.


Is the plan that you develop apps for this thing using C# (or any .Net language) plus special libraries? Or will the somewhat real-time nature of 3D imagery integrated with the real world using a lightweight mobile device require something closer to the metal?


"Developers can target all these device types, with one platform and one store. And stay tuned later while our device types expand." They will not abandon that mission, they just committed to it! (video 3:00)


Someday there could be gloves that allow you to feel these "holograms" using cables that prevent your fingers from moving up/down, etc.

I also think this could be a short-term way to introduce full-body physical constraints in full-on virtual reality. (In the more distant future, we will probably know how to stimulate the brain directly to produce these sensations.) I envision a full-body suit (fitted to the user with near perfection) with a bunch of cables going in every direction. So if you try to push a virtual wall, the proper cables will be set to resist. Doesn't seem very practical, but the glove version might be.


Here is fresh "back-to-reality" report from Engadget journalist who just tried working HoloLens device prototype (at the same event where it was announced):

---------------

"Does it work? Yes, it works. Is it any good? That's a much harder question to answer."

"I say this in the nicest way possible: Using Microsoft HoloLens kinda stinks. In its current form, it feels like someone is tightening your head into a vice. The model being shown today on Microsoft's Redmond, Washington, campus isn't what you saw onstage, but a development kit. The demos begin by lowering a tethered, relatively small, rectangular computer over your head, which hangs around your neck by sling."

"You can literally feel the heat coming off the computer's fans, which face upward. It feels like you're wearing a computer around your neck, because you are."

There doesn't seem to be full hand tracking (as was suggested by the demo on stage). Instead it uses something they call AirTap, which uses gaze for pointing and the hand only for clicking:

"By looking at any of them and using "AirTap" (hold up your hand in front of your eyes, tap with your pointer finger), I could select any contact to call."

"While the effects of interaction were impressive, the actual interaction was less so. Rather than picking up a sheep with my hand by literally just grabbing it with my actual hand, my only means of interaction were voice (pickaxe! redstone torch! etc.) and the aforementioned 'AirTap.'"

The overall impression is kinda mixed (especially compared to how well even very crude early Oculus Rift prototypes were received):

"HoloLens is clearly very early, and kinda sucks right now. It's uncomfortable. It's cumbersome. It looks and feels like a piece of hardware that's far from final."

"Is it bad? No. Lord no. Stop it. It's very impressive, but it's a brand new entry in a market that basically doesn't exist yet."

---------------

http://www.engadget.com/2015/01/21/microsoft-hololens-hands-...

---------------

Given Engadget description of the actual device, here is a screenshot from Microsoft promotional video that probably captured it:

https://pbs.twimg.com/media/B76axLVIUAA4egZ.png:large

https://www.youtube.com/watch?v=IPmAwvmOXKM&t=12m21s

---------------

Another journalist impressions (Andy McNamara from Game Informer):

"Important first impression. In the videos I thought it filled your entire field of view, but it's more like a screen floating in space."

"I'd say it's like a 16x9-ish monitor floating about 7 to 8 inches just in front of your face."

https://twitter.com/GI_AndyMc/status/558039828328357888

---------------

Gizmodo report was more enthusiastic, especially about ability of display to hide real view, though noted "tiny" field-of-view:

"It's one of the most amazing and tantalizing experiences I've ever had with a piece of technology."

"It's not like the Oculus Rift, where you're totally immersed in a virtual world practically anywhere you look. The current Hololens field of view is TINY! I wasn't even impressed at first. All that weight for this? But that's when I noticed that I wasn't just looking at some ghostly transparent representation of Mars superimposed on my vision. I was standing in a room filled with objects. Posters covering the walls. And yet somehow—without blocking my vision—the Hololens was making those objects almost totally invisible."

http://gizmodo.com/project-hololens-hands-on-incredible-amaz...

---------------

Verge folks also liked it:

"... you look down at the coffee table and there's a castle sitting right on the damn thing. It's not shimmery, but it's not quite real, either. It's just sitting there, perfectly flat on the table, reacting in space to your head movements. It's nearly as lifelike as the actual table, and there's no lag at all. The castle is there. It's simply magic."

http://www.theverge.com/2015/1/21/7868251/microsoft-hololens...


>There doesn't seem to be full hand tracking (as suggested by the demo on stage)

The demo on stage clearly mentioned that you used your gaze to select and then tapped with your finger. Which is also exactly what we saw the lady on stage doing.


The promotional videos look impressive. The demo, though, not so much: gesturing seemed awkward, and clicking in the air is not an interface. I guess this could easily be improved using haptic feedback.

Of course, the naysayer in me is just thinking this is a PR move to tell investors and stakeholders everything is alright and that MS got their backs covered by stepping into the future. Which is not that unusual, Google and Amazon are doing it too.


This looks just like Magic Leap.


Except Magic Leap is only a concept video right now, publicly at least. This is a product ready enough for reporters to try.


Kind of explains why they bought Mojang for Minecraft!


It was a bit irritating to see the kid running up to the model of the rocket ship with excitement and no goggles on. Why lie and make it seem like you'll be able to see these "holograms" without the goggles on? Everyone walking around with these big ski goggles strapped to their head seems worse than Google Glass, which is awkward enough as it is.


I understood from the video that it had been 3D-printed


This has the potential of making being rich kind of...obsolete.

Think of it like this: can't afford an iPhone? There's a holographic iPhone you can download. Can't afford a fancy big-screen TV? There are thousands out there you can download, etc.

This will all depend on the quality / ease of use of Holographic Windows, but I can already see it's the future of computing.


The last few major purchases I made with my relatively plentiful disposable income: a pair of Doc Martens, two plane tickets to Kenya, the yearly membership fee for my concierge medical clinic. None of these could be replaced by this technology. Arguably, I could make it look like I'm wearing new shoes when I look down, and I could load up "Kenya" mode on my holographic goggles -- but nothing will replace the feel and protection of good shoes, or replicate the smells, tastes, and adventure of actual travel.

Also, I still own an iPhone 5 and don't have a particularly fancy TV...

The same argument has been made (to a greater or lesser degree) with the advent of the industrial revolution, the Internet, cheap processors, and 3D printers. There is no technology that will make wealth obsolete.


I wonder why they didn't demo an outdoor use case.

I'm curious if this technology is closer to oculus rift than to google glass.

I'm thinking maybe they use a front-facing camera to capture the scene and render the 3D stuff on top of the camera view? A user would simply see the composite scene via a display similar to the Oculus Rift's.


Maybe the hand tracking for the controls doesn't work outside. At least with the Kinect, the IR projection it depends on to map 3D space doesn't work well (or at all) in direct sunlight. When I was working with robots that had Kinects mounted for 3D vision, we always had to put the blinds down when the sun was shining into the office.


They're translucent lenses. They presumably use the mounted cameras to map the world, but they don't use them to actually display the world to you.


I'm pretty sure he said in the demo that it did not use a camera.


Seeing something like this without knowing how it works drives me crazy!


I watched the live demo and was really excited about this. Mary Jo Foley and Paul Thurrott got to use it and are really excited too. They also mentioned that it will be released this year. They talked about it on TWiT. Paul said that it looks as good as the pictures on the holo website.


What is the resolution of the device? I really doubt it will be as HD as the promo video that they created.


In the live demo it wasn't as high, but it was still really good.


The HoloLens description reminds me of the VR technology depicted in Arthur C. Clarke's "The Light of Other Days". It's interesting to see how many of the things he predicted we have achieved, through different means than he imagined.


The technology looks very impressive, but I do not see how this will be widely useful. Holding your arm out as far as it will go only causes gorilla arm, and is the reason touchscreen desktop PCs are always abysmal failures. You need to move your arm far more than with a keyboard and mouse, so you'll never be productive. I can't see it being useful unless it's for holographic weather reports, where excessive gesticulation is apparently mandatory. 3D modelling with that would be exceptionally painful. Perhaps gaming or something, but I wouldn't want to sit there in my armchair waving my arms around like I'm suffering some sort of uncontrollable seizure (no offence intended, btw, just the best way of describing my actions to onlookers).


Might allow them to reinvent their "Windows" theme when people start interacting with their environment through some sort of lens. It will act as a 'window' to the world.


I don't understand why there is no comment about probably the most important aspect: the image quality. Is it low resolution, or so high-def that you can't see individual pixels?!


engadget:

-field of view is extremely limited. There's a rectangular area in the center of your vision that acts as your "window"

-image was relatively transparent

-HoloLens is clearly very early, and kinda sucks right now.


A problem I have with this is that it blocks so much of the face that it hinders subtle facial expressions.

Notice that on all the video calls they show the other person isn't wearing one?

(That and it is rather ugly, too.)


Hey! We're having our first Meta (YC S13) AR Hackathon in SF Feb 20th-21st. Come and hack on our Meta Glasses and have a hands-on AR experience! goo.gl/b6BIWN


Anyone fancy a game of dejarik? https://www.youtube.com/watch?v=mO6M4ngKRp0


Curious about Skype (including Lync, which will be rebranded as Skype).

Seems like a strong use-case but how will they render people on the call if participants are wearing a headset?


I think Microsoft just gained back its cool factor.

Now let's hope .NET also becomes 'cool' to use among developers. The current perception is pretty bleak.


They should have gone all in on IronRuby; it could have been a huge competitor to JRuby for hip coders in the enterprise.


This is truly amazing. I’m super excited. It might be time for me to get my hands dirty with C# again :)


Unlike the usual meme, in this case the goggles do something. Be interesting to actually try one.


This is one of the very first times I've been happy to be a .NET developer.


I know this is the future, and I feel somewhat weird in saying this but I am not quite happy. I feel like technology is coming to replace the physical world instead of augment and improve on our actual experiences. Either way it's some awesome work that was done.


This is 100% designed to augment our actual experiences. That's why they said "this is not VR". It's putting digital objects in the real world. That's augmenting actual experiences.


This discussion is happening quite often in the VR world. Essentially the accepted opinion is that 1) AR is harder than VR and 2) solving the problems that VR has will get us closer to usable AR


[deleted]


They're calling it "HoloLens".


I'd settle for that holographic puppy at the end of the video!


"all the fun of schizophrenia in a hat" ?


Was anybody else slightly disturbed by the Raytheon product placement? http://i.imgur.com/ekogOBL.png


It seriously took me several minutes to realize this wasn't a joke. Even Peter Bright on the Ars live blog had to stop and say "wat".


The name doesn't quite sound right, which triggered the same response in me.


Probably because it's more AR than Holo. But meh, still wicked cool.


I'm going to take a guess: Microsoft is going to give these out to devs at Build this year.


Now, this is going to be the most exciting news of 2015 for me. Well done, Microsoft.


The future is here.


holographic nipples everywhere


It will definitely stamp its name on the page of history.


I might be ignoring something, but this appears to be the first ahead-of-the-pack innovation from Microsoft that I've seen in at least 20 years. Congrats to them.


Kinect was pretty surprising too, especially at that price point. EDIT: Granted, they didn't develop 100% of the tech themselves, but still: depth perception, human skeleton recognition, in real time, on a crappy 360, for $150? That was impressive.


Same guy behind this and the Kinect project though - Alex Kipman.


I can't see this becoming a consumer product.


You're all plebs


Google Glass made you look like an idiot.

HoloLens does the same but if you believe their product video: you _are_ an idiot.


That's pretty evil/clever of them to throw Minecraft into their marketing images.


They have a vision, Minecraft was fitting that vision. So they consumed it. It was smart. Project Spark makes more sense now as well.


Totally agree. I meant it as a compliment.


Is Spark any use? I just saw "free" and assumed some pay-to-win scheme.


Apparently one of the demo experiences is HoloLens Minecraft, so it's not really evil/clever but actually what it can be used for.


Once again, Microsoft is slow to produce a product.

In this case, the product is a "moonshot" at changing the UI.

No wait, the product is a PR event that just says. "We can innovate too, you know"

If this fails, they will be late again in canceling a VR/Augmented reality product that should not have made it past beta. See Glass, Google.


Haha alright Captain Buzzkill, let's give it more than one day before we start burying this one.


This is really cool, but I'm kind of wondering when Microsoft will cancel the project. Because, you know, that's what they do.



