I've been working on AR and related technologies for close to a decade, and I was among the first handful of people working on Google Glass. Bottom line: I've seen a lot of promising AR technologies come and go.
My personal take is that they may indeed have some very good, if not revolutionary, display technology. However: the big, big obstacle to delivering credible AR is latency. Unlike VR, true see-through AR needs total latencies (device motion --> display photon hitting the retina) of no more than 10-15 ms. The reason is that in see-through AR you're essentially competing against the human visual system (HVS) in latency, and the HVS is very fast.
Moreover, the HVS is also extremely good at separating visual content into "layers". Whenever two things in your field of view don't move in perfect continuity with their surroundings (as happens when AR content is overlaid with latency), your brain will immediately separate them from one another, creating the impression of layers and, in the case of see-through AR, breaking the AR illusion.
So right now I'm a semi-believer. Iff they can sort out the latency problem and deliver stable yet ultrafast tracking in a wide variety of conditions (also far from a trivial problem), then this has a bright future.
The first iteration of a good AR system could simply sidestep the latency issue by embracing layers.
Magic Leap should skip the fancy stuff (mixing virtual scenes with the real world), at least at first, and focus on the many other useful features of a great head-mounted display system - think mobile notifications, video calls, web browsing, etc.
It could easily replace smart watches and later cell phones and computer monitors without solving the latency issue.
Would it be possible to artificially delay the world by 15-ish ms? A person would have to wear a full headset (so it'd be more like VR than AR), but perhaps it could deliver a time-delayed view of the world only once the augmented pieces are ready to render.
Edit: you'd still have the motion-sickness challenge, but perhaps at least the 'layers', so-to-speak, wouldn't appear separately.
No. The important thing is keeping your sensory inputs in sync with your vestibular system. There were some research questions about hacking the vestibular system a few years ago.
But in VR we can have even lower latencies for synthetic content.
Because we have the head tracker's recent history, we can use prediction on the pose trajectory and effectively know where the head pose will be at the time the current frame is actually displayed, and use that predicted pose to render the scene. That type of optimization won't be possible with video see-through or optical see-through AR.
The second optimization is timewarp, where the rendered scene is distorted in screen space after the fact, based on post-render tracker data (just a few ms before display). I wonder if that type of optimization would create artifacts in AR.
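For the curious, here's a minimal sketch of the idea behind both tricks (constant-velocity pose prediction plus a late screen-space shift). All numbers and names are made up for illustration; real implementations are far more involved:

    import numpy as np

    def predict_yaw(t_hist, yaw_hist, t_display):
        # Fit an angular velocity to recent tracker samples (constant-velocity
        # model) and extrapolate the head yaw to the moment the frame is shown.
        omega = np.polyfit(t_hist, yaw_hist, 1)[0]          # deg/s
        return yaw_hist[-1] + omega * (t_display - t_hist[-1])

    def timewarp_shift_px(yaw_rendered, yaw_latest, fov_deg, width_px):
        # After rendering, re-read the tracker and shift the image in screen space
        # by however far the head has actually rotated past the predicted pose.
        return (yaw_latest - yaw_rendered) / fov_deg * width_px

    # Toy example: 1 kHz tracker, head turning at ~200 deg/s, frame shown 16 ms later.
    t = np.array([0.000, 0.001, 0.002, 0.003, 0.004])
    yaw = np.array([10.0, 10.2, 10.4, 10.6, 10.8])
    yaw_pred = predict_yaw(t, yaw, t_display=0.020)          # ~14 deg
    print(yaw_pred, timewarp_shift_px(yaw_pred, yaw_latest=14.1, fov_deg=90, width_px=1080))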
Since you're an expert: what about these videos is hard? The things that jumped out at me are:
1 - the robot moving behind the table leg (i.e., you have to determine the depth of objects in the scene)
2 - the user's hand interacting with the artificial elements in the scene. Some code had to recognize a hand and figure out which element it was touching.
What strikes you as the hard parts of those videos besides the real-time requirement?
Well the second video is a mock-up. In the first video notice that a) the observed things are floating in space and b) the camera motion is very smooth. This is how they sidestep the "layering problem" in the video. The desk leg occluding the robot is probably done using a depth sensor.
These two things are non-trivial, but not particularly hard in themselves. However, doing them at ultra-low latency becomes quite a challenge. Doing anything at ultra-low latency is already a challenge, but especially so when what you're trying to do is run a deep neural net for entity recognition or gesture recognition.
Training an ANN is computationally intensive; using a trained ANN is not. No context switching for system calls, no memory management, just matrix math.
Well, first you need to know which image regions to feed to the ANN, and that can involve some segmentation and pre-recognition; otherwise you're going to evaluate the net at every feasible subwindow, and that's a LOT of matrix math. A very big GPU can help, but GPUs have latency of their own, and FPGAs at that performance level are inordinately expensive.
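To put rough (made-up but plausible) numbers on "a LOT of matrix math":

    # Back-of-envelope: cost of classifying every sliding subwindow of a frame
    # vs. one pre-segmented region of interest. All figures are illustrative.
    frame_w, frame_h = 640, 480        # camera frame
    win, stride = 64, 8                # detection window size and stride
    flops_per_window = 50e6            # assume ~50 MFLOPs to classify one 64x64 crop

    n_windows = ((frame_w - win) // stride + 1) * ((frame_h - win) // stride + 1)
    print(n_windows)                                                      # ~3,900 windows
    print(n_windows * flops_per_window / 1e9, "GFLOPs/frame, exhaustive") # ~190
    print(flops_per_window / 1e9, "GFLOPs/frame, single ROI")             # 0.05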
Done at scale, though, ASICs seem to be the sure-to-work way.
I'd be very surprised if a modern CPU couldn't handle the task, especially if you were clever about detecting regions of interest, predicting head movement, and managing the cache. But I'd also be surprised if they go to market with an x86 under the hood.
I remember reading a while ago about how smart TVs were using ANNs for upscaling, so it has been done at scale. rimshot
(1) TVs don't have strict latency requirements. I've heard latencies of 100 ms are common.
(2) Upscaling ANNs process a rather small image neighborhood radius r, and the required processing power is on the order of O(r² * log r). If a minimally recognizable cat is 50x50 px and upscaling uses a (very large) 16x16 window, that's already about a 14x difference.
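(For anyone checking that factor: it appears to come from the r² * log r cost model, assuming base-2 logs. A quick sketch:)

    import math
    cost = lambda r: r ** 2 * math.log2(r)   # the O(r^2 * log r) cost model above
    print(cost(50) / cost(16))               # ~13.8, i.e. roughly 14x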
Latencies of 100 ms may be common because TVs don't have strict latency requirements.
16x16 is a very small window. I have no idea what they're using for TVs, but 128 isn't uncommon in post-production ANN upscaling. Also consider the fact that ANNs have not received anywhere close to the level of optimization attention that compilers have, so there is a lot of potential slack to be taken up if real-time processing demands it.
1 - or have a premade 3D environment model and do accurate position tracking. Position tracking is a LOT easier to do in real time.
2 - a bullshit CGI "this is how we hope it would look if it were real" demo
A few months ago their apparatus was one color only, stationary, and the size of a desk. Now all of a sudden it can be strapped to a camera and does color? Color me skeptical :(
It's easy to think of reasons why this isn't a sound investment. However, here are some thoughts why this may be sensible:
* This is largely Google's investment to hedge against the possible future success of Facebook's Oculus or Microsoft's HoloLens.
* They could have valuable intangible assets: patents or employees. This is way past "acquihire" funding levels, but perhaps the technology itself is valuable. Perhaps they justify that valuation with 100M/year in patent licensing. For perspective, IBM Research provides ~O(1B)/year in revenue from licensing patents.
* Magic Leap has a technology that is going to revolutionize entertainment consumption. It could simply be good execution of augmented reality, but I don't think that's sufficient to get the market excited enough to stop using their mobile devices or TVs for most of their entertainment. At best it seems like a "better mousetrap" than Oculus or HoloLens.
I'd love to hear other thoughts why this could be a useful investment.
I guess this makes me a pedant, but it bothers me to see big O notation used as a shortcut for "within an order of magnitude." It would probably be fine if everyone on HN had taken an algorithms class and knew the formal definition. In reality, a lot of people here are still learning what that capital O thing means, and if they see it used as "within an order of magnitude" we're doing them a disservice. (As someone who wormed their way into the field without a CS degree, I had to learn it on the job, so maybe it matters more to me than it should.)
The point you were making, however, is well taken. There's just no way for us to know what patent-related assets are involved in a deal like this.
Yeah, O(1) and O(1B) are the same. However, 1B is o(0.01 * (age of the company)), even though the sun will have exploded before a company whose revenue is 1 cent times its age reaches 1B.
As I compete with Magic Leap, I'm probably in a pretty good position to answer this question.
The main reason why ML is able to attract this level of investment is that their technology is literally decades ahead of what anybody else has. I haven't had the opportunity to try their headset out, but I know some people who have, and I haven't heard a single bad thing about it. The closest metaphor I can think of is seeing a GUI for the first time or seeing a TV set for the first time. I've used almost all of the other technology on the market or in development, and nothing comes anywhere close to what Magic Leap has.
Not many people realize this, but Microsoft, Apple, Facebook, Google, and everyone else are pouring millions into AR/MR technology. You only hear about Google and Magic Leap because internal R&D budgets aren't public, but this is a huge area of research for everybody. A number of the largest tech companies have recently opened up small research shops in Seattle, Israel, and Cambridge, where AR/MR researchers have traditionally been based.
Proper AR/MR won't revolutionize entertainment consumption. It will revolutionize all of computing, especially productivity. That's one of the reasons why I love AR/MR so much more than VR, even though I've never experienced AR/MR that's anywhere near the quality of good VR. Good AR/MR is a significantly bigger leap for computing than iOS and Android were over Blackberry, Palm, and Windows Mobile. Again, the best metaphor I can think of is the GUI vs. text-based computing.
Lastly, ML does have an amazing patent portfolio and a world-class workforce that would be a dream acqui-hire for any large tech company, but top-notch VCs don't invest in startups for their patent portfolios.
> haven't had the opportunity to try their headset out, but I know some people who have
It's easy to say technology is decades ahead when you haven't even tried it. Rumor/second-hand retellings can be very powerful. I'm really excited to see this tech, but I'm also really skeptical.
Definitely true (the Segway is a great example of this), but I've also talked to people who were involved in developing the technology academically and have studied their patents in-depth, and I'm pretty certain they're the real deal.
Recently on the Voices of VR podcast, Dr. Thomas Furness mentioned that he also tried it, and that it was based on his lab's (the Human Interface Technology Lab) virtual retinal display technology/patent. I'm guessing this is what you're referring to?
That must be intimidating to be going up against a company with that level of investment, hype and technology that you state as being "decades ahead of what anybody else has." Are you able to share what company you work for and how you are planning to compete?
Also, what would you consider the best consumer AR and VR experiences available on the market today, or expected to be on the market next year? Super interested in this space, but so far have only been able to demo the GearVR in a BestBuy and have an Unofficial Cardboard 2+ for my crappy Nexus 4 at home.
Can't currently share any details on who we are or what we're working on, but we have a technology that's almost as good from an experience perspective even though they're way ahead of us from a technical perspective. LaserDisc was way ahead of VHS and Betamax technologically, but VHS still won because it provided a better overall experience and was significantly cheaper.
The best consumer VR in my experience is the HTC Vive. I recommend you check it out if they take it on tour to where you live. I haven't tried the production version of Oculus, but I've heard excellent things about it, and I thought Crescent Bay was quite good. PlayStation VR isn't as good in my opinion, but most people in the industry expect it to be the most popular with consumers, as it doesn't require a high-end gaming PC.
There isn't any good consumer-level AR experience that will be on the market in the next year.
Neal Stephenson is also paid by them. And he writes fiction. Thoughtful, interesting fiction. I remember when lots of credible people said that the Segway would change life, and whole cities would be built around it.
Great points. While its market is limited, I think Skully [0] is an interesting case of where AR can be more than just an entertainment device, in fact becoming a much more valuable safety device.
I think this is really underrated. Imagine how many telephone engineers look at one of those junction boxes and want to know which wire is hooked up to Mr. Jones' house. Or a mechanic who wants to know which spanner to use on a bolt. Or a nurse who wants to check a sleeping patient's biorhythms. Or emergency responders who want to glance at a disaster and see routes through it to victims.
What do you mean by revolutionizing computing? I assume you mean the way we interact with computers, not something like computer processing speed.
I definitely agree that AR has huge potential to change how we interact with the world, although I'm much more interested in technology that leverages things like projectors to augment my reality without having to wear a headset. Very related things, though.
I personally think headset-less AR is a pipe dream. You're either using projection mapping without any real sense of object/depth or you need real motion holography which doesn't even exist in research labs. I've tried CastAR and it's a huge disappointment.
In what specific ways is their technology better, from what you've heard? HoloLens supposedly has a narrow FOV and you need a relatively dark room due to the additive display. Have they really solved these problems?
They're using laser-scanning retinal projection, which could hypothetically produce objects indistinguishable from the real world. The two biggest benefits of this are real depth (you can focus on individual objects, and it avoids the vergence-accommodation conflict) and ultra-high resolution.
I have a source who's told me they've solved FOV, but I can't figure out how their technology could do that from a technical perspective. Hypothetically you don't need a dark room for ML's technology, but their demo video was darkened, which makes me wonder if there's some limitation there.
I haven't updated that website in almost three years, and most of the content on it is much older. I should probably take it off my HN profile. I'm currently a college student with an AR/MR startup.
> Perhaps they get around that valuation with 100M/year in patent licensing.
Patents expire 20 years after filing, so at $100M p.a. the deal for investors is: spend $1B to get roughly $2B back, spread over two decades. I'm not sure that even beats fixed interest.
Even with continuations, it's still around a 10% return.
And the clock starts from time of filing, not when you first put together a salable proof of concept, so for patents like these, which are not trivial to implement, you won't get all 20 years out of them.
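(For anyone who wants to check the arithmetic, here's a rough annualized-return sketch for "spend $1B, collect $100M/year for 20 years", found by simple bisection. The flat cash flow and full term are simplifying assumptions:)

    # Internal rate of return for: pay $1B now, receive $100M/year for 20 years.
    # All amounts in $B.
    def npv(r, payment=0.1, years=20, price=1.0):
        return sum(payment / (1 + r) ** t for t in range(1, years + 1)) - price

    lo, hi = 0.0, 1.0
    for _ in range(60):                  # bisection: npv is decreasing in r
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    print(round(lo * 100, 1), "% annualized")   # roughly 8%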
> For perspective, IBM Research provides ~O(1B)/year in revenue from licensing patents.
That is a (common) myth. If you parse IBM statements closely, they talk about that number for intellectual property licensing. I'm not aware that they have ever made a precise statement on patent revenue.
Magic Leap has a patent application for "contact lenses" technology [1]
Imagine if instead of having to put somewhat ridiculous and obtrusive glasses in front of your face, you could just use contact lenses that had this augmented reality capability.
The main problem is power. There's no way to project light onto the retina without power, and it's difficult to get power to the lens without using wires. Induction is promising, but the falloff is dramatic, meaning your inductive power source would have to be pretty much on top of your eyes to power your contacts.
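(A rough sketch of how dramatic that falloff is, under the idealized assumption that coupling between small coils scales as 1/d³ and delivered power as its square:)

    # Idealized small-coil estimate: coupling ~ 1/d^3, so delivered power ~ 1/d^6.
    # Purely illustrative; resonant links and larger coils do better than this.
    ref = 0.005                                   # 5 mm: coil practically on the eye
    for d in (0.005, 0.02, 0.05):                 # 5 mm, 2 cm, 5 cm separation
        print(f"{d * 100:4.1f} cm -> {(ref / d) ** 6:.1e} x the power at 5 mm")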
Glasses plus contacts might be a good idea. The generator could sit on the frames. The contacts would allow you to use a small amount of light to get the same effect as an Oculus, which means much smaller power requirements.
Still, to project light you need some kind of emitter, and it's hard to imagine one that can be embedded into a contact lens while remaining transparent. And if it's not transparent, it's not really augmented reality.
Fun to imagine. Hopefully someone will come up with something.
Do you need to demonstrate any kind of working model in order to patent something? As far as I'm aware, you don't. I look at this as an aspirational patent - frankly, exactly the kind of thing you'd see from a company with so much money they can spend all day dreaming.
I read it the same way, that the contact lenses would act as filters for an external source of light.
Edit: Specifically: "Referring to FIG. 17, in one embodiment it may be desirable to have a contact lens directly interfaced with the cornea, and configured to facilitate the eye focusing on a display that is quite close (such as the typical distance between a cornea and an eyeglasses lens). Rather than placing an optical lens as a contact lens, in one variation the lens may comprise a selective filter."
Let's hope so. We see a lot of promises with this and HoloLens, but somehow production versions are still not here. I'm still not really sure if the latter isn't just a prank, or at least mostly post-recording CG.
> they're basically building Tony Stark's home computer interface in Iron Man.
Since you're bringing up Marvel - I fear the reality, at least with first released products, will turn out somewhat like this:
HoloLens has been seen and used outside of the company, however. They have been touring several cities and showing off HoloLens units to developers. I am not an MS employee, but I did get to try it out and do some Q&A with HoloLens developers.
Yeah, that was the first example I thought of too in terms of recent cautionary tales. I think there are some key differences though, namely that, as a life-science company, Theranos' tech is much, much harder to scrutinize and verify with certainty. It's why drugs have to go through such extensive trials before being brought to market.
There's a certain amount of seeing is believing to Magic Leap. If I were to wear a Magic Leap headset and was able to play the game that they demo, I would sign on in a heartbeat. You can fake a demo, but you can't fake the game. And I'd imagine investors at this valuation/amount of money (which btw is 10x what Theranos raised total!) are definitely getting into the nuts and bolts, making sure that the tech isn't being faked.
The rumor is that they intend to scale up a silicon photonic chip fab, which is very ambitious, and expensive. It will either be a spectacular failure, or a bold step into the future.
In the first video you will notice that all of the augmented-reality images are presented as light added over the background. That is much more plausible than the second video, where there are objects that can be darker than the background or even opaque. Maybe they have some solution for that, but I would not expect it based on my knowledge of the area. In the next few years, I would expect any solution to those problems to come with other problems not visible in the second video.
In the patents they have filed, the occlusion is done with moire patterns. Based on other acquisitions and job listings, the patterns seem to be turned into occlusion masks with layered MEMS optical shutters.
Doubtful the second video is real. Weta Workshop (http://wetaworkshop.com/), whose logo appears in the top right, has "[provided] design and manufacturing for film, television and creative industries for over 20 years."
Along these lines - is there an example of a technology company that succeeded after raising so much money without proving their product in the market?
Screw the market. Personally, I don't care if this even sells (to make the general population buy it, they'll have to invent the silliest applications imaginable). The question is: does it really work as advertised? Both Magic Leap and HoloLens look way beyond the state of the art, and I'm still not convinced that what they show us isn't just plain marketing fabrication.
Their first (older) video was clearly marketing fluff. That may be their long-term vision, but I'm skeptical they are anywhere close. Even if they had a capable display, the code and computing power for a game in a dynamic environment like that would be incredibly challenging.
The second video (newer) though looked legit. Simpler interactions, and the disclaimer was pretty explicit.
There have been a number of flying cars over the years, such as the Model 59H AirGeep II. They're just expensive and impractical, so nobody ever builds more than a handful of them.
I never thought the "flying" part was the core technology... it seemed to be the invention of some non-traditional, low-noise propulsion and power source that could levitate heavy objects. I don't think we have that... if we did, the expense and practicality problems might be solved and we'd have them everywhere :)
To be fair... consumer-grade lightfield displays (or whatever you want to call the Magic Leap display) don't exist yet either -- so it's a technology problem too.
Except that Magic Leap's technology problem has had $1B in investment capital thrown at it -- much more than flying cars, no?
Yeah, that's pretty much my point. $1B might solve the technology problem, but no amount of money will solve the market problem (which I suggest Magic Leap, and hypothetical flying cars, don't have)
I agree, and interestingly I think the first technology problem that needed to be solved was not the flying element but autonomous, self-driving capability, which we will see within the next 3-5 years.
Looking cool on TV doesn't mean it'll sell. Movie OSes have every keystroke and action make a sound and use 48pt font for everything. I wouldn't bet much on such an OS selling in the market.
That's not a very convincing argument and doesn't answer the question at all. I already can't think of a reason I'd buy one, and telling me it'll be full of spam isn't helping.
I'm also a little disappointed they've raised so much money if their long term goal is just spamming people in "augmented reality".
Oculus got acquired without having released a product to be fair, and that looks like a phenomenal acquisition.
By the way, how good has Facebook been at acquisitions? Instagram alone has already gained enough to pay back the total dollars spent, and they also have WhatsApp and Oculus, which are both huge, important "companies". Good for Zuck.
Glaciers are melting; forests are burning; coral reefs are bleaching; ocean levels are rising; water tables are dropping. The ecosystems that sustain our food supply chains are collapsing; those that aren't collapsing are being poisoned; and the tiny niches relatively immune from these threats are being bought by hedge funds and private equity firms. Our education system is being gutted; so are our retirement plans. Our healthcare system seems to resiliently resist nearly all efforts at meaningful reform. Every movement we make, physical or virtual, is being relentlessly tracked, recorded, indexed and archived by an exponentially growing number of surveillance systems, private and governmental, in perpetuity.
And these guys are getting $1.4b so we can... shoot imaginary robots at the office?
Augmented reality may actually be the solution. Think about it: virtual items replace real ones, which is huge on its own for the planet. You no longer need a phone, laptop, TV, random toys, etc. No more driving to work; offices are no longer required, replaced by virtual ones you can join from anywhere, with all office equipment and staff virtualized. The list keeps going, and for everything you add to it, fewer of the planet's resources are used.
I think AR and VR, along with lab-grown animal protein of multiple varieties, will reverse a tremendous amount of our planetary fleecing. This vision is at least 20+ years away, but it is definitely a good answer to many of our problems.
Don't underestimate the positive environmental impact that replacing monitors and mobile phone screens might have. If this is successful and high-quality enough, there would be less stuff manufactured that requires nasty chemicals, less to recycle, and less energy consumed.
Not sure there's a positive tradeoff in terms of resources. And in terms of chronic physical strain (vision, neck muscles) there may be a negative tradeoff.
If we all lived in huts, subsisted on rice and beans, and travelled by bicycle, humanity wouldn't be threatening the stability of the biosphere; but of course, we're wired to crave richer experiences, we hoard, and we seek to acquire status which is demonstrated by how much excess cargo we have to waste.
The more aspects of the human experience we can ephemeralize, the less we need to go out and muck up the physical world to get what we want.
A just-released Pew Research study has shown that 1 in 5 are online almost constantly. Which is to say, their experience of the world is already largely virtual, and the role that their physical presence plays is, by extension, diminished.
In that case I would also like to hear your opinions about the stock market :D. In all seriousness, there are people and companies trying to tackle some of the problems you mentioned, and they are getting funded. Look on the positive side: tools and research from these guys can be used in education and medicine!
I understand most of the focus is on the entertainment market, but we can't blame a particular set of people for that. Magic Leap is essentially promising something we have been looking forward to for many years (ever since we saw it in sci-fi movies)!
I'm sure the technology will find practical uses, eventually. It's just that if immersive shoot-em-up games are to be understood as its main selling point for the foreseeable future (which, going by the video, is apparently the takeaway they'd like us to have), then you can count me as decidedly... unenthralled.
Except that all of those problems I mentioned are basically man-made, and the solutions are manifestly available to us... if only we'd care to get our act together.
In the face of the Black Death, our species was basically helpless (and there doesn't seem to be any scientific consensus yet as to how, exactly, it "ended.")
Augmented reality has been the sci-fi dream forever, but I'm really, really skeptical about its applications in real life. Not that they don't exist - the shoot-em-up game demoed in that video looks fantastic. The Gmail app looks horrendous. Worth billions? Depends on the implementation, and a demo video is very far from the real thing. I wouldn't be investing.
It is a very niche market... But if you look back at technology history, you'll notice that the tech always seems to come before the applications.
There has already been usage of Oculus Rift headsets during surgical procedures and medical consultations, so that patients don't have to fly halfway around the world to see the specialist for their extremely rare disorder or whatever. I can see contact lens-based augmented reality being very helpful in a surgical procedure.
Its applications are virtually limitless. Everything from manufacturing to gaming to content consumption can change with a good augmented reality device.
Didn't people say the same about Google Glass before it came out? I know, I know - this is very far from Google Glass. But all we have is a really broad concept and some flashy demo videos.
"It's going to change content consumption in a limitless way" doesn't really mean anything. Based on past predictions, the way we consume content is supposed to have been revolutionized about 100 times by now.
It would work great for commercial, medical, and military training purposes. You can use it to train personnel to perform tasks that would generally be considered too dangerous, expensive, or risky to learn first-hand. Doing so could also help lower insurance costs.
That's what I figured. I wish there was some SaaS-ish fund/Kickstarter where multiple small investors could go together for this kind of thing for a nominal fee.
That would probably get shut down pretty quickly, as it would be essentially allowing a group of small-time players to make non-sports-related proposition bets.
Nothing, as far as I'm concerned. Unfortunately the regulatory landscape disagrees.
How betting on the performance of a company is any different from betting on the performance of an athlete or a horse just boggles my mind. Sure, in the short term the variance is lower, but it's still a fucking gamble. It's also the only legal way to gamble in all 50 states.
Wrong? Nothing. It just tends to attract the attention of regulators when done at scale. Even Nevada doesn't allow casinos to run proposition bets on anything but sports.
The money flowing into augmented reality in general isn't about 'augmented reality' as a thing today so much as it's a big, audacious bet on it being the next wave of technology that replaces mobile phones.
Agreed. A functional heads-up display in normal-looking glasses would render almost every smartphone obsolete. Add a good way to do text input (air typing on an AR-projected keyboard?) and most laptops and desktops can go away too.
If we're still poking at tiny screens in 2025, something has gone very wrong.
Meh, I'm not so convinced. AR is certainly a great way to interact with real-time information, but there's a time and a place for that. I don't want to be out at dinner and be interrupted with notifications while I'm having a conversation. Laptops and mobile phones have one huge advantage that AR and even desktops don't have: they're easy to put away. Close the laptop, put the phone in your pocket or in a drawer, and it's gone. I suppose you could do the same with glasses, but they're still more intrusive.
All evidence points to the contrary. There's been a pendulum swing in the younger generations away from omnipresent technology such as mobile phones and facebook. It's a mistake to think all subsequent generations will march toward your definition of progress. For all we know, those generations' definition of progress will be a focus on the physical world over the digital one.
Are you joking? In which world does the younger generation swing away from technology such as mobile phones? Facebook is simply replaced by Snapchat and other social apps among the youth.
Yep, imagine a phone UI you don't have to hold in front of you as you walk across the street, and never have to pull out of your pocket.
Then imagine it putting annotations into the real world - review scores of wines floating next to the bottles as you browse the wine aisle, or yelp stars floating next to the restaurants, GPS arrows floating in space, etc. etc.
All assuming they solve the ridiculously hard power, miniaturization, object recognition, latency, etc problems. Even if you have to be tethered to a PC, though, it has a lot of potentially revolutionary uses.
Funny you mention that. I was just thinking about the book "Rainbows End." In the near future it describes, most people interact via tactile "muscle-movement" based input and contact lens displays.
Wearables/wetware are almost certainly going to be the future at some point. Our grandchildren will laugh when we tell them about having to hold a device in your hand with a tiny screen, where you hunt and peck at an even tinier keyboard with your thumb.
Of course we also won't be used to being bombarded with the inevitable massive, vision-filling AR ads that will come.
I might sound like I'm joking, but as soon as I get an AR headset I'm building a computer vision system to detect billboards and overlay a black rectangle on top of them. It's already possible to fill in items of a given class in images with deep learning, and it'll only get better.
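A minimal sketch of the overlay step, assuming OpenCV and a hypothetical detect_billboards stand-in for whatever detector you'd actually train (the detector is the hard part):

    import numpy as np
    import cv2   # OpenCV, for drawing and image I/O

    def detect_billboards(frame):
        # Hypothetical stand-in: in practice this would be a neural net returning
        # (x, y, w, h) boxes for billboards/ads found in the frame.
        return [(100, 50, 200, 120)]

    def censor_ads(frame):
        # Paint a filled black rectangle over every detected ad region.
        for (x, y, w, h) in detect_billboards(frame):
            cv2.rectangle(frame, (x, y), (x + w, y + h), color=(0, 0, 0), thickness=-1)
        return frame

    frame = np.full((480, 640, 3), 255, dtype=np.uint8)   # stand-in for a camera frame
    cv2.imwrite("blocked.png", censor_ads(frame))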
Agreed. I personally can't wait for the inevitable ad blocking war that ensues.
Unfortunately, if I had to hazard a guess, it would be that the main "application space" for AR won't be the free and open web. It will be the walled-garden app store(s). You'll have your staple apps, but they will all be self-contained, and the owners of the hardware/OS will have strict controls around the ability to do things like block ads from their ad network (which might be the only one allowed on the device).
Big tech companies don't like the users being able to control what they see through a browser that can install whatever plugins the user wants. Make no mistake, they want full control over the experience, and have no qualms about doing whatever it takes to own that experience. To not do so enables their competitors to claim a foothold in their territory.
It certainly is a monster round of funding. But it seems like they have a lot of current and future value. It reminds me of when DST invested in Facebook and I thought those guys were out of their minds. Looking back, they were of course a much smarter bunch who knew how much they would get back. Similarly, many are in it for the potential technology and many are in it for the returns, and they all know what they'll get.
I have reliable sources who have tried the tech. It is real, but many years out from mainstream (5-10 seems like a probable range). However, their first teaser video* is almost definitely fake (if I were to place a bet). It looks like a prerendered video, not an interactive application, or even a static lightfield video.
Those videos look pre-made as fuck. I don't care if it says they're not. If they're not, there's some magic tracking going on as well. Would love to hear about that.
I can't get past the fact that the software, in order to interact with the 3D world, will be expensive and cumbersome to make. Also, people (i.e., your paying customers) born before AR tech generally don't want hardware on their faces.
This, and Hololens, and similar, will fail for a while, and I'm thinking it will be decades. I'm placing my bet now.
If they see it through, the software will eventually be used by almost everyone. And the thing about software is that it's essentially free to duplicate, meaning that it need not be expensive and cumbersome at scale.
If they get it working well with glasses or contact lenses, it will win through. The ability to spin up displays as needed (a massive TV in your living room, a screen in the kitchen, the shower, the ceiling of the bedroom, outside, etc.) will mean that fewer and fewer people rely on physical displays.
Not to mention the availability of more contextual information (tourism, sports, researching, gardening, socialising, etc).
Yes, it's true that the app could be hand-tracking the "recoil", just like it's seemingly tracking his gestures in the 3D Gmail demo.
I still think it's staged. The motions look way too "casual" and not deliberate enough to be interpreted easily in software without some revolutionary tech, in addition to the unrelated revolutionary tech needed to do the AR "live".
The video was created by Weta Workshop, which specializes in special effects: "Providing design and manufacturing for film, television and creative industries for over 20 years."
I would not put much stock (no pun intended) into this unless there are more videos like the first one in the article mentioned.
The gun appears to be a real, physical device. Given that there must be a forward-facing camera in order to interpret the scenery and recognize gestures, it's highly likely that what triggers the shot isn't the subtle pressing of a trigger (unless there's some kind of Bluetooth connection here) but rather the "fake" recoil motion, which the system recognizes as a gesture to trigger the shot.
It would not surprise me much now if they went public (IPO) before releasing the actual product to the market. That would be a nice precedent for the tech startup industry.
With ML's technology, you're not focusing on something close to you. That's a big part of what makes them worth so much money. They're able to project light at real focal planes.