I am biased as an AR developer, but I think Chris underplays the impact of AR as a computing and communications platform.
AI - and eventually AGI - is an application of computation. It's revolutionary and I think will transform everything but it's not a computing platform. That is, AI itself doesn't have an interface, so it still needs a platform to run on. AR and eventually BCI and wetware are the platforms for interfacing with it.
Cars aren't a computing platform either - they will certainly utilize and be transformed by new waves of computation capabilities, but cars aren't a replacement for a personal computer.
Same with drones and IoT. I think wearables will fall to AR as well - or rather integrate with it as a peripheral.
So when asking what is next for computing, in the context of the evolution of interface/platform from mainframe to microcomputer to smartphone, AR is unquestionably "next" as a platform.
I'm not quite sure what to call it, other than operations research.
People focus a lot on consumer/entertainment applications, but I think heavy and high-tech industry will rapidly adopt AR. Imagine a mechanic working their way through an interactive checklist while completing some maintenance task on a jet engine or other complex machinery. Even with hardware as rudimentary as Google Glass that's useful. But then consider Boeing in the moment that said engine unexpectedly catches fire. Imagine that they can go back and review footage of every time someone touched a component on that engine, or use basic machine vision techniques to confirm the position or state of some part. Now imagine they have that kind of visibility into the majority of what people's hands do on the shop floor or in hangars.
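To make the "basic machine vision" part concrete, here's a rough, purely hypothetical sketch using OpenCV template matching. It assumes the headset footage is just ordinary video frames and that you have a cropped reference image of the part in its correct state; the file names below are made up.

    # Hypothetical sketch: flag frames of recorded maintenance footage where a
    # reference part does not match its expected appearance. Not a real AR
    # pipeline - just OpenCV template matching over ordinary video frames.
    import cv2

    def part_looks_in_place(frame_bgr, template_bgr, threshold=0.8):
        """True if the reference template matches well somewhere in the frame."""
        frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        template_gray = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)
        scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        _, best, _, _ = cv2.minMaxLoc(scores)
        return best >= threshold

    video = cv2.VideoCapture("engine_task_recording.mp4")   # made-up file name
    template = cv2.imread("clamp_reference.png")             # made-up file name
    suspect_frames, i = [], 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if not part_looks_in_place(frame, template):
            suspect_frames.append(i)
        i += 1
    video.release()
    print("Frames worth a human look:", suspect_frames[:20])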
There will be social issues and debate over this, which we're seeing with police body cameras, but I think ultimately safety will trump people's reluctance to have their every task recorded. Certainly in industries with high safety/risk implications, we'll see similar strong arguments for going there.
Slightly less complex, consumer-focused possibilities:
- Cook like an expert: with AR you don't have to glance at a screen or cookbook; all the steps are visible in your view or whispered in your ear as you need them, with reminders to take the pasta off the boil at the correct moment, etc.
- Repair or perform maintenance on a car - when you open the engine bay, the steps to change the air filter are available for your model of car.
"Cook like an expert": There's much more to cooking than just following steps. AR don't help much preventing you to slice the onions instead of your fingers. The wrong temperature, an egg that is larger or smaller ... it's timing and knowledge/experience, not a rule book that makes a great dish.
Same for the car maintenance: perhaps you could do it yourself and save a few bucks. But would you repair your brakes yourself with AR but no knowledge or understanding of cars and maintenance, if the lives of your children depend on those brakes?
Both scenarios might work for an expert or at least someone who knows the fundamentals, but not for an unprepared consumer.
But for cooking, the AR could read the temperature (you can do this with some tools already, minus the AR), estimate the size of the food and how long to cook it, maybe overlay a guide line on food you're cutting to help you cut it more evenly... etc.
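And the reminder part needs very little intelligence at all. A toy sketch: the food labels, times and size-scaling rule below are all invented for illustration, and a real headset would estimate the size factor from what it sees rather than being told.

    # Toy sketch of the "take the pasta off at the right moment" idea.
    # All numbers and the size-scaling rule are invented for illustration.
    import time

    BASE_MINUTES = {"spaghetti": 9, "soft_boiled_egg": 6, "chicken_breast": 18}

    def estimate_cook_time(food, size_factor=1.0):
        """Scale a rough base time by how large the item looks vs. typical."""
        return BASE_MINUTES[food] * size_factor

    def remind_when_done(food, size_factor=1.0):
        minutes = estimate_cook_time(food, size_factor)
        print(f"Timer set: {food} for ~{minutes:.1f} min")
        time.sleep(minutes * 60)   # a real assistant would schedule, not block
        print(f"Take the {food} off the heat now")

    # e.g. the headset judged this egg to be ~15% larger than average:
    # remind_when_done("soft_boiled_egg", size_factor=1.15)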
One part of this that will need a lot more work is the visualization of all this data. All the seminal dataviz works (Tufte, etc.) emphasize the 2D world.
Not sure if there is a single one, just like there is not one killer app for the smartphone - though arguably it was the integrated camera that was the breakthrough for the iPhone.
We are betting the killer app for mobile AR is for e-commerce/shopping.
The killer 'app' for the smartphone is being a pocket-sized PC. It has enough compute power (coupled with the internet) to take over mapping, entertainment, and entertainment planning (let's find a restaurant near us, make reservations, then see if there are any shows or movies nearby). It changed everything. I just wish the damn things didn't have a phone built in.
What's the shopping angle for AR you are envisioning? We can take photos of QR codes or products, or just talk to the thing and have it bring up whatever we're interested in (I do this all the time in a store to figure out which of N products I want to buy).
One of my hopes is facial recognition, to help me remember who this person is who seems to remember me so well. Maybe a couple of notes on the person that I keep on the side as well...
I often totally fail to recognize people outside of the contexts and places where I usually see them. So a fix would be amazing.
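The recognition part of that is already within reach of off-the-shelf libraries. A rough sketch with the open-source face_recognition package, assuming you keep labeled photos of people you've met (all names and paths below are made up, and each reference photo is assumed to contain exactly one face):

    # Hypothetical sketch of the "who is this again?" feature using the
    # open-source face_recognition library. Names, notes and paths are made up.
    import face_recognition

    known = {
        "Sam from the sailing club": "photos/sam.jpg",
        "Priya (ex-colleague, Acme)": "photos/priya.jpg",
    }
    known_names = list(known)
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
        for path in known.values()  # assumes one face per reference photo
    ]

    def who_is_this(frame_path):
        """Return the stored label for the first recognized face in the frame."""
        frame = face_recognition.load_image_file(frame_path)
        for encoding in face_recognition.face_encodings(frame):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            if True in matches:
                return known_names[matches.index(True)]
        return None

    # print(who_is_this("headset_snapshot.jpg"))  # -> "Sam from the sailing club"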
A killer app would be something that you use 5-10x a day like maps, camera or email. I think it will be contextual based on what you are doing. At home it will be assistance (search, instructional) and entertainment based. At work it will replace monitors. In transit it would replace phone/dashboard GPS.
The big problem I foresee with AR is that, frankly, it needs to be always on to be effective.
But have you tried keeping a smartphone out of sleep for a day? You'll be lucky to get through a full work day without needing a charge. And that's on a device you can put down while doing something else.
It seems like the problem you are describing is battery life, not the use case of being always on.
You are correct though that AR is best when it's persistent. Consider though that for parts of the day (at work for example) you could plug it in while in use.
That depends on the type of work. AR at the desk is superfluous.
AR at the assembly line, on the shop floor, or any other job where standing and walking is done for most of the day is a very different thing. But then plugging in becomes a real problem.
That said, perhaps we will see some development in hot-swapped batteries. I recall seeing a video some years back about a no-brand tablet someone found in a Chinese market. It was powered by two Nokia batteries, either of which could be hot-swapped while the other remained in place.
Hardly. You can maximize and customize your space however you like - you can literally change everything about it. For example, if you work in an open-plan office, you can put up a virtual divider between you and the people next to you temporarily so you can focus. You can also take your whole "desk" with you everywhere with the layout you like - like moving your laptop - except everything else comes with it.