One of the best I’ve seen is for manufacturing and inspection. For example, it highlights the next set of holes to place bolts into, then gives the tightening order, and so on. It eliminates the need to think about each step.
If you're eliminating thinking, you might as well use a robot.
I don't know about the ideal use cases, but it certainly appears that the SDK and dev environment are going to be the best and most accessible. So if a great use case does appear, it will probably be on the AVP.
You can use a robot for screws, but things like wiring can be difficult or impossible for a robot. Not to mention that for low volume, setting up robotics isn’t worth it.
I think one of the better potential use cases for AR is manufacturing work instructions. Typically these are in binders or displayed on screens, but highlighting assembly steps, part orientations/locations, etc. virtually would be especially nice for training or for infrequent operations that don’t happen at a seated workstation. I believe Glass and some other products were exploring this space, but it could be that it’s a lot of work to set up.
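As a rough illustration of what the data behind such instructions might look like (a sketch only; the type and field names here are hypothetical and not from any shipping AR toolkit), one step could bundle its text with the part locations to highlight, in the order they should be worked:

```swift
// Hypothetical model for one step of an AR work instruction.
// Positions are relative to a scanned fixture anchor so the headset
// can overlay highlights in place.
struct HighlightTarget {
    let partID: String            // e.g. "hole_3"
    let position: SIMD3<Float>    // metres, in the fixture's coordinate frame
    let note: String?             // e.g. "torque to 25 Nm"
}

struct WorkInstructionStep {
    let title: String
    let targets: [HighlightTarget]   // highlighted in the order given (tightening order, etc.)
}

// Example: the bolt-tightening step mentioned upthread.
let tightenFlange = WorkInstructionStep(
    title: "Install and torque flange bolts",
    targets: [
        HighlightTarget(partID: "hole_1", position: SIMD3(0.00, 0.00, 0.0), note: "torque to 25 Nm"),
        HighlightTarget(partID: "hole_3", position: SIMD3(0.04, 0.04, 0.0), note: "torque to 25 Nm"),
        HighlightTarget(partID: "hole_2", position: SIMD3(0.04, 0.00, 0.0), note: "torque to 25 Nm"),
    ]
)
```

Keeping the steps as plain data like this is also what would make it cheap to author or update without re-rigging anything, which is most of the setup cost being described.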
Fundamentally, though, I see AR as how we give our AI personal assistants our eyes, ears, and a real-life action-space evaluator.
Everything from highlighting objects you’re searching for to giving instant information about anything you look at and want to know about.
It will also give you perfect recall.
Hey Siri, where did I leave that toilet brush two weeks ago? How many pages of [Book] did I read on average last week? What was that pattern [person] was wearing when we were at the farmers market last week?
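A minimal sketch of how that kind of recall query could be answered, assuming the headset keeps a timestamped log of things it has recognized (everything below, including the ObservationLog type, is a hypothetical illustration, not an existing API):

```swift
import Foundation

// Hypothetical timestamped log of objects the headset has recognized.
struct Observation {
    let label: String        // e.g. "toilet brush"
    let location: String     // e.g. "hall closet, bottom shelf"
    let timestamp: Date
}

struct ObservationLog {
    private var entries: [Observation] = []

    mutating func record(_ label: String, at location: String, on date: Date = Date()) {
        entries.append(Observation(label: label, location: location, timestamp: date))
    }

    // "Where did I leave that toilet brush?" -> most recent sighting on record.
    func lastSeen(_ label: String, before date: Date = Date()) -> Observation? {
        entries
            .filter { $0.label == label && $0.timestamp <= date }
            .max(by: { $0.timestamp < $1.timestamp })
    }
}

var log = ObservationLog()
log.record("toilet brush", at: "hall closet, bottom shelf")
if let seen = log.lastSeen("toilet brush") {
    print("Last seen: \(seen.location) at \(seen.timestamp)")
}
```

The hard parts, of course, are the recognition itself and doing this privately on-device, not the lookup.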