I've seen it demonstrated on Nokia devices before, but it's much more accessible to the public on an iPhone.
I find it clever that they were able to circumvent one major obstacle of doing augmented reality. Usually, you need to interpret something about the video view from the camera in order to figure out where to place the 3D models in the scene.
Instead, they rendered the overlay while ignoring the scene entirely, and just used orientation and distance to place the subway signs. That works great for cities like New York or DC, where the layout is pretty much a grid with no dead ends.
Which, in a way, makes it seem less impressive to me. They could've dumped the video feed, and it would function the same. So it's less augmented reality than clever use of the accelerometer and compass.
The best thing about this is how simple it probably is underneath, and how awesome it looks anyway.
Take the current user location and a database of subway station locations. It's a small enough list that you can just rip through all of them and sort by distance. Project badges onto a sphere or flat onto a circle depending on the tilt of the phone. Orient by compass heading. Overlay the display on the video feed. Done.
There's no fancy image processing going on, no actual pathfinding, just a nice combination of the new 3GS APIs and publicly available data. Good show.
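The "rip through all of them" step really is that simple. A minimal sketch in modern Swift (the app itself would have been Objective-C in 2009); `Station` and its fields are hypothetical stand-ins for whatever the real app bundles:

```swift
import CoreLocation

// Hypothetical station record; the real app presumably ships a
// bundled list of stops with names and coordinates.
struct Station {
    let name: String
    let location: CLLocation
}

// Brute-force nearest-first sort: with only a few hundred stations,
// there's no need for a spatial index.
func stationsByDistance(from user: CLLocation,
                        in stations: [Station]) -> [Station] {
    stations.sorted {
        $0.location.distance(from: user) < $1.location.distance(from: user)
    }
}
```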
One caveat: The iPhone API does not allow for a live image overlay from the camera. If you need a picture, you make the API call, Apple's own UI takes over, and you get a file in return.
To get the "live overlay" you need to hook into undocumented APIs. Considering how long we've been asking Apple for precisely this feature, I doubt it's going to happen.
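For contrast, the documented flow looks roughly like this (shown in modern Swift for readability; the era's code would have been Objective-C against the same UIImagePickerController API):

```swift
import UIKit

class SnapshotViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // The sanctioned path: present Apple's own camera UI and wait.
    func takePicture() {
        let picker = UIImagePickerController()
        picker.sourceType = .camera   // Apple's UI takes over from here
        picker.delegate = self
        present(picker, animated: true)
    }

    // All you get back is a finished still image, after the fact --
    // no sanctioned hook for live frames to draw over.
    func imagePickerController(_ picker: UIImagePickerController,
        didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let photo = info[.originalImage] as? UIImage
        picker.dismiss(animated: true)
        _ = photo // hand the captured image off to the rest of the app
    }
}
```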
So is Apple approving apps like this now? I was under the impression that they made use of undocumented APIs and, as such, weren't getting into the App Store?
If they are approving them, then I'm getting right on this bandwagon!
This is an amazingly creative use of all the iPhone technologies, and New York is one of the densest places in the world, so this would definitely be very helpful. Not for locals (who have the lines memorized), but for travelers who get confused and turned around in the maze of skyscrapers... this will be a great help.
I wonder if the technology is easily extensible to new maps and locations. Post it openly and watch the world start creating further content... not just more subway maps, but more kinds of geographical data (restaurants, hotels, etc.).
When I lived in New York I looked into mass-producing a compass marked "Uptown/Downtown" (in place of North/South) and "Eastside/Westside" (in place of East/West), with the North/South line correctly offset 29.8 degrees from true north.
You could hand these to visitors and they'd instantly be able to tell which direction to walk when stepping out of the subway.
I smell an iPhone app. It uses GPS to determine what city you're in and then displays directions in local terms. I want one. This could even work in Boston, where the North End is mostly east of City Hall and the South End is a couple of miles west of South Boston.
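The mapping itself could be a tiny lookup. A sketch in Swift; `CityGrid` and its values are hypothetical, with Manhattan's offset taken from the parent comment's 29.8-degree figure:

```swift
import Foundation

// Hypothetical per-city config: the rotation of the local grid
// from true north, plus the local names for the four directions.
struct CityGrid {
    let name: String
    let gridOffsetDegrees: Double
    let labels: [String]  // clockwise, starting from "grid north"
}

let manhattan = CityGrid(
    name: "Manhattan",
    gridOffsetDegrees: 29.8,
    labels: ["Uptown", "Eastside", "Downtown", "Westside"]
)

// Map a true compass heading (degrees) to the nearest local label.
func localDirection(trueHeading: Double, in city: CityGrid) -> String {
    let gridHeading = (trueHeading - city.gridOffsetDegrees + 360)
        .truncatingRemainder(dividingBy: 360)
    let index = Int((gridHeading + 45) / 90) % 4
    return city.labels[index]
}

// e.g. localDirection(trueHeading: 30, in: manhattan) -> "Uptown"
```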
GPS in a well-covered area is accurate to around 10m or so. Combine that with a smartphone's internal compass and it knows what direction you're facing, so it can pop up a floating box in roughly the right direction. Simple idea, and quite a nice indication of where things in the augmented reality space are going.
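The geometry behind that floating box is straightforward: compute the bearing to the target, compare it with the compass heading, and map the difference into screen space. A rough sketch, where `fieldOfView` and `screenWidth` are assumed placeholder values rather than anything measured:

```swift
import CoreLocation

// Great-circle initial bearing from the user to a target, in degrees.
func bearing(from: CLLocationCoordinate2D,
             to: CLLocationCoordinate2D) -> Double {
    let lat1 = from.latitude * .pi / 180
    let lat2 = to.latitude * .pi / 180
    let dLon = (to.longitude - from.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let deg = atan2(y, x) * 180 / .pi
    return (deg + 360).truncatingRemainder(dividingBy: 360)
}

// Horizontal screen position for the floating box: how far the target's
// bearing sits from the center of the camera's field of view.
func screenX(targetBearing: Double, heading: Double,
             fieldOfView: Double = 60, screenWidth: Double = 320) -> Double? {
    var delta = targetBearing - heading
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    guard abs(delta) <= fieldOfView / 2 else { return nil } // off-screen
    return screenWidth / 2 + (delta / fieldOfView) * screenWidth
}
```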
Sure, but getting a decent GPS fix in the metro canyons of big cities isn't a given (even with A-GPS).
You could make a lot of money by solving the problem of how to reliably determine location in major cities, indoors and out (and not just by relying on known Wi-Fi hotspot triangulation like Skyhook does).
Just off the top of my head: Google could probably do this quite easily, especially in an augmented reality application.
They already have Street View data that's highly accurate, and they also have A-GPS data on roughly where you are. They could refine this by comparing the image coming from your phone to known Street View images... The computing power behind this would be pretty insane, though.
Yeah, especially for a mobile device. It's a non-trivial problem, and even your solution has the same problems as the Skyhook approach: the data goes stale and needs to be refreshed continually.