Google Glass API Documentation (developers.google.com)
220 points by aray on April 16, 2013 | 75 comments



"Each step in human technological advancement provides improved methods for the distribution of cat photos. Project Glass is no different."

On a slightly more serious note, I'm glad to see this since I feared that they might keep the API for later, instead opting to try and make a "product" first (a la iPhone 1 or Google+).

This API is definitely not what I was envisioning though - I expected another API add-on to Android, where you can take over and do what you wish with the display, so perhaps it is a bit like webapps-as-apps on the iPhone 1. I'd be interested to see what early adopters do with it and if they find this API too limited.


I was also fairly disappointed; it looks like 5 out of 5 of my ideas for what I wanted to do with Glass can't be done with this API. But I'm hoping they wanted to get some basic, easier-to-implement API out as fast as possible while still working on more interesting support on the side (native apps, in particular).


This is... pretty seriously disappointing, if it's supposed to be the whole thing (or close to the whole thing). I'll give them the benefit of the doubt and assume that they're going to publish the rest later, and that they're not deliberately hobbling it to prevent the public from getting creeped out by the hardware's capabilities.


Keep in mind this is the Google Mirror API documentation, not a Google Glass apps API.

The Google Mirror API is for putting stuff in front of the user's eye. As far as I know you cannot get information from Google Glass with this API.


You can get location information and interact with any media (text, photo, video), along with contacts, via the API.


Glass is an entirely new kind of device; to have good ideas I think we need to wear it and experience the world through it. The API is fairly constrained, but so is the device (eg always in the field of view). And strict constraints, when enforced for good reason, often lead to interesting products (eg Twitter).


See the work of Prof. Steve Mann, specifically "EyeTap". I understand one of his ex-students is on the Glass team, however "an entirely new kind of device" it is not. It has a heritage of ideas and implementations that are around 30 years old.


It would be easier to take Steve Mann seriously if he would just release more/any schematics.


Check his book "Intelligent Image Processing" for info on the EyeTap, plus lots of other interesting stuff.

(edit: typo)


Take him seriously about what? That he has built the devices he claims to have built? That he actually wants widespread wearable computing?


That's true, but Glass is the first time something like this has been mass-produced. It's clearly a version 1 product, even though other people have done similar things before Glass.


Plenty of the awesome things you can do with Android weren't available when it came out. Everything new needs to start small.


There's no augmented reality, but from the docs it looks like you can stream audio and video to and from a service, maintain awareness of the user's location whether they're using the app or not, and push interactive notifications. About all it's missing is a HUD, which will no doubt come as soon as the batteries can handle it.
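
Even without camera access, the push side is pretty simple to play with. A rough sketch of pushing one of those notifications, assuming you've already done the OAuth dance and hold an access token (the token below is a placeholder, and this is untested):

  # Push a notification card to the wearer's timeline via the Mirror API
  # REST endpoint. This runs on your server, not on Glass.
  import json
  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder, obtained via OAuth 2.0
  card = {
      "text": "Your build finished successfully.",
      "notification": {"level": "DEFAULT"},  # chime + light up the display
  }
  resp = requests.post(
      "https://www.googleapis.com/mirror/v1/timeline",
      headers={"Authorization": "Bearer " + ACCESS_TOKEN,
               "Content-Type": "application/json"},
      data=json.dumps(card),
  )
  print(resp.json())  # the created timeline item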


This is so exciting. Just as a completely new user experience paradigm was uncovered when high-quality touch-screen devices like the iPhone first launched, this is yet another milestone in how we will continue to interact with the petabytes of data that we as mankind have digitized.

It's worth pondering how significantly new I/O devices change the game -- the first tty, the commercial keyboard & mouse, the touch screen, multi-touch trackpads, and voice activated smartphones.


Amazing technology aside, this is a pretty disappointing API release from a developer standpoint. Basically no access to Glass's amazing hardware, nor any way to receive user input other than a swipe/tap on the side? I'm really hoping it gets more comprehensive.


Glass will surely be hacked six ways to Sunday, but I doubt the official API will go much deeper than it already does.

The still-disappointing Plus API pretty much tipped Google's hand when it comes to how flexible they want to be in providing APIs for future products. On top of that, there are some pretty substantial privacy issues with giving developers low-level access to the vast amount of personal data Glass will constantly be collecting. I'm already worried enough about Google having that data that I'm sitting out Glass for the foreseeable future (despite suspecting it will be useful for a lot of things), but if random third parties could access that data at a low level I'd be even more worried.


Give it time. The first iPhone only supported web apps, and besides, it's likely that battery life is limited on Glass.


Still better than the SDK shipped in the first iPhone, isn't it? And see where the iPhone is now...


I'm willing to bet that they'll have some Android integration announcements at I/O. The Motorola phones are supposed to have touch controls on the back, which would be a pretty convenient way to interact with Glass.


Actually, from what I see, the only way to receive user input is via user selection of menu options. There don't appear to be any callbacks for swipes or taps on the touchpad.
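
For what it's worth, that does seem to be the intended input model: you attach menuItems to a card, and your service gets told which one the wearer picked via its subscription callback. A sketch (the custom id "check-in" and the notification shape are my reading of the docs, not tested):

  import json
  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder
  HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"}

  card = {
      "text": "You are near the office. Check in?",
      "menuItems": [
          {"action": "CUSTOM", "id": "check-in",
           "values": [{"displayName": "Check in"}]},
          {"action": "DELETE"},  # one of the built-in actions
      ],
  }
  requests.post("https://www.googleapis.com/mirror/v1/timeline",
                headers=HEADERS, data=json.dumps(card))

  # When the wearer selects "Check in", Google POSTs something like this to
  # your subscribed callbackUrl:
  # {"collection": "timeline", "itemId": "...",
  #  "userActions": [{"type": "CUSTOM", "payload": "check-in"}]}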


> It's worth pondering how significantly new I/O devices change the game -- the first tty, the commercial keyboard & mouse, the touch screen, multi-touch trackpads, and voice activated smartphones.

This comment actually explains my complete disinterest in Glass :)

I don't see how voice-activated smartphones have changed the game yet, IMHO they're in the same league as Glass - "this might be worth it later" (especially dubious for Siri). I use two touch-screens and a trackpad every day and yet I could probably go back to only a keyboard and die happy.

What has changed my life was connectedness, and Glass does not do more than a smartphone in that area.


"What has changed my life was connectedness, and Glass does not do more than a smartphone in that area."

Yes it does. It completes the connection between what you see and the rest of your digital world. That's significant.


You do realize that wearable computers are over 30 years old right? See Steve Mann: http://en.wikipedia.org/wiki/Steve_Mann


Can you go out and buy a pair at a store right now?


Can you go out and buy a reel-to-reel tape recorder at a store right now?


Yes.

(http://www.ebay.com/itm/Rare-Akai-GX-4400D-Reel-To-Reel-Tape...)

What do I search for to buy a wearable computer?


(You go out to the Ebay store? Can you tell me where that is?)

My point was, just because something isn't for sale in stores right now doesn't mean it never existed. As far as I know, Steve Mann's EyeTap glasses have never been for sale in stores, but kaolinite's argument was just silly – hence my comment.

If you don't like the 'reel-to-reel tape recorder' example, imagine I said 'enriched plutonium'.


My point wasn't that it didn't exist because it wasn't in stores, but more that taking a device like that to mass market is a much bigger challenge than making it for one person (not that I am saying his work isn't incredible).

When discussing UX and how this will affect the general population, this technology is basically brand new.


Fair enough, I agree.


What would be nice is if Google released an Android app that does the same thing as Glass (i.e. location updates and push notifications), for testing purposes. It wouldn't be as nifty as having the thing on your head, but pretty much all the use cases covered by the API would work on that.


They do, sort of: https://developers.google.com/glass/playground

It's a jsfiddle-like sandbox that behaves like a Glass (device) frontend.


Yeah, I assume that works if you want to test how cards display, though there does not seem to be any interaction.

Considering the API only allows viewing cards, taking pictures, sending your current location and taking textual input, there's nothing that prevents them from having a Glass implementation on Android to test things out, other than the time/resources to develop such an implementation.


You mean like Google Now, which seems to show the same cards as the Glass demos: http://www.google.com/landing/now/


I've been a Google Glass skeptic. But I just got back from Mexico, where I was walking all over waving my phone at various signs so Word Lens could translate them for me... skeptic no more! Word Lens is a killer app for the platform. Except now I see that there's no API to access the camera. Seems like a huge mistake.


One of their example API uses includes users taking photos with the built-in camera and sharing them with your service. See "add a cat to that": https://developers.google.com/glass/stories


If the app has to take a photo at the user's behest every time a word needs to be translated, it's going to get very clunky very fast.

The whole point of technologies like Glass is that they should be as unobtrusive as possible and just work by themselves when you need them to.


No camera API access? That's ridiculous! Not only does it restrict about 70% of the possible usefulness of a head-mounted computer (so it's basically just a fancy news-feed display), but it'll allow competitors to move into the arena having a clear advantage. The only reason I see for Google to be doing this is resource allocation, but I feel like image processing could be offloaded to a linked smartphone, if necessary.


It looks like there will be one standard way to take photos, and you'll register an Intent to handle what you want to do with those photos (see 'Add a cat to that'). Considering people will be wearing these things 23/7, it's not unreasonable that they wouldn't give arbitrary apps shutter control right away.
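
If I'm reading the docs right, the "Add a cat to that" flow works by registering your Glassware as a share contact; photos the wearer shares to it then arrive via your timeline subscription. Roughly (the id and URLs below are invented, and this is untested):

  import json
  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder
  HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"}

  contact = {
      "id": "add-a-cat",                        # hypothetical identifier
      "displayName": "Add a Cat",
      "imageUrls": ["https://example.com/cat-icon.png"],
      "acceptTypes": ["image/*"],               # only offer it as a share target for photos
  }
  requests.post("https://www.googleapis.com/mirror/v1/contacts",
                headers=HEADERS, data=json.dumps(contact))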


  The Google Mirror API allows you to build web-based services, called Glassware, that interact with Google Glass. It provides this functionality over a cloud-based API and does not require running code on Glass.
This API is focused on pushing info to Glass, rather than interaction (which I assume will come later).


Apparently, it is possible to set up your future Google Glass today: https://glass.google.com/setup


I mean, you can go through and set up your network... but the setup process gets blocked when you need your actual device to sync.


> but the setup process gets blocked when you need your actual device to sync

Not entirely; it's obvious that it's just pinging some endpoint for a protocol buffer. The server will return a protocol buffer with a "continue" message, but you can just spoof that.

Also: who will get the first Glass XSS bounty?


Is anyone else having issues accessing the API? https://code.google.com/p/google-glass-api/issues/detail?id=...


Yeah, having the same issue. Initially I thought it was US-only, but even with a VPN and a fake US account it doesn't work.


As per the update to that issue, the API is currently only available to developers with physical access to Glass. It is a shame they didn't make this clearer on the developer documentation page itself...


Look, hopefully they'll improve on this a lot, but let's be honest: this API is pathetic. If there are significant technical (power, weight, etc.) reasons why this is all that can be done, then they probably aren't ready for prime time. I'm trying to stay positive and imagine the future for this product is bright, but wow wow wow this is bad.


So the screen resolution is 640 x 360 px. Looks like a lot of interesting applications can be built with that real estate! Weather, maps, etc. are some that come to mind right away.


I can't find any info on how to write a Glass app that actually interacts with the camera or does geolocation. Am I missing something?


You can get a user's geolocation (https://developers.google.com/glass/location). Couldn't find anything about the camera.

However, the API doesn't seem to offer any way to run code on the actual device aka "apps".
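
The location part does look usable from the server side, though; per the linked docs, something like this should return the last fix Glass reported (untested, token is a placeholder):

  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder
  resp = requests.get(
      "https://www.googleapis.com/mirror/v1/locations/latest",
      headers={"Authorization": "Bearer " + ACCESS_TOKEN},
  )
  loc = resp.json()
  print(loc.get("latitude"), loc.get("longitude"), loc.get("accuracy"))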


It looks like the Mirror API allows you to register callbacks, like Android Intents, to handle events like 'position changed' or 'new photo'. They're really locking down user interaction on the device itself, probably due to processor power, and to keep the experience consistent. Not that I agree with them, but they definitely look afraid of getting a bad name if some devs produce crap - hence the very limited scope.
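
The registration itself looks like a plain REST call: you subscribe a server endpoint to the timeline and locations collections and Google POSTs a JSON notification when something happens. A sketch (callback URL and verify token are placeholders, untested):

  import json
  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder
  HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"}

  for collection in ("timeline", "locations"):
      sub = {
          "collection": collection,
          "callbackUrl": "https://example.com/mirror/notify",  # must be HTTPS
          "verifyToken": "shared-secret",  # echoed back so you can sanity-check the caller
      }
      requests.post("https://www.googleapis.com/mirror/v1/subscriptions",
                    headers=HEADERS, data=json.dumps(sub))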


This might be what you're looking for: https://developers.google.com/glass/media-upload


Parent's link is how to get media onto the Glass device. Here is how you get media from the Glass to your server: https://developers.google.com/glass/v1/reference/timeline/at...
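
So pulling a shared photo down to your server would look roughly like this, with the item and attachment ids coming from the notification payload (placeholders here, untested):

  import requests

  ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder
  HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}
  BASE = "https://www.googleapis.com/mirror/v1/timeline"

  # 1. Fetch the attachment metadata (contentType, contentUrl, isProcessingContent).
  meta = requests.get(BASE + "/ITEM_ID/attachments/ATTACHMENT_ID",
                      headers=HEADERS).json()

  # 2. Once processing is done, download the bytes from contentUrl
  #    (the OAuth header is still required).
  if not meta.get("isProcessingContent", False):
      with open("photo.jpg", "wb") as f:
          f.write(requests.get(meta["contentUrl"], headers=HEADERS).content)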


The specs on the display are pretty bad. In the API examples, the one with the most text is the shopping list with 5 lines of short text. I want to look at longer text on a HUD device. Is this a limitation in the ability to create HUD hardware with higher resolution? Also, can someone with experience in this area explain the pros/cons of the "how it appears" spec? I'm talking about the spec where they say "looks like an X-size display X distance away". Here are the two HUD specs:

Glass: 640x360, like a 25" HD display from 8 ft
Vuzix M100: 400x240, like a 4" mobile screen at 14"

If I place a ~4" mobile device 14" from the top right of my field of vision, I think I could live with that amount of obscured vision, but is it feasible to create that with 720p resolution? Why would you want a 25" display 8ft away? That seems like it would just be good for placing display ads and not really for most useful things aside from quick notifications.


Client libs for Java, Python, Go, PHP, .NET, Ruby, Dart but no JavaScript?


Go? This is awesome to hear. Unlike Android, Glass will be an instant win for us Gophers. (Well, once they've crossed 1M+ devices sold, so that the potential audience for hackery becomes actually interesting.)


This could be a chance for Go to become more mainstream if there weren't so many languages supported, but I guess it's not on Google's agenda to disseminate their own language.


I believe the Go library is automatically generated from the API, effectively just extending the existing Go/Google API client library. Which is not necessarily to downplay it in any way (I love Go!), just that it's not vastly exciting.


Is anyone else having trouble enabling the Mirror API in their developer console? It doesn't show up in mine after creating a new project for it.


It doesn't actually show up as a service yet - follow the instructions on this page [0], see "Getting started".

Basically, create a project in the API console, create an OAuth 2.0 client ID, add [1] to the valid JavaScript origins, and then paste your client ID ({number}.apps.googleusercontent.com) into the playground [2].

[0]: https://developers.google.com/glass/playground-usage

[1]: https://mirror-api-playground.appspot.com

[2]: https://developers.google.com/glass/playground


I'm having the same issue


I believe those who have Glass are the only ones who can get an API key.


I had been wondering about this spec:

  Send full screen images and video at a 16x9 aspect ratio.
  Target a 640x360 pixel resolution.
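
Nothing in the docs seems to enforce it, but the spec suggests center-cropping to 16:9 and scaling to 640x360 if you want full-bleed images. With Pillow, for example (a quick sketch, not from the docs):

  from PIL import Image, ImageOps

  GLASS_SIZE = (640, 360)  # per the spec quoted above

  img = Image.open("input.jpg")
  card_img = ImageOps.fit(img, GLASS_SIZE)  # center-crop to 16:9, then scale
  card_img.save("glass_card.jpg", quality=90)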


I'm hoping there's an easy 'be quiet' option. They're 100% right that apps shouldn't be spammy, but we additionally need a standardised option to mute or uninstall apps with a couple of swipes. It's the most walled garden ever, but for something like this with so much potential for spammer abuse, I think we need it. At least they'll be hackable.


I wonder if they also plan to allow native apps eventually.


So... no augmented reality with this API. Sad.


Does this mean they are going to start shipping for the Glass Explorer program?


Yes. They will be shipped in batches, as they come off the production line.

http://techcrunch.com/2013/04/15/first-google-glass-devices-...


I am sure Glass has GPS; does it have accelerometers as well?


And a compass for direction would be useful.


Can anyone get the demo to work? I'm getting a 500. https://glass-java-starter-demo.appspot.com/


Hm... only support for Java and Python. I hope they release a version for Go since it is Google's language.



To be clear, there are only example apps in Java and Python, but an assortment of client libraries are available.


It would be nice to see one in Node.


I used to think it was ethically questionable to add cameras and trackers to wild animals just so we could investigate their habits. Now human animals are doing this to themselves voluntarily.



