A hacker creates his own version of Google Project Glass (thenextweb.com)
122 points by celebration on April 10, 2012 | 46 comments



I certainly don't want to bad-mouth a guy who builds something cool, but there's a significant difference between this and Project Glass. He states in his blog post (http://www.willpowell.co.uk/blog/?p=210): "The Vuzix glasses are driven by stereoscopic feeds, which are fed by the HD cameras."

What he's seeing is coming through the cameras. He's not looking at reality with an overlaid display; the glasses he's using are not see-through, they are displays. That's a very different experience from augmented reality, and it reduces your view of the world to the cameras' HD quality.

That also explains why at the beginning of the video we see him put on the glasses and there's no transition to seeing what he sees through them.
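
For the curious, the core loop of a pass-through rig like this is straightforward. Here's a minimal sketch in Python with OpenCV - single camera and a hard-coded text overlay for brevity (his actual rig uses a stereo pair and a real UI layer):

    import cv2

    cap = cv2.VideoCapture(0)  # one of the HD cameras
    while True:
        ok, frame = cap.read()  # the wearer never sees the world directly...
        if not ok:
            break
        # ...only this frame, with the UI drawn onto its pixels
        cv2.putText(frame, "12:05  Sunny 18C", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        cv2.imshow("glasses", frame)  # pushed to the non-see-through display
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()

The point being: any latency, dropped frame, or resolution limit in that loop is latency, dropout, or blur in your vision - which is exactly why it's a different beast from a see-through overlay.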


This got me thinking about whether "fly-by-wire" vision will ever be a viable (or even preferable) alternative to human vision, and how various sight impairments would be handled by such a system. Certainly myopia would be a trivial fix, since the image could be placed directly in front of the eye, but how would you correct for hyperopia without a convex lens between the eye and the display?
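
To put rough numbers on that asymmetry (back-of-the-envelope only, assuming a myope whose far point is 0.5 m):

    % a -2 D myope: far point at d_far = 0.5 m, corrective power
    P = \frac{1}{f} = -\frac{1}{d_\text{far}} = -\frac{1}{0.5\,\mathrm{m}} = -2\,\mathrm{D}
    % but if the display's virtual image is placed at d \le d_far,
    % it is sharp with no corrective lens at all

A hyperope's far point is virtual (behind the eye), so no real placement of the image helps; you genuinely need positive optical power, i.e. that convex lens.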


>"without a convex lens between the eye and the display"

I hardly see that as even a problem :)

Prescription displays? Sounds like a big profit winner.


Now that's a great idea for us vision-impaired geeks; reason enough to own a pair.


With a holographic combiner.

We already use fly-by-wire vision in some cases, even with the engineering drawbacks of current devices: namely, military night vision.


Contact lenses mandatory for users with impaired vision?


Consumer partially transparent head-mounted displays are sort of available. SiliconMicroDisplay has one that is bulky and not really transparent enough to walk around in, but it could be used to approximate Project Glass: http://www.siliconmicrodisplay.com/st1080.html

A useful wearable augmented reality experience would require a retinal scanning display, which is presumably what Google's prototypes use. Brother has been showing off a product like this, but it has been nothing more than expo fodder for the last few years: http://www.brother.com/en/news/2010/airscouter/


http://www.youtube.com/watch?v=9I0hF0cbw8E

Retinal scanning display. Hm… I wonder how good this really looks? I didn't find a review video by an independent party, so I'll remain doubtful until I can experience it myself.


More importantly, it explains why both the world and the overlays appear in focus. I have yet to see an explanation for how Project Glass does the same. Apparently it's based on the same technology that went into Babak Parviz's contact lenses, which use Fresnel lenses to make the overlay appear in focus when your eye focuses on real-world objects several feet in front of you. Fresnel lenses also impair image quality. And you'd need another lens on the opposite side to undo the projection on light coming in from the world around you.

I'm wondering whether the reason that Google's glasses only use a small display in the corner of one eye is that the see-through image quality is simply not good enough to justify covering the wearer's entire field of view. If you look at the photos of Sergey Brin wearing them, they are not very transparent, at least from the angle the photos were taken: http://www.theverge.com/2012/4/6/2929927/google-project-glas...


One of the professors at my school, Steve Mann[1], has famously been working on wearable computing devices for three decades now. There's a small group of niche hackers[2] who are into this kind of thing, but it's definitely not a new phenomenon.

[1] http://en.wikipedia.org/wiki/Steve_Mann

[2] http://www.wearcam.org/computing.html/


Thad Starner, shown in [2], is actually a member of the team working on Project Glass.

http://www.cc.gatech.edu/~thad/


I actually saw Thad at school walking around with his "glass". The Google version looks way better and a lot less geeky (and less creepy) than the thing he used to wear, to say the least.


Thad's been walking about Georgia Tech and other places with a wearable computer since 1993. I remember how big his previous version was, but the current version isn't too bad:

http://3.bp.blogspot.com/_sqPHIOY4PBQ/TTtAjzFy6TI/AAAAAAAAAv...

http://hci.stanford.edu/courses/cs547/Resources/Pictures/tha...


I'm not sure the last time you saw him with it, but it seems he's iterated a couple times. The most recent version isn't too far off from the Glass concept. http://www.sciencephoto.com/image/349511/large/T4200406-Wear...


I've been wanting to build myself an EyeTap for the longest time. It must be vindicating for Steve Mann to see this idea of wearable computing going "mainstream".


If Google used Mann's "diminished reality" idea, they could replace ads we see in everyday life (billboards and the like) with some ratio of information chosen by the user and ads from Google. Thus, using g-Glasses, one might see a net reduction in the amount of advertising one is subjected to. And Google could tell billboard companies to pay a toll if they ever want to get through the filter.

And to the HNers who lamented that a Google wearable would be plastered with ads: what if it reduced the overall number of ads you saw?
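
A hypothetical sketch of the ratio idea - every name and number here is made up, nothing from Mann or Google:

    import random

    USER_CONTENT_RATIO = 0.8  # fraction of billboards replaced with user-chosen info
    TOLL_PAYERS = {"acme-billboards"}  # advertisers who paid Google's toll

    def replace_billboard(owner, user_feed, google_ads):
        # billboards that paid the toll pass through the filter untouched
        if owner in TOLL_PAYERS:
            return "original ad"
        # otherwise, mostly user-chosen info, occasionally a Google ad
        if random.random() < USER_CONTENT_RATIO:
            return user_feed.pop(0)
        return google_ads.pop(0)

    print(replace_billboard("random-corp", ["next bus: 4 min"], ["a Google ad"]))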


IIRC Mitch Altman (Noisebridge co-founder and all-around general awesome dude) is pretty into VR stuff: https://en.wikipedia.org/wiki/Mitch_Altman

>"it's definitely not a new phenomenon" Agreed 100%, but Google and Apple could push it into mainstream...and for most people, I'm guessing that it'd be their first encounter with AR / VR.


Mann is a Media Lab alum. The Media Lab did extensive work on wearable computing a decade ago [1]. Rich DeVaul [2] is another alum who worked extensively on wearable computing there, and is possibly involved in Project Glass.

[1] http://www.media.mit.edu/wearables

[2] http://devaul.net/


I'd love it if, instead of 'close menu', you just said 'thanks' to finalize an action and leave the current context (or hide the entire UI). That seems more natural; it's what I'd do in person if I asked someone for the weather or the time.


But then, if you're in a discussion with someone and say thanks at the end, wouldn't it annoy you when it closed the menu you were in?

I think they're trying to ensure the trigger phrases don't cross over with everyday speech.


Well, in just the same way, you could be in a restaurant and have just finished ordering food.
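
One way to limit the crossover - purely a hypothetical sketch, the names and threshold are invented: only treat "thanks" as a dismissal shortly after the device itself has answered, which is roughly when you'd thank a person:

    import time

    CLOSE_WINDOW_S = 5.0  # accept "thanks" only soon after a response

    class GlassUI:
        def __init__(self):
            self.last_response_at = None  # when the device last answered

        def on_device_response(self):
            self.last_response_at = time.monotonic()

        def on_speech(self, phrase):
            if phrase.strip().lower() != "thanks":
                return
            recent = (self.last_response_at is not None and
                      time.monotonic() - self.last_response_at < CLOSE_WINDOW_S)
            if recent:
                self.close_ui()  # a real "thanks" aimed at the device
            # otherwise: probably thanking the waiter, so ignore it

        def close_ui(self):
            print("UI dismissed")

It wouldn't catch every restaurant "thanks", but it narrows the overlap with everyday speech considerably.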


The marvel of what Google is trying to accomplish is in making the product small and consumable, which is a challenge the article doesn't address in its 'David vs. Goliath' language. It takes a lot of effort to keep size, capacity, and power consumption in mind for a product with more mass appeal—not to mention the UX and style elements involved.


Exactly this. Everything needed for the iPhone/iPad already existed off the shelf years prior as well. It takes a lot of oomph to condense all that into a beautiful little device that doesn't suck in real-life, day-to-day use.


I still think Google made a big mistake showcasing their product a year early. It reminds me of when they showed ChromeOS a year and a half before it actually launched. What's the point? Sure, you might get some benefit from the feedback, but I believe the disadvantages of showing it so much earlier far outweigh the benefits.

And user feedback doesn't even mean that much when you're trying to build an entirely new product category (ok, maybe the basic idea has existed for a long time, but I think we can all agree their concept of AR glasses is a lot more modern and practical, and might actually turn into a popular commercial product if they do it right).


Apple understands this, not showing a new product or concept until it is either unavoidable or available. What the Glass video shows is an idea whose time has come: all the pieces are ready; someone just needs to fuse them together in a robust, affordable manner that - and here's the hard part - behaves the way people will want it to once they see it done right. Having released the video prematurely, Google now has everyone jumping on the concept, trying to beat it to the punch (hey, all the pieces are out there, ready to assemble). The question is who is the precognitive telepath able to grok what users will actually want when they're given what they say they want.


I think your statement is not well thought out - and by "your" statement I actually mean "almost everyone's"; I just semi-randomly picked yours to comment on because it captures my counter-argument well. You're saying Google released the video prematurely, based on the assumption that Google doesn't want everyone trying to beat it to the punch. But it's obvious that Google is well aware every piece needed to realize this tech is already out there. What I don't understand is why it isn't immediately obvious to everyone that Google wants this idea out there: for people to experiment with it, and to share their thoughts about the UX of such a device, which is exactly what Google needs at this point to develop the product.

The linked video and the comments it generates are an excellent example of this.


These glasses do look a bit silly with how far they come off his face and the large microphone hanging down, but still, bravo. I love seeing people hack together the things that large companies tout as the next big thing without releasing any real evidence that they work.


At least it's real, and for something built in a day it's more than sufficient. We honestly have no idea how well Google's device even works.

That being said, this is a perfect example of how Google entirely screwed up here. This would have been the ultimate keynote reveal at Google I/O. Not saying it still won't be, but now competitors have a great idea to build off of. Google should have just kept quiet until the product was at least at the manufacturing stage.


I don't know if you knew this, but Google, Inc. isn't Apple, Inc. Two separate companies! I know it's surprising; it took me a while to figure out myself. You may also be surprised to know that there are actually more ways to release products than how Apple does it. In fact, there are many ways, each with various tradeoffs.


The Apple way being the one that results in multi-day campouts and kidney sales. Of course the trade-off to announcing/shipping simultaneously is…?


The cost of keeping engineer/supplier secrecy [financial, intellectual, and morale costs], being unable to use real people as beta testers, more difficulty involving outside research centers, and the encouragement of an insular culture. "Oh, is that public now? I don't even know anymore."

In the case of something that has a developer culture around it (say, a chumby or something), you have more developers ready to go at launch, since they've had time to think about killer apps.

One could point at how early-stage multiplayer videogames are developed for a good counterexample.


A better, nicer world.

A world where people hack together cool things using Google tech, or hack together cool things a bit like Google tech, or hack together cool things better than Google tech.


Awesome effort. I'm a bit concerned that the reporter effectively says "to all naysayers, Will says it's true so it must be," and I'm slightly skeptical since Will hasn't put up information for others to build their own, or source code, which I'd expect unless he plans to sell the product himself. That said, I think I believe this is genuine - full kudos to the guy for his efforts if so.


I can absolutely see how this is legit. Did you see the AR system he's the lead developer on? CEO Vision: http://www.willpowell.co.uk/blog/?p=194


Agreed; I'm a believer ;)


I think I believe...

Your confidence is overwhelming.


Perhaps I'm a little over-cautious (then again, perhaps not ;) ).


What he did is very impressive, and I understand he was attempting to replicate what appeared in the Project Glass video, but why does all the graphical interaction have to obscure the user's vision (in both Project Glass and his live demo)? The icons are laid out horizontally, too close to the center of your view. If the whole point is to have this augmented experience, the user still needs a mostly unobstructed view of their surroundings. For the most part, fake video-game HUDs handle this pretty well; Project Glass just has the interface getting in the way of everything. Why not sit down somewhere and use another device at that point? You obviously won't be able to do anything else. I'm not walking down the street or through a store with a big icon in the center of my field of vision.
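
To make "too close to the center" concrete, a toy angular check (the numbers are illustrative guesses, not anything from Google):

    import math

    CENTRAL_CONE_DEG = 10.0  # rough central cone where detail vision happens

    def intrudes(x_deg, y_deg):
        # angular distance of a HUD element from the line of sight
        return math.hypot(x_deg, y_deg) < CENTRAL_CONE_DEG

    print(intrudes(0, 2))    # True: concept-video icon, dead center
    print(intrudes(20, 12))  # False: top-corner placement, out of the way

A HUD that keeps everything outside that central cone can stay visible without blocking what you're actually looking at.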


One difference between this and Google Glass is that Glass is monocular. Unless you're blind in one eye, you will be able to see "through" the projection.

I also don't think the representation in the Glass demo video is accurate: I'd expect the images it projects to add to the background rather than replace it entirely. I don't think they could make an "opaque" projection if they wanted to.
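
The physics is easy to picture in code. A toy compositing sketch with NumPy (made-up pixel values, nothing to do with Glass's actual optics): an optical see-through display can only add light to the scene, so what you see is a clamped sum, not a replacement:

    import numpy as np

    world = np.array([200, 180, 90], dtype=np.uint16)   # bright background pixel
    overlay = np.array([80, 80, 255], dtype=np.uint16)  # projected UI pixel

    # optical see-through: the projector's light adds to the scene (clamped)
    seen = np.clip(world + overlay, 0, 255).astype(np.uint8)

    # video pass-through could genuinely replace the pixel instead
    replaced = overlay.astype(np.uint8)

    print(seen, replaced)  # [255 255 255] vs [ 80  80 255]

Which is also why an optical overlay can never render black, and washes out against a bright sky.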


The Glass video is a publicity/concept video from an ad agency, not a product demo from the engineering team.


As far as the hardware goes, a lot of this stuff is already out there in different forms (Epson Moverio, Vuzix as shown in this demo, ...), albeit not in a form factor that would allow for mass-market penetration. That's where Google, and hopefully Apple & Microsoft, will do well, allowing app makers to come in and add value.

That being said, there is still a great deal of computer vision work that will need to be done to successfully implement "Strong AR". His video was a nice proof of concept, but then, that's all the Google video was too.



I wonder if the real-time video was recorded through the AR glasses themselves and, if so, whether there's similar stutter in what Will Powell sees.

In any case, pretty cool 'prototype'!


What? This seems about as legitimate as the guy who built the flying suit. I would have thought there'd be more skepticism from HN about a product like this that was supposedly built in a day.

At the very least, he puts the glasses on his face, but the camera is filming from above his head...


What I really liked about this was the UI - finally, an OS that doesn't look like Windoze, etc.

Clean, simple, it just works - and it flies off the desktop when it's not needed.


That's the Google Glass OS; he just copied it. It's not his own UI.



