Google relaunches Glass for businesses (x.company)
484 points by tsycho on July 18, 2017 | 264 comments



This is so !@%!@ cool.

Check out the A/B test of a technician with / without the software referenced in the article:

https://www.youtube.com/watch?v=E5gXuZp25f0

Then here's a video that gives a sense of the software's interface:

https://www.youtube.com/watch?v=z5HOHNECW20

Very workflow oriented with nice communication and lookup features.

This is the kind of small optimization that is going to be revolutionary in driving macro-level productivity.

Amazing!


Is there a particular reason the manual and the tools have to be a flight of stairs away from the technician? I'd wager that the comparison would lose some steam if the tech could just turn his head to the right.

And as far as the software goes, there's a lot more there on display than just Glass. For instance, there are at least two cameras in this screenshot that are just magically implied to exist - one taking high quality video of the supervisor, and one taking high-quality video of the employee: http://i.imgur.com/qLjCoou.png


"Is there a particular reason the manual and the tools have to be a flight of stairs away from the technician? I'd wager that the comparison would lose some steam if the tech could just turn his head to the right."

I've never worked on planes, but quite a bit on cars and semi-trucks. You can get yourself into some tight spots. There may not be room to bring a manual with you, let alone turn your head to look at it. The closest I've had to glasses is bringing a smartphone to look at a manual or photos - but having it right on your head and hands-free would be a huge help.


Yea, also notice how in the Glass-wearing video the technician brings up the tools he needs and sets them on the floor, whereas in the other video he goes back and forth to fetch and return each tool.

Pretty lame comparison.


Reminds me of the 409/Windex/etc. cleaning commercials where they compare "other cleaning products" with their own. It's all staged.


Could it be that the glasses also tell you which tools you'll need, allowing you to plan ahead, whereas the alternative is to fetch a tool when you need it?


Generally when you're doing shit like this you read the destructions first and grab the tools. That is, if your destructions are good and actually list the tools and sizes, which most don't... And that wouldn't change whether it was in Glass or not; that's a content problem, not a form factor issue. I will say I'd rather bring these hardened devices in than get oil on my smartphone, which is what I did last week.

I do like sticking my smartphone in holes to take photos of spots I can't otherwise see.

Edit: this is for personal automotive work


Wait, isn't that the same person in both videos? In that case, this is just a marketing spot demonstrating how it "could" work, not an "A/B test".


Looks like it could be the same person, but wouldn't that make more sense? If it's different people then you're not just changing one aspect, the glasses in this case.

The worker presumably has to do the same work many times. So he isn't doing the work in the video for the first time regardless. It's something he does regularly. They could do the work with Glass a couple of times and compare it with non-Glass work times. I think this (as in the video) is the best way to A/B test it.


If the worker did the Glass test after he did the non-Glass test, his productivity/efficiency increase could be attributed to memory and not just Glass.


Yeah. Hopefully they did it multiple times, so memory wouldn't be much of a factor.


This is aviation work, you always always always check the work you are doing with the manual. One mistake could be someone's life. The entire aviation industry is checklist and procedure driven.


They even have a specialized English dialect called Simplified Technical English to help eliminate confusion.


Have they done any studies on the benefits of using the simplified language vs the time investment it takes to produce the manuals to spec?

I'd imagine most of the gain comes from companies with a high number of ESL workers - but it sounds like it may be a tool that a bureaucratic, control-obsessed management culture likes, while being largely just busywork parading as productivity gains.

It could also make the technical manual writers lazy, or lead them to edit less, because they lean on the language instead of investing real thought into making their communication effective and easy to understand.

As machine translation of English continues to improve, I'm curious how useful it will be. And the feature of "reducing ambiguity" (according to Wikipedia) is something that can be solved in many different ways without having to invent a whole new simplified language subset for all communication.

Either way, that's an interesting example of how seriously they take this stuff.


I'm not sure.

I don't work in aviation, but you can produce much better documentation by following at least the spirit of this vs. the specific grammar and vocabulary.

Engineers and IT people are often not very strong in writing ability, and there are many non-native speakers in technology. I've seen situations where "cute" documentation with TV references (infrastructure placement was captured by types of Disney vs. Looney Tunes characters) and lots of implied context caused real confusion. Having a style guide that forces simplicity can be high value.


Given what I know about the aviation industry, it's extremely unlikely that this was the first and second time he has done this task. We might be looking at the millionth time and million-and-one time. Memory is probably not a factor.


I suppose if those technicians are meant to go step-by-step through a checklist, and not to just work from memory, then this video still makes sense as an A/B test.


You'll still anticipate the next step, which will speed things up. Pilots must follow a checklist, but they also memorize the steps and have their hands in the right spot when it's time to call off an item.


Fair enough. Though as 'skinnymuch points out, if they're doing this regularly, then memory shouldn't play any role in the speed.


Yup. If it's an experienced technician, then memory is a factor for both tests.


I assume this wasn't the first time that person has done this task, with or without Skylight. And maybe the statistic is based on an average of them doing this task multiple times, both with and without Skylight? And the video we see is just showing an example of each?


"A/B test" was clearly the wrong phrase yes, but I don't see how that reduces it to a "marketing spot". Demonstrating a products effectiveness through a meaningful comparison that shows it actually working in practice is a fair representation.


Is there a good reason they don't put that large manual up on a rack by the actual work area?


I've built a few assembly lines for automotive parts. These machines do one job, and probably only handle a couple part numbers: They might press a gear onto a motor shaft at one station, press on the plastic shroud at another, and add the grease and fasteners at the third station. These do have the operating instructions (Graphical, not English) laminated and displayed prominently above the machine. Someone with zero experience can walk up to the machine on their first day, rotate every couple hours to avoid task blindness, and assemble the parts. After lunch they'll be on a different line with new instructions.

But the bay pictured looks more like a mechanic's shop than an assembly line. You can't pull a car into a vehicle bay and refer to a big picture of "how to fix car" on the wall - the information is in dozens of 3-ring-binders, vehicle repair documentation, and on the computer. On that stand and for that jet engine, there's far too much information to put it on the wall.


Good info there -- but I think I wasn't clear, I was suggesting moving the ring binder onto a table/rack right by the work-space, using something like a music stand.

Seems like if the mechanic just had to turn around to access the ring-binder, instead of walking down a flight of stairs, the efficiency difference would be less.


Likely because the platform he is on is not static, and is likely a standard part for multiple technicians, possibly with more than one on it at a time. It looks like the engine is stationary, and the station moves to where he is needed. If that's the case, they could use some semi-specialized platforms for different uses, some with tables for manuals and parts, some without to accommodate more people at once. Then again, it's possible this worker needs a different set of tools and manuals for a different job a couple times a day, and having a bunch of work platforms might be less efficient in other ways than having a separate mobile tool/manual bench and work platform. The reason one isn't relocated on top of the other often might be that it causes accidents when people change the loadout of the platform.

Or maybe it's as simple as them having determined in the past that making people move to shift tasks makes it less likely that they will attempt multiple at once from memory, and mess it up. Sometimes what looks like an inefficient process is actually serving another need you haven't considered, and is more efficient in the long run. Having technicians document each task as done and take a picture to confirm it might yield the same benefits, without the forced short-term inefficiency.


Or maybe they're exaggerating a minor issue to make the product look better, like a late night infomercial.


>Or maybe they're exaggerating a minor issue to make the product look better, like a late night infomercial.

Most of those infomercial products are actually designed for the disabled. I don't bring that up just because it's a really important lesson (which I think it is), but it also has a parallel here. Minor issues for people in one situation may be huge encumbrances to someone else. What's more likely is products have their audience exaggerated and widened, but very often a real problem is being solved. It just may not be a problem all or even most of us have.


Or maybe you're minimizing a real issue due to anti-marketing bias.


> Or maybe they're exaggerating a minor issue to make the product look better, like a late night infomercial.

Sure. I'm not trying to suggest it is an enhancement to the work process, just that it may be, even considering the GP's sentiment that "Seems like if the mechanic just had to turn around to access the ring-binder, instead of walking down a flight of stairs, the efficiency difference would be less." I was just providing additional information that may not have been considered.


Interesting thoughts; I agree there's a lot of hidden complexity in optimizing workflows. Would love to see further discussion on this if you have any good sources.


Most of my experience comes from actually working closely with people to provide solutions for the in-house webapp I maintain for work. Many a time have I implemented what I thought was a definite enhancement only to find people weren't using it. After sitting down with people to find out exactly why, I would invariably learn a new facet of the problem that changed my understanding, and made the solution I implemented at best a trade of one problem for another of roughly equal or worse annoyance.

Rinse and repeat through a few different business areas in my career, and you can't help but learn a little humility and come to respect the power of truly understanding the problem space before embarking on a project of any magnitude. A powerful lesson, but unfortunately easy to forget.

That said, one of the replies to my comment (which was subsequently deleted) was from an HN regular who said "Kaizen principles tell us that there are many optimisations that are known to the people doing the job, but not to the people who have the power to implement them." I wasn't familiar with the term, but I am familiar with what it refers to, and my bet is you'll find good information branches along that path.


You're totally overcomplicating this -- all they need is literally a music stand that they keep with their binders -- when you take a binder out, bring the stand with you to the platform and mount the binder on the stand on the platform, so that you don't have to go back and forth.


Even if I immediately accepted that a music stand would be sufficient for what might be a 20 lb technical manual, when placed on a platform that jostles and moves (as is visible in the video), that completely ignores the second half of what I noted.

I don't think it's overcomplicating an issue to look at a situation that appears to be sub-optimal and search for reasons why what appear to be simple solutions might not actually yield the benefits you would assume, because in real life, often they've been tried, and they don't. It could be Google is purposefully presenting an unrealistic scenario, or it could be that this large multi-billion dollar company hasn't bothered to optimize this integral process in this simple way, but when I see what appears to be an easily fixed problem in an unfamiliar area of expertise I prefer not to immediately assume everyone else that looked at the same scene wasn't capable of seeing what appears obvious to me.


Instead of a music stand, you could use a modified camera tripod or something -- I don't think that's too complicated. Anyways, point is a lightweight device capable of holding a 20 lb binder isn't new technology -- you could probably buy some sort of off-the-shelf thingy for under $100 which would do it.

You're right -- you're not overcomplicating things so much as trying to give this video the benefit of the doubt. But it looks to me like the benefit of the glass is greatly exaggerated by having the binder with instructions be located off the platform. Obviously glass is a superior solution here, but not as much as presented IMO.


I think the question of where the binder is located is missing the forest for the trees. Who wants a binder when you can have the information right in front of your eye? Especially when you've lugged binder #3 up there only to find it refers you to some spec in binder #2.

To be able to check the torque spec on a bolt while getting it started and tightening it up would be so nice.

Yes, the example video about the time going up and down the ladder is stupid - but I think tech like this has a huge future in shops and I'm sure a bunch of other work places.


As an amateur musician...I don't think so. Big heavy binders fall off music stands easily. They get stuck on page turns. The music stand itself is either very top-heavy (for the professional ones in concert halls) or it's flimsy and won't hold a heavy binder (for the folding travel music stands). These are all the last qualities you want when working on an extremely expensive, precise piece of equipment.

One of the most useful smartwatch apps I got was a metronome app, simply because it's hands-free. I wouldn't spring for consumer Glass (simply because it was an order of magnitude more expensive than a smartwatch), but it's even more useful for musicians. Imagine never needing to do a page-turn again!


I feel like this comment missed the point of my original comment. By "music stand", I meant some device capable of holding the binder up. Consider instead a camera tripod with the top modified to hold a binder.


Aren't there plenty of iPad apps for that?


The point is that you don't want to have anything on the music stand other than the music, because it has a tendency to fall off the music stand as soon as you turn the page. And if you could avoid having the music on the music stand as well, that'd be great, since then you don't need the page turns at all.

Smartwatch metronomes are also great in that they can tap you on the wrist instead of making an audible click. It's sometimes hard to hear the click if you're playing loud, and it limits their use to practice since you don't want anyone else hearing the click.


My PPOE was selling tablet computers for vehicle diagnostics 20 years ago that automatically detected the type of car and displayed the correct information for it.


PPOE: Previous place of employment? Never seen that acronym before and mistook it for PPPoE at first.


So he needs an iPad? And it wouldn't obscure some of his field of vision.


The cynic in me thinks they did that on purpose, the further they put the manual from the task the greater the efficiency gain of their software.


Actually if it was next to him, the traditional way might be faster due to possible problems with voice recognition. I would imagine it would be especially bad in loud places.


If it was right next to him, it might be faster, but they might have found that it was likely to cause people to merge steps, and that led to an overall lesser quality and accuracy in a system where those are at a premium. It's possible that what looks like an inefficiency was a purposeful bit of process engineering to hack human nature for a better overall end result. With step-level verification through photographs, it's possible they might get similar results with this system but without the forced inefficiency.


When you have one wire in one hand out of a braid of 90 and wonder where to plug it, you don't want to have to get up, flip through the manual, find the right wire reference (try remembering "AX45Y", "45YT5Y" etc all day long), read the corresponding graph and go back to your station.

The alternative with Glass is, read out loud the reference you have in your hand and it tells you where to plug it right where you are, hands-free.


This is really cool, but when this technology trickles down to burger shops, I would be worried.

http://marshallbrain.com/manna1.htm


Reminded me of the same thing! Came here to post the exact same link, scary.


That sounds very efficient. I worked in fast food and it was always a frantic mess with high school kids trying to spit out custom orders while people pour through for a few hours at dinner.


Here's a Boeing video on how they use Glass for their wiring harnesses. Pretty fascinating.

https://www.youtube.com/watch?v=qTblKJjTadQ


This video is really good for grokking how the tech can help, showing practical examples with the workers. Better than the other videos linked, which are a bit too much marketing and less legit info. Not that this video isn't marketing also, of course.


I recommend watching this one - it gives concrete examples of what the workers use Glass for, and how they access (verbally) and retrieve (visually) information.


Am I the only person who immediately thought: WHY ISN'T YOUR BOOK NEXT TO YOU INSTEAD OF ON THE CART ONE LEVEL BENEATH THE SCAFFOLD? Was this seriously the A/B test?


"Watch how much faster a technician wearing Glass is, than one who isn't wearing Glass and is also missing an arm and is hand-cuffed to a hyena."


and who lost his arm because Glass obscured his view of that bandsaw blade.


"What's the wifi password? My torque wrench lost connection again..."


That was my reaction... about 3 to 4 years ago when I saw the Epson Moverio BT-100 engine maintenance training demonstration (https://www.youtube.com/watch?v=eVV5tUmky6c). It's not as sleek as Glass, but it's in the same spirit.

Point being that Google isn't quite breaking new ground by selling augmented reality to businesses. Rather, it seems that they're trying to make the most out of their Glass product now that it has tanked as a consumer product.


It wasn't a consumer product, it was a dev kit.


Glass isn't a consumer product either, it is just a dev kit.


I think he meant that Glass was a dev kit, which is fair enough. But it was intended to be eventually released as a consumer product. Somewhere along that path, the idea lost its charm to the execs at Google.


The tech was not ready for a consumer Glass yet; going the business route will keep it alive. In 5 years it will have a better camera, processor, and larger battery. Now I expect it to permeate into a consumer product organically over the next 10 years as the tech keeps improving and it is adopted for other things. I would love to have something like this while watching sports in the stadium, where I could see live stats. With 5G internet and machine learning/AI, it is something that will be possible - something like the view of a robot in movies like Terminator. By then most of the 20-25 year olds will be the Facebook generation, who have been monitored all their lives in different ways and won't have the same privacy concerns.


Glass is not AR.


The article says that a high skill, high knowledge, safety critical task performed on expensive machinery saw productivity gains of 10%.

It seems that for low knowledge/low skill tasks the gains would be less (because there is likely less need for the operator to switch their information context during the tasks).


Low knowledge tasks aren't a bottleneck in contemporary manufacturing though, at least in so far as "low knowledge" correlates with being easily automated.

Tightening a screw or riveting a steel frame for example require little knowledge, but on the other hand they're easily automated. Or rather, more easily automated than reassembling and disassembling an engine.


Human-centric workspaces and robot-centric workspaces are vastly different due to different capabilities of humans and robots. Using those glasses to improve reliability and speed of a worker on a "low-skilled" checklist-following job may be cheaper than redesigning the entire workspace and workflow around industrial robots. This would make sense as an incremental improvement for tasks already done by humans.


Yes, that's what I meant. Any productivity gains from a HUD are likely to be in high skill/high information tasks and the article states that so far they were modest.


Oh, I see.

To be fair, there might be some more productivity gains lurking around the corner if augmented reality allows a tighter integration of each process in the factory line.

Any time lost searching for anything on the factory floor could be reduced with augmented reality by peppering the world with "quest markers".

Besides the 10% gain on figuring out which parts of the engine you're supposed to either screw in or unscrew, there might be a gain when searching for your wrench, when searching for Joe, when trying to find out where exactly the new parts are, and so on.


I was expecting something a bit more amazing, like image recognition of what the person is seeing and overlaying holograms over and around what the person is doing to indicate information and next steps. This is why it feels to me like MS HoloLens has more potential than Google Glass.


Amazing, at least until someone adds Giphy search to the thing!

Later there's running and screaming...


Did he do the work for the first time with Glass, or after doing the work multiple times with the manual? It looked like he still went to check the manual even with Glass at some point.


After using YouTube many times to help with car problems, I could see this being helpful... or any instructional video, when you are working on something away from a computer.


It's easy to armchair quarterback this stuff after the fact, but this is where Glass should have started. The price point, social stigma/issues, and use cases all screamed "business applications!".

Consumer tech may be where the glamour and scale are, but it's not always the best market entry point.


Ha! I interviewed at google and told them glass was a horrible consumer product and that they should have started with a market that wouldn't mind looking stupid (or skiers who already wore goggles so they wouldn't look stupid at all). The point I made to them was that they engineered an incredible device but hadn't validated the market. I suggested it would have been better to start with something like skiers because they already wore goggles and helmets. So, you wouldn't need to get the product so small to see how people reacted to it.

Needless to say, I didn't get the job. It was really funny to watch Glass (which is a cool technology) totally fail because it's a stupid consumer product. And, I'm no Steve Jobs. It was absolutely predictable that Glass would never work as a consumer product.

And, if you want to make it work for manufacturing, I think you'll need to scale up the form factor. Why wouldn't you make it a safety goggle too? Think about all of the engineering effort that went into shrinking this into a small, completely impractical, package.

Anyway, I think that this was not only predictable but it also shows a company culture where people can't tell the emperor that he's naked. It's not like I was the only person that took one look at glass and knew you couldn't leave the house wearing one. But, that message was pretty actively suppressed at Google.


Xoogler here. You are very, very wrong if you think there is no dissent within Google. There's plenty.

That said, this is a company that has massively succeeded with other bets that were "absolutely predictable they would never work." So just because something looks questionable doesn't mean it gets shot down.

(Now, IMO, Glass was a mistake - but for entirely different reasons than you describe. An elite fashion product cuts against Google's reputation as something for everybody, even IF it had succeeded.)


I can't say whether you failed the interview because of that but Google is a company that believes that their superior technology validates itself.

Sometimes it's the pressure of "if you can't sell them in millions, we don't want to do it at Google". This means Google is a great company to make acquisitions that they can scale, e.g. Docs, Maps, YouTube, Android.

Google has more money than they know what to do with. They can't experiment like a startup because failed products hurt their brand.

Their best bet is to invest in startups that use their stack and if successful, acquire and scale the shit out of them.


> Google is a company that believes that their superior technology validates itself.

This is a good point. I bet the team that made Glass looks at what a technical achievement it is and rates the project as a complete success. I don't know how many millions they spent to get the form factor so small, but I really would be surprised if anyone caught any flak for it.


Not sure why you got down-voted, everything you wrote is true.


Except for the parts where he claimed that his opinions were the reason for him failing the interview, or the fact that he failed the interview was indicative of some company culture.

Unless you believe that he's an omniscient psychic who somehow has insider knowledge of his hiring process.


Or maybe it was just said in jest. Not everything has to be so black and white.


Wasn't that the point tho? Basically Google went:

"We have this cool thing, but we're not sure were it's most applicable. Lets just put it out in the world and see what people do with it!"

Which is what they did. The data they gained from that experiment no doubt led to this.


That's what they did after it was clear it was done as a piece of consumer hardware.

It's easy to forget how hard Google pushed Glass to consumers. They parachuted people out of a helicopter onto the roof of the Moscone Center wearing them. They invested a vast amount of money building out massive floating barges to use as showrooms, which they quietly mothballed and sold. They allowed Robert Scoble to take a picture of himself in the shower with one (some might say this was the worst crime of all).

If Google wanted to "just put it out in the world" they wouldn't have invested so much money in their consumer push. Now that I think about it, that Google I/O in 2012 was a bit of a disaster all around for consumer hardware, because it had both the Glass and the Nexus Q. At least Glass actually shipped.


Promoting it to a room full of developers is not a "big push to consumers".

Robert Scoble is not really a consumer tech reporter either. He's more like a futurist. Most of the things he likes to talk about are things you can't buy.

Maybe because you got marketed to, you are thinking it was consumer marketing, but consider that you were being marketed to as a developer, not a consumer.


If you followed the news at the time there was a ton of coverage in mainstream media with bold claims about how society was going to be transformed and it was way outside of just developer circles. For example, the New Yorker is not typically considered a developer site and yet they have a bunch of articles like http://www.newyorker.com/tech/elements/glass-before-google and http://www.newyorker.com/magazine/2013/08/05/o-k-glass.

Similarly:

http://www.vogue.com/article/the-final-frontier-google-glass... http://www.vogue.com/article/fka-twigs-throughglass-google-g...

https://www.nytimes.com/2014/03/25/technology/biggest-eyewea...


Of course those articles were written with the goal of getting people to read it so it's not like they were grounded in reality.


I'm not sure how that connects to the question of whether it was promoted outside of developer circles.


"If you followed the news at the time there was a ton of coverage in mainstream media with bold claims about how society was going to be transformed" You cannot control what the media decides to hype their purposes. It was not as if Google itself made those same claims - that wouldn't be confirmation that the goal of the product was to transform the lives of every consumer. If they were throwing a bunch of ideas around to see what sticks that is a far cry from a purportedly failed "massive consumer push"


They were marketing to developers, but they were marketing it as something that would be useful for consumer applications. They've admitted that that was a mistake, at least in the state Glass was in when it was introduced.


The king of successful consumer products, Apple, doesn't ship out public prototypes to developers to figure out what it can do. It's a terrible strategy for launching a consumer product and one reason Google sucks at consumer products.


Good thing this isn't a consumer product then.


And the glamor. If you started on the manufacturing side, it wouldn't have any caché when walking down the street. But if you started with a bunch of models on a runway wearing these, you would.

They tried going lux consumer and failed. But if they went the manufacturing route first, they would've burned any chance of the other.


I don't agree.

They could've used the manufacturing side to enhance the tech, get the components to a cheaper price point, maybe even slim down the whole thing and then when they felt ready to enter the consumer market, spin it off as something new under another consumer-only brand.


Or they could have let someone else take all of the "glasshole" flak, then arrived after that with a trusted, safe, Google product that addressed the concerns.

As they say, you can spot the pioneers because they're the ones with the arrows sticking out of their backs.


ITYM cachet, not caché.


Except you don't really know that. Hindsight 20/20 means that the current place seems inevitable. Here's an alternate future:

- People were initially apprehensive of the high price, but a small dedicated fan base bought it up
- Shortly after launch, Google drops the price by $200
- People deride its feature set, commenting on the things it "should" have included
- Despite that, the interface is very good
- A year later, the next version is released, fixing all the problems people complained about
- The item becomes a must-have tech, front-page Wired articles are written about it, etc, etc


The difference is the original iPhone was actually useful for things people actually wanted to do, and reasonably priced for those uses. They sold 1M units in the first quarter, making it one of the most popular phones in the world at launch.

Apple spent many years creating the iPhone. First they created a tablet computer; Jobs rejected it as not ready, not good enough. They then built a phone based on the iPod touch wheel; Jobs rejected it as not good enough. He let the tablet team try to build a phone, and eventually, after much work and improvement, he finally shipped it because it actually worked well for consumers and could do many things consumers wanted (phone, texts, music, web, etc).

Google got Glass working and said, what will consumers use it for? We don't know, so let's dump prototypes on developers and have them figure it out!

Eventually some company, probably Apple, will build something like Glass that consumers will want, but it won't be Google. They don't get consumers (except for high functioning types) and their product development/approval process is a mess.


Were you intentionally drawing parallels to the iPhone here?


Yeah. I really don't get why they focused on its gargoyle feature, which (rightly so) raised serious concerns about privacy, and downplayed AR. The obvious target market, other than geeks, would have been bikers. They'd find both gargoyle and AR very useful for swarm coordination. And military, of course. Perfect for infantry. But that's probably in quiet development.


> which (rightly so) raised serious concerns about privacy

I always thought the privacy concerns were overblown:

If it's about users surreptitiously recording others, wearing the really conspicuous gadget known to contain a camera on your head is a really ineffectual way to do that. A smartphone in a shirt pocket would be better, and surveillance devices intended to be concealed better (and cheaper) still.

If it's about Google or app makers getting recordings, that doesn't seem nearly as bad as the various always-listening voice recognition tech in popular use today. Perhaps it might if people were wearing them 24/7, but they don't have the battery life for that.


It was never about secretly taking photos. It was all about having an obvious contraption on your head that might or might not be recording. That makes people a) uncomfortable and b) conclude that you either don't care or just lack the social skills to realize that. It's not like recording secretly with a concealed cellphone, it's like overtly aiming your cellphone camera at someone while interacting with them, whether it's actually recording or not.

One might argue that people should get used to that, and at some point they probably will, but it turns out it's a somewhat more difficult task trying to adapt people to your product than vice versa.


It has a light on the front that comes on if the camera is active. Do you think most of the people who were uncomfortable were unaware of that, or didn't trust that people using it wouldn't disable the light?

The former would make a lot of sense to me. That it has a camera is obvious, and knowing how the light works requires a modicum of research. The latter, not so much, as we're back in the realm of secret recording.


Almost certainly almost everybody would be unaware that there's a light.


Anyone who wanted to record routinely would disable the light.


Overblown or not, they were there.

I do agree that ubiquitous smartphones are just as bad.


I agree in principle, but I would personally never return to a doctor who had one of these things on their face (like the one in the article photo). I see all the use cases for field operators, but it remains (to me) as creepy in any interpersonal setting as it did with the consumer launch.


> this is where Glass should have started. The price point, social stigma/issues, and use cases all screamed "business applications!".

You've put the cart before the horse. The price point is not fixed. You don't make something and then look around for a convenient market. Google Glass was also designed backwards: they maximized screen quality and watched it fail in the market due to high price and shit battery life, when what they should have done was design up from a 12+ hour battery life.


I agree that this is the better starting market for Glass.

I think they saw the difference between Apple and Blackberry and believed that penetrating the consumer market first would drive demand within the business market.


Many people said this on the initial run/introduction of Glass. Industrial applications are definitely where they should have started. I'm just glad they've made a comeback and will hopefully get broader use in the medical field and military as well.


On the few rare occasions I saw someone wearing Google's specs, the first thought that crossed my mind was "douchebag glasses" :|


I've never understood why they insist on having the device be visually asymmetric. Just put a piece of plastic on the other side that is non-functional, and the "cyborg effect" basically goes away. The human brain hates asymmetric faces. Such a stupid oversight, may have been enough to save the consumer effort if they did this from the get-go.


Leave out that 'piece o' plastic' stuff; the extra room can be utilized as extra battery.


That's how the Snapchat Spectacles work: one side contains the camera, bluetooth and other electronics, the other side contains the battery.


Helps balance things too, you'll be less weight-constrained with symmetric design.


That'll increase the weight though.


I think it’s better to have balanced weight, even if it means more. While it might be annoying to have more weight on the bridge of your nose and over your ears, it’s even more annoying to have an off-center moment of force.

It's like carrying two 12-packs of beer, one in each hand, rather than one.


I think having a piece of nonfunctional plastic on the other side would be silly and, by definition, useless. I think they look pretty sleek, especially in an industrial setting - also, asymmetry doesn't always look bad. I'm reminded of BMW motorcycles:

https://www.youtube.com/watch?v=N9SIS4izLqE

and:

http://www.motorcyclenews.com/news/2009/march/mar2709-histor...


I feel about 30% confident that a "silly" piece of plastic on one side would have made it possible for Glass to deliver on its original vision of a mainstream consumer device, since people would feel way less awkward using it if they didn't look like they were part of the Borg collective. (And before you say "but cost", most likely a mainstream consumer version of Glass would be priced way below the initial explorer versions.) In that sense, it isn't "silly", and my guess is that arguments like yours are the reason it was never attempted.

Design requires fitting form to function, and that often means you add non-functional, ie "useless", parts to ensure the interface between human and device is ideal. Aesthetic appeal is a huge part of that, one could argue any component put on a device strictly for aesthetic appeal is "useless." Yet, in many cases, this "useless" component is strictly necessary for a successful design. (Ie, one that people actually use.)

For something you wear on your face, Glass screams out that it was under-designed. Humans are hardwired to prefer symmetric human faces, studies have shown this. So why would you design a device you wear on your face to be asymmetric? Seems obvious.


This + strapping a goddamn camera to your face that could be recording all the time.

People walked into bars and clubs with these. Then bouncers kicked them out.

If people use your product and they are labelled assholes, clearly you have not done market research.

Apple would have made it aesthetic as hell before a rumour even got out.


No one (that I know of) is strapping BMW motorcycles to their faces, though.


>>Glass is also helping healthcare professionals. Doctors at Dignity Health have been using Glass with an application our partner Augmedix calls “a remote scribe”.

My primary care doctor has a human scribe. The scribe is a recent graduate (BS), planning on going to med school next year. Being physically in the room, watching the doctor work is a great benefit to her. I'm not sure she'd benefit as much from watching a live stream.

Additionally, as a patient I wouldn't be comfortable being recorded.


Just disrobe while I record you with my cyborg glasses. Now, please describe your embarrassing medical problem. (However, it seems like a great idea for other situations.)


I wonder if there would be room for a public key / dual encryption for the video produced that both the doctor and the patient would need to sign off on to view the video, or something like that, to add some kind of viewing trail/authentication/authorization.

I mean, who's to say any given doctor's office doesn't have a hidden camera/microphone somewhere anyway.
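Roughly the shape I have in mind, sketched with Python's "cryptography" package - just an illustration, not a vetted protocol, and all the names below are made up: encrypt the recording under a random content key, XOR-split that key into two shares, and wrap one share with the doctor's public key and the other with the patient's, so that reassembling the key, and therefore viewing the video, needs both private keys.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # RSA-OAEP settings used to wrap each key share.
    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def encrypt_for_both(video: bytes, doctor_pub, patient_pub):
        # A random AES-256-GCM content key encrypts the recording itself.
        content_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(content_key).encrypt(nonce, video, None)
        # XOR-split the content key: neither share alone reveals anything.
        share_a = os.urandom(32)
        share_b = bytes(x ^ y for x, y in zip(content_key, share_a))
        return (ciphertext, nonce,
                doctor_pub.encrypt(share_a, OAEP),    # doctor's wrapped share
                patient_pub.encrypt(share_b, OAEP))   # patient's wrapped share

    def decrypt_with_both(ciphertext, nonce, wrapped_a, wrapped_b,
                          doctor_priv, patient_priv) -> bytes:
        # Both private keys are required to rebuild the content key.
        share_a = doctor_priv.decrypt(wrapped_a, OAEP)
        share_b = patient_priv.decrypt(wrapped_b, OAEP)
        content_key = bytes(x ^ y for x, y in zip(share_a, share_b))
        return AESGCM(content_key).decrypt(nonce, ciphertext, None)

The viewing-trail part would then just be an append-only log of who unwrapped their share and when, signed by both sides - but all of this would need a real security review, not an HN comment.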


Well, you have a reasonable expectation of privacy in a doctor's office (when the door is closed), so even in single-party recording states this would be illegal. Also, HIPAA wouldn't allow this, so the doctor's office would be fined pretty heavily for every patient recorded.


> I mean, who's to say any given doctor's office doesn't have a hidden camera/microphone somewhere anyway.

If you're having paranoid delusions, visiting a doctor's office may be a good start, despite the small possibility of hidden cameras and microphones.


Ha yeah that's not gonna work.

Well, I say that, but thinking long term, it does kind of seem inevitable. Somehow we're going to have to come to terms with having every embarrassing moment of our lives outside of our home being recorded. Unless the UN comes up with an effective digital bill of rights this is an espionage disaster waiting to happen.


My primary care doctor struggles to take notes while talking to me in the 10 minutes we have allocated for an appointment. This would be a huge win.

I do share your concern about being recorded as a patient though.


I'm not sure about that. What is your expectation? That the doctor or someone else will take the time to re-watch the recording of your interaction? How is that better than just extending the interaction?

As an aside, is this a common thing? I had an appointment to get a tdap booster a few weeks ago and spent a good 30 mins. chatting with my doctor.


Honestly I don't know what I would prefer. Certainly I would prefer more quality time with my doctor, but at what cost, I don't know.

Here at least (UK), yes. All of the practices I've visited have strict 10 minute appointments and the doctor will cut you off at 10 minutes. If I go in for a flu jab (I'm asthmatic and at high risk), I expect to be in and out in about 2 minutes, and I don't see the doctor; it's normally a practice nurse that does it. You then get sent back to the waiting room and told to sit tight for 10 minutes in case you feel weak, but that's it.

For consultants in hospitals it's different, though.


Interesting. I probably have a skewed view because I currently have very good employer-provided health care (US). I don't think I've ever spent less than 15 minutes with my doctor for an appointment. She also takes calls and texts and I have made same-day appointments (within hours) in the past.

However it is perhaps worth noting that the practice charges a $250/year "concierge fee" and from what I've read that's a very low fee comparable to other offices that use that model. Perhaps I would be less likely to go there if I had a high premium to pay in addition.

But even with previous employers and plans across the US, I can't recall a doctor who has ever rushed me out of appointments.


My doctor is an NHS practitioner. If I have an "emergency" and need an appointment, I can get one that day. Otherwise normally it's a week's notice, give or take. I don't pay anything to the doctor, or to the practice (other than my National Insurance contribution, which is taken out pre-tax).

I do have private health cover but it doesn't help much if I have a chest infection. If I need an MRI or to see a consultant, I can effectively skip the waiting lists and go to a private hospital, but if I have a heart attack or any immediate emergency, I'll be going to an NHS hospital regardless.


> I don't think I've ever spent less than 15 minutes with my doctor for an appointment.

It's always going to vary wildly based on what you need to discuss - a shot takes a minute or two of lead up, being diagnosed with a chronic ailment may take 3+ visits with 30 minutes of discussion each time.

Either way, I'm not comfortable with my doctor wearing a wire into the room. It's hard enough to trust that the notes they take in confidence will stay confidential without throwing faulty computer security into the mix; yes, I am already wary of electronic medical records for exactly the same reasons.


> That the doctor or someone else will take the time to re-watch the recording of your interaction? How is that better than just extending the interaction?

You can outsource the data entry parts of the job to someone in a centralized location, who can be assisted by voice-recognition on an initial pass. This saves time for the doctor and patient, and lets the doctor focus on the patient rather than data entry.

Incidentally, you might not currently be recorded on video in the doctor's office, but there is already a very good chance your doctor is dictating notes describing your medical history using scribes in India or elsewhere.


The doctor would probably be even more uncomfortable being recorded, for the same reason that cops don't like wearing body cameras: Any mistake is recorded and can be replayed at trial.

As a patient I would be quite happy for doctors and nurses to wear body cameras if it was part of a systematic approach to eliminating errors (similar to the way airplane cockpit voice and data recorders have been instrumental in reducing plane crashes).


I suspect that it would only be recording audio; video recordings chew through the battery and would be unnecessary in most cases like this (perhaps the occasional photo of some symptoms would be very useful). I'm totally happy for the doctor to record my conversations with her; they'd be more reliable than the notes typed up hurriedly during or after the consultation.


> as a patient I wouldn't be comfortable being recorded.

Also sent to google, so then when you get home you get relevant ads for your medical issue. Yay!


But the point here is to reduce the time doctors spend on mundane tasks.


I would be totally cool with being recorded. I think it's beyond time we get past the ridiculous false modesty that seems to be considered a virtue, but with no real benefit.


It has less to do with "false modesty" and more to do with the fact that a recording of you is being sent somewhere, probably a Google server, which leaves the door open for someone to get a hold of it. That doesn't concern you at all?


Your medical records would be far more secure on Google's servers than almost any hospital server.


That kind of information should never, ever leave the hospital/doctor's office LAN in the first place.


Don't records and info need to be available to other medical personnel? What if you have to go to your local hospital? Having all that info on your doctor's office lan won't help you at a potentially crucial time. Or am I missing something?


Unfortunately it can and it will. At this point in time, while the best option would be not to get recorded in the first place, I'd trust Google servers much more than anything a hospital deployed with help of random contractors.


How does that make any sense?

So when you go to another hospital, your records should be unavailable?


In the US, a lot of the time you can digitally access your records yourself


The desire for privacy is not "false modesty"; given the sentence you just uttered, I'm not convinced you know what false modesty even means. People have a hard enough time talking to doctors about embarrassing or sensitive issues; put a camera in the room and they simply won't. Privacy is a need for most human beings. It's not false, it's not fake or done out of a desire to appear a particular way, it's a psychological need. Most people are not exhibitionists: they do not like being watched, they do not like being recorded, and it's not remotely something they do for appearances, a.k.a. false modesty.


Right, and it also prevents positive outcomes. People chronically lie to their doctors about their actual lifestyle behavior and then are surprised that the Dr. didn't catch all the warning signs for some disease.

More data = better outcomes.


Cameras won't add more data as they'll simply discourage people from admitting anything or even going to a doctor. I would not allow my doctor to film me, nor would I imagine most sensible and normal people who have an ordinary sense of privacy. Your doctor works for you, not the other way around, you decide what is acceptable behavior, not them.


You sound like you live in a country where evidence of a pre-existing condition isn't a huge liability. In the United States, a recording of you admitting you've been sick for a while can literally bankrupt you (well, not while we still have ObamaCare but a repeal is too likely for us to let our guard down).


It's not "false modesty" to desires some privacy away from cameras when you're getting medically diagnosed with a condition.

Just because you're fine with it doesn't mean that there aren't people with sensitive medical conditions who would rather not have a video recording of themselves being examined.


If everyone fell under the same umbrella, I would agree (I would even potentially argue that we would all be better off for it). But I fear there will be a large class divide between who we can see naked, and who we can't. Additionally, I wouldn't mind so much because I'm quite happy with my body. Many people aren't, and I'm not sure that modesty is false for them.


The engineers at Google also don't seem to understand how the vast majority of people want to be treated.


It's not about benefiting the graduate


> The mechanics moved carefully, putting down tools and climbing up and down ladders to consult paper instructions in between steps... Fast forward to today, and GE’s mechanics now use Glass running software from our partner Upskill, which shows them instructions with videos, animations and images right in their line of sight so they don’t have to stop work to check their binders or computer to know what to do next.

The article makes it sound like Google glass is the first to do anything like this, and it was all paper manuals before that. In fact, aircraft manufacturers have been using smart glasses for years to augment workers.

http://www.engineering.com/AdvancedManufacturing/ArticleID/1...

Maybe Glass is a significant improvement, but it's not unprecedented.


Smart glasses in aviation manufacturing were actually a key plot point in Michael Crichton's 1997 novel Airframe. [1]

[1]: https://en.wikipedia.org/wiki/Airframe_(novel)


I'd be really interested in hearing from someone who uses this on the floor. Is it really all they say it is? The marketing and PR looks good, but do mechanics really love it?


>"It took a little getting used to. But once I got used to it, it's just been awesome," Erickson says.

From a pretty good NPR article from a while back.

http://www.npr.org/sections/alltechconsidered/2017/03/18/514...


I was wondering the same thing as the person you answered to. Thanks for the article + audio snippet. I know the tech isn't as new anymore, but still exciting the more it gets used. Can't wait to see what comes about in a couple years for broader usages (i.e. how someone like me could utilize glasses to work better)


Thanks, that was a good read.


This feels like it's getting us one step closer to the vision of the early stage control by AI that is described in Manna: http://marshallbrain.com/manna1.htm

Obviously the AI part isn't there, but we now have a fabulous interface for having complex tasks aided/guided by AI. This, in combination with what's already going on in Amazon warehouses, puts us pretty close to the description of how fast food restaurants are run in the story.


Some of this stuff is so obviously cool but remember the cost of efficiency. 30% time saved means one person does more in their day. This has a personal toll because working less intensively gave you time to think and physically rest. You're at it non-stop now.

And that also means you need 30% fewer employees to manage the same workload. That's going to be the trade-off here. How many people will have to go just to offset the hardware and software costs?

I don't know what I'm arguing here... I'm finding it hard to avoid quoting Ian Malcolm in the context but I think we have to remember there are definite downsides to treating people like underutilised machinery.


> working less intensively gave you time to think and physically rest

I don't buy this argument, honestly. By extension, you're saying we should go back to manual book-keeping instead of computers to do accounting, because it would employ more accountants, and let them work at a more relaxed pace.

Looking up documentation, for example, is not an intellectually demanding task. Instead of going back and forth between printed reference, being distracted, forgetting what you read and having to re-check it, this helps reduce the feedback loop and that makes work more intellectually engaging and interesting because the worker can focus on things that actually require thought, rather than menial tasks.

People will still take breaks and slack off, it's human nature, and they should be able to. If you demand continued unbroken attention, you will end up with people who make a lot more mistakes, I expect (citation needed).

I am much more worried about automated performance metrics and gamification of work, those do offer levers for the employers to push the employees beyond sane limits.


No, I think automation helps humanity a lot in certain places. But my (tangential) experience and main concern really applies to healthcare. Especially in the UK. I should have added this before but couldn't find the words to explain it without going off on one.

To put it briefly, the utopian "we get to go home and see our kids on time" world Augmedix is selling is just the sort of stuff that gets bought by the NHS to make GPs handle double their workload.

There supposedly are protections in place (contracted and EUWTD) to stop doctors and nurses working without protected breaks, but you find me a single competent (eg) med-reg who manages to regularly take theirs.

There is a ton of quite low-hanging fruit... But half of it is a poison in some professions, and most of that depends on the reason it was bought.

Really, chasing this thought process can get pretty philosophical, especially when you consider the widespread skills we have already lost because we outsource and automate. They are things to consider too but immediately I'd worry about the people being told their workload is doubling because they've got a fancy gadget now.


The invention of automatic switchboards annihilated the jobs of thousands of switchboard operators.

The development of advanced calculators (computers) removed the need for rooms of hundreds of engineers fiddling with slide rules.

Microsoft Excel increased accountant efficiency by ungodly amounts, which makes their lives... harder... because they have less time to think and physically rest?

Yea I hate to say it but I have no idea what you're arguing either.


This sort of thinking would be more useful if it was specific to Glass. As is, it's just a general worry with most ways to make humans more productive.


> And that also means you need 30% fewer employees to manage the same workload.

God forbid. Didn't we learn our lesson from the plow, the cotton gin, or the combine? All those jobs needlessly lost. If we don't learn from history, we're doomed to repeat it.


It seems like this may actually take a lesser physical toll on the body for some applications - the example given in the article stated that mechanics no longer had to constantly go up and down ladders to consult documentation.


I see it as same thing as using a programming IDE that shows you e.g. available method names and argument types for the API calls as you type, versus having to look them up in API docs in a separate window or monitor.


Being able to work more has never really changed a thing in the grand scheme of things (Ford, ...). Automating everything a human does is a real challenge, in my view.


> treating people like underutilised machinery

A huge percentage of business software is built for this exact reason though.


This is cool and all, and I'm glad the concept didn't totally die, but will consumers ever see this type of wearable tech again? Some people shelled out over $1k when Google initially offered glass prototypes, only to be left with unmaintained devices.

I honestly thought Glass would have done better if it had no recording capabilities built-in. It would have substantially reduced the creepiness factor.

It's sad that no one else has come in and seriously tried to tackle the heads-up wearable market. Sony has some glasses that looked terrible, and I guess the battery life issues are still too big for many manufacturers to overcome?


> Some people shelled out over $1k when Google initially offered glass prototypes, only to be left with unmaintained devices.

Are they completely unmaintained? There was a story here recently about how it just got an update for the first time in a few years[1]. I'm not sure if that was a fluke or a renewed commitment...

1: https://news.ycombinator.com/item?id=14608894


> will consumers ever see this type of wearable tech again?

Sure. When many thousands of these devices have been produced on a growing economy of scale, when there are multiple options available from various vendors at various price points, when ordinary cellular network speeds and capacities are closer to the wifi in these businesses, when there's education, experience and toolchains for building apps that run on them, and the hardware has been battle-hardened, I'm sure they'll make more sense for consumers.

Computers were first affordable to businesses only - first as mainframes, then as PCs, then again as laptops, and once more as tablets. Consumers didn't buy the first cellular telephones and car phones; businesspeople did. CNC milling machines and rapid prototyping machines were once reserved for specialized, high-technology machine shops; now hobbyists can put a CNC router or 3D printer in their workshop. You used to have to go to a dealer with an expensive computer console when your check engine light came on; now a $10 OBDII reader can read and clear codes for you.

> I guess the battery life issues are still too big for many manufacturers to overcome?

They probably will be a problem for a long time. Unfortunately, battery capacity seems like one of those areas where we're not simply too low on the technology pyramid; it's a question of raw physics. I would love to see this problem sidestepped by the compromise of making the batteries on these devices easily swappable. Even if they only got 6-8 hours of battery life, I'd happily unclip the battery on the earpiece at lunch or when I get home and swap in the freshly charged one from my bag. What I don't want is an undersized battery that's glued in and decays down to 60% capacity after 18 months.


> I honestly thought Glass would have done better if it had no recording capabilities built-in. It would have substantially reduced the creepiness factor.

Outside of the tech community this is a distinction I imagine few would make - most people upon seeing a camera are going to not unreasonably assume it can record. I still see large numbers of people taping over their laptop webcams even when they are turned off.

I think the simpler explanation is that strapping a camera to your face is simply inappropriate or off-putting in some social settings for many people.


It isn't just outside of tech that people cover their webcams. Mark Zuckerberg does it[1]. EFF sells webcam covers[2]. James Comey does it (although maybe he's not exactly tech)[3]. I've seen articles claiming Snowden does it, but I can't find a good source; that might just be rumors.

[1] https://www.theguardian.com/technology/2016/jun/22/mark-zuck...

[2] https://supporters.eff.org/shop/laptop-camera-cover-set

[3] https://www.npr.org/sections/thetwo-way/2016/04/08/473548674...


I probably should have been clearer - I have no issue with the practice of covering webcams, it's simply a great example of how "camera conscious" many people are.


> I think the simpler explanation is that strapping a camera to your face is simply inappropriate or off-putting in some social settings, at least with today's typical social norms.

Didn't help that the only people who did/could/would buy the first glass prototypes were nerds who apparently started using them in bars and clubs. That solidified its image as "creepy".

Contrast it with Spectacles, a product designed to record as much as possible. But since it was marketed/targeted towards "cool" people, it never got the creepy trait.


The typical use case for Spectacles, at least as I see it, is a little different though. Spectacles are much more like a GoPro, in that I will be out doing some kind of activity I want to share with friends. People also typically don't wear sunglasses indoors, and my expectation of privacy is very different outdoors vs. indoors.

The approach with Google Glass was very different: Google were arguably prototyping a device intended to be worn all the time, including indoors and in scenarios where people would not normally expect to be photographed or filmed.


Such as "in the bathroom", which was the experience at Google I/O a few years ago when a quarter of the people there had Google Glass.


How are Spectacles selling? I assume they're doing better than Glass even though I've never seen them in the wild.


I've no idea, but in central California I often see at least one person wearing them any time I go near a busy beach, which is most weekends. If the article below is to be believed, approx 60,000 pairs in Q1. Given the enormous disparity in price between the two I'd assume they are outselling Glass as well.

http://www.businessinsider.com/snap-took-in-8-million-from-t...


> I still see large numbers of people taping over their laptop webcams even when they are turned off.

Good. That isn't a demonstration of technical illiteracy. There have been plenty of vulnerabilities shown in webcams and microphones, and even the indicator LEDs can be disabled remotely:

https://security.stackexchange.com/questions/6758/can-webcam...


> I still see large numbers of people taping over their laptop webcams even when they are turned off.

Like Zuckerberg does for his mic and webcam:

https://imgur.com/zxDHM

Or direct image link: https://i.imgur.com/OxWY3FV_d.jpg



This is precisely the application I first imagined when I saw it. Being able to pull up exploded views of an assembly, having reference information for something you've got both hands inside, etc. is invaluable.

I can't count the number of times I've had to extract both my arms from inside a machine (in doing so losing the information of where precisely I'm holding stuff, and the bearings that gives you), wipe off all the grease, dust, grime, etc., thumb through a pile of papers that still get dirty, and then mentally translate a 2D drawing to what I'm working on, only to lose my place and have to work it all out again. Having something voice-controlled and right there in front of my eyes would be invaluable.

Industry really is the perfect environment for this. Safety issues notwithstanding (which you can work through), it's really the best application of this technology, and you can quickly quantify an ROI from its implementation.


Sounds like they've found a great enterprise usecase, and I hope they can keep improving the device to bring it to consumers once again.

I was able to snag a Glass for a good price when they killed support (before selling it off again after a couple months). I enjoyed using it, and being on a college campus at the time reduced some of the social awkwardness. I could push notifications to my face with IFTTT and the voice recognition worked reasonably well. Ironically, I found the most useful feature to be the camera. It's liberating to be able to wink and get a snapshot of whatever is in your field of view, whether it's some info you want to remember or a small moment you want to share. I'm on vacation now and find myself fumbling with my phone to take snaps of interesting things I want to share way too often.


> I'm on vacation now and find myself fumbling with my phone to take snaps of interesting things I want to share way too often.

I just got back from vacation, and I agree. The whole process of taking pictures with my phone feels so cumbersome and annoying, especially for short-lived scenes. It's locked and in my pocket for security and safety, because I don't like to walk around holding my phone when I'm trying to enjoy the moment. It feels like I can either treat the phone as an appendage and get fairly easy picture-taking, or actually be present in the moment, in which case it's substandard for impromptu pictures.

I wouldn't have as much of a problem with a dedicated device, as I wouldn't necessarily be as worried about breaking or giving someone access to the device that contains details about every aspect of my life.

Alternately, something designed to be an appendage but in an unobtrusive way (to the user, at least) might be just as good or better, as you suggest.


I believe the device you seek is called...a camera


Yes, but I was being generic to be inclusive of both a camera and Glass. Not to mention, "camera" has become almost as fuzzy a definition as "phone", considering all the features they pack into some of them these days...


I'd be more excited if it looked like Google had addressed some of Steve Mann's critiques from the initial announcement, but as far as I can tell, the critique is still not addressed:

http://spectrum.ieee.org/geek-life/profiles/steve-mann-my-au... (For discussion on Glass design, look for the paragraph starting: "I have mixed feelings")

FWIW it appears Mann is working with a company on a different system for mediated reality:

https://www.metavision.com/


Those appear to critique Glass as an AR system, which it doesn't appear to be marketed as.


I was an early member of Pristine (the Google Glass company that Upskill recently acquired) where I began as a developer, and then landed our first paid deal at the end of 2014.

We used to buy the glasses for $1500 apiece and had probably 50 pairs of them lying around by early 2015.

The engineering team was great - while I was there it felt like we were flying blind wrt Google’s official support. From a business perspective, I’m not sure product market fit was really ever achieved, though after I left the company expanded its horizons beyond healthcare / telemedicine.

Good luck to Upskill :)


That makes me want a pair that's installed with an app that shows relevant stackoverflow answers as I code.


But why would you want to wear something like this, for that? I can see how an editor/IDE plugin might be useful (a la racer for rust etc) - that would update a section of the screen with tips - but I don't think I'd like to have this on a completely different device.

Now, with a full VR headset, it might make sense to be able to place such things outside of the normal field of vision (where your text/code editing resides) - but so far I don't think working full time in VR with text is a great idea, with the current generation of headsets.


> But why would you want to wear something like this, for that?

can't have other folks knowing you're looking things up on SO. it'd ruin your mystique.


Has anyone here worked on a Glass app? How is the UI programmed? Does it require a special programming paradigm like VR, or are these instruction manual HUDs basically just PDF viewers?


Initially you could just write Android apps. Every OS release broke the standard features more and more, however, and you were pretty much forced into their special programming paradigms, yes. For example, originally swiping emulated the d-pad buttons, but that was removed. For a long time you could pair Bluetooth keyboards, mice, and trackpads, but then they broke that for a long time. It may have been added back recently.

In the modern era there is a "Mirror API", a very limited web-based API for serving "cards" to the device, and there's a native SDK with a few hooks for registering for pre-selected voice commands and showing activities with various amounts of liveliness or graphics capability vs. static cards.

So if you are just publishing cards with standard menu actions you can write server side only in Python and several other languages and just communicate with Google's servers. If you want a more native experience you write in Java using an SDK originally based on Android.
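To give a rough sense of the card-publishing flow, here's a minimal server-side Python sketch from memory of what inserting a static timeline card through the Mirror API looked like. It assumes `creds` is an already-authorized OAuth2 credential for the wearer's account, and the card text is made up; the API has changed (and been wound down for consumers) since, so treat it as illustrative rather than copy-paste ready:

    from googleapiclient.discovery import build

    # `creds` is assumed: an authorized OAuth2 credentials object obtained
    # through the usual Google OAuth flow for the Glass wearer's account.
    mirror = build('mirror', 'v1', credentials=creds)

    # Publish a simple text card with a couple of built-in menu actions;
    # Glass renders the card in the timeline and handles the menu itself.
    card = {
        'text': 'Step 3: torque the four M8 bolts to 25 Nm',
        'menuItems': [
            {'action': 'READ_ALOUD'},
            {'action': 'DELETE'},
        ],
    }
    mirror.timeline().insert(body=card).execute()

Anything more interactive than that (live sensors, camera, custom voice triggers) pushed you over to the Java GDK side, as described above.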


Someone else posted this somewhere in here: https://news.ycombinator.com/item?id=14608894

Basically they pushed an update out a couple weeks ago (after almost 3 years of nothing) which added Bluetooth device support.


I tried developing back when it was Glass XE. It's Android, except closed source and with a custom app replacing the home screen. It was the worst development experience I've ever had, by a large margin.


I haven't written an app, but I've used Glass. It's not AR. It's much more like an Android Wear watch, just held up in your field of view.


It seems disingenuous to pitch it as an improvement over a ring binder of documents. The question is whether it's an improvement over an iPad on a stand beside your work.


No info on the hardware; that's the interesting part. How did they solve the battery life issues, etc.?


In an industrial setting they don't really need to solve that problem. Battery low? Swap out the glasses for another pair.


Yep. There are also external battery packs, which are probably not a problem for some jobs.

However, there were some other issues I encountered in previous (early) Google Glass development that I wonder if they have been resolved:

- Google Glass devices were prone to overheating to the point of shutting down, so it was not possible to do anything terribly computationally intensive.

- Support for more corporate-oriented wifi protocols (e.g. LEAP) was marginal; the Glass even had trouble making an initial connection to a hidden SSID on a standard WPA2 network.

- Barcode scanner support is based on camera image capture; unfortunately, small barcodes were difficult or impossible to scan with the Glass as a result (both because of the need to position the code close to the camera, and because the camera made closeups of small barcodes very blurry).


I'm not sure that's true. One of the examples has a tech working at the top of a wind turbine. Maybe he's supposed to bring up a backup battery pack?


Just plug it to the wind turbine's output for a couple seconds :P


Even when given to developers the chip was last-gen IIRC. If they took a modern processor with the same power (or possibly more) I'd assume they could lower the power draw by a fair bit.

Did glass have cellular? Dropping that could do a lot too.

It's a good question.


Glass did not have cellular, just BT & WiFi iirc.


The battery is slightly larger on the new Glass, but the processor is also better (Intel-based), so the operational time is approximately the same with much improved performance.


For the manufacturing example, why is this better than having a simple 7-10 inch tablet? For the doctor example, why is this better than a simple body cam, or even a camera that is wall or desk mounted?


> For the manufacturing example, why is this better than having a simple 7-10 inch tablet?

Hands-free operation.

Honestly, I'd love to use it all the time, instead of my smartphone. There are so many cases in which you want to have the screen in your FOV, but your hands free, not just at work...

> For the doctor example, why is this better than a simple body cam, or even a camera that is wall or desk mounted?

It probably isn't, unless the doctor can also utilize a display for something. Otherwise it's just an expensive (and quite likely crappy, compared to other available options) camera.


Wouldn't you still need to use your hands to control the text?

To be clear, I'm not saying there isn't a problem to be solved, I am questioning that "small screen that you can look through" is the right form factor for any of these problems.


It's nice to have a camera that follows your perspective, because you might need two hands to be able to see what you want. For example, if you're parting someone's hair or looking into their mouth with a tongue depressor.


hard to overstate the importance of POV for some applications (including remote medicine).


A few questions:

- Can only businesses buy Glass?

- What is the pricing model? Is this sold as a product, or is it paid for as a subscription service?

- Will there be a "play store" equivalent for software for Glass?


It will be through VARs (there is no Play Store).


Nice. This is what the original launch should have been like. Help doctors use this to provide better care, and no one is going to be calling anyone a "glasshole".


Exactly what I thought


I have always thought that the perfect application for wearable head mounted displays would be mechanical repair. I would love to be able to pull up the maintenance manual for my vehicle and have the glasses overlay part names, fastener sizes, and torque settings for whatever part I was looking at. This is a step in that direction.


I'd love to have something like this to show me step by step how to do repairs on my car or an appliance. Recently I was fixing the turn signal lever in my car by referring to youtube videos. It would be awesome if I could just download to the device a file for whatever thing I need to fix and it just walks me through the steps.


It's probably not related to this article per se, but isn't it weird that "x.com" is owned by Elon Musk while the actual company - which is a branch of Alphabet - has to use "x.company"?


I actually am really sad Glass didn't take off. When it was available, I was unemployed. I'd love to get my hands on a new version of it, if google were ever to release a new version for the public.


This seems to me the obvious use-case for AR technology, but Hololens looks further along than Glass; I'd be interested to see where they have got with any comparable Hololens projects.


Hololens, in its current state, is much bulkier and more expensive to deploy...

It's probably also way more expensive to design and build the experience and content for Hololens' interface.

Glass just needs video, text, and a 2D navigation interface (I'm being reductive here, but you get my point).

You could of course do that kind of interface for Hololens too, but then why not just use Glass?


Hololens is really for AR, Glass is more for "HUD" type applications. Glass doesn't have a way to track the real world or align it with your view the way Hololens does.

Glass is basically just a tiny phone on your glasses.


Having worked with a hololens, it is horrible to integrate with.


Super excited, that top image is literally me in the garage sometimes, albeit with safety goggles.

Can these provide eye protection from bits of flying metal while one is drilling?

Will keep digging through the page.


Kudos to Google for continuing to invest in this product; it really has long-term potential, and we need all the big players in this space to drive competition forward.


I'm sure standard software TOS apply, i.e. the software vendor is not responsible for any errors, omissions, or mistakes it presents to the mechanic.


This was completely predictable, and what everyone suggested as Glass failed for consumers; the fact that they're doing this now doesn't necessarily represent any success or particular efficacy discovered during the pilot programs. The fact that it took this long to roll out and announce publicly is a bad sign, though. They may have just run out of time, and were forced from above to make their best try at it.


Sad, it's black and white! I recall Mondo2000 challenging every color scheme ever created. Lime greens and oranges!


I don't know what it is about it, but the design is so cringey. Even on a professional, it just looks... bad.


In a world where people are forced to wear hi-vis and safety goggles, I don't think people on a shop floor are going to particularly care about looking good if it makes their job easier.


Don't forget those formal white (OK, heavy canvas or leather) gloves to complete your outfit, nor the essential steel-toe boots! They're at the peak of high-fashion footwear. You can look even better if you're only on the shop floor part-time and get to don colorful Oshatoes over your dress shoes: https://www.oshatoes.com/collections/toe-protection/products...


"looks bad" is subjective. If it's useful, people will wear it, and after people start wearing it and you get used to how it looks, it'll stop looking bad.

You know what else looks bad? Hard Hats.


Would you cringe about wearing safety goggles, a climbing harness, or a hazmat suit? It's supposed to be a tool, not a fashion statement.


I don't know if Google Glass is comparable to safety goggles, a climbing harness, or a hazmat suit. I think it's more comparable to my laptop. There are a lot of companies that provide Macs to all their employees, because employees think they're more stylish than their Windows counterparts.


I know Glass, or something like it, will be great. But I don't know which combination of features + killer app will unlock the thing.

HN has probably mentioned this before but is there a reason Google makes announcements on Medium and not on Blogger?

I would do the same thing if I had a choice, but Google could just make the formatting on Blogger better.


For me it would be facial recognition crossed with something like linkedin. I really struggle with recognising people and it has a large impact on how well I can network at informal events such as tech meetups. Others will recognise each other from similar events and sometimes might recognise me "Oh weren't you at <X>?". At which point I look very rude when I have no idea whatsoever who they are.

So some tech to aid recognition and the ability to add a few notes such as their line of business would be something I'd happily spend a lot of money on.

But early on Glass said "no access to facial recognition APIs", which killed my interest in its first incarnation.


The problem is - most of the cool and useful personal applications of glass fly in the face of various social expectations. The very idea you could be recording someone caused a backlash the last time, and that's nowhere near doing facial recognition...


Is facial recognition creepier than recording?

If it's done in real-time, i.e. the scanned images aren't saved, then surely a "This is person <X> you've met and tagged as X" is less creepy than actually videoing someone? You wouldn't get any information you haven't yourself added to the device (although it would probably need to lean on external data-sets for the training).

I wouldn't suggest facial recognition should recognise anyone you haven't met yet; I think that would be a bit weird (although I think it is the future anyway), but a way to effectively add a tag to someone you know would be great. Most people can do this without the technology, to varying levels of accuracy and breadth of acquaintance.
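To make the "only people you've tagged yourself" idea concrete, here's a rough sketch of how purely local matching could work, using the open-source face_recognition library. This is my own illustration (nothing Glass ever exposed), and the store of encodings and notes is hypothetical:

    import face_recognition

    # Hypothetical personal store: 128-d face encodings you computed yourself
    # for people you explicitly tagged, plus your own notes, in the same order.
    known_encodings = []   # e.g. built once from photos you chose to keep
    known_notes = []       # e.g. "Alice - met at the Rust meetup"

    def identify(frame):
        """Match faces in a camera frame against your personal tags only.
        The frame isn't stored; unknown faces simply yield nothing."""
        for encoding in face_recognition.face_encodings(frame):
            matches = face_recognition.compare_faces(
                known_encodings, encoding, tolerance=0.5)
            if True in matches:
                yield known_notes[matches.index(True)]

The point of the sketch is just that matching can happen against a small local set without persisting frames, though of course the person in front of you has no way to verify that, which is the objection raised below.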


I think it is creepier by definition, through the very fact that the other person can't verify you aren't recording. A camera is a camera. Even if the product officially doesn't record anything, who's to say I didn't mod my glasses' firmware to dump the video buffer? Not to mention that once video stream goes into cloud, you lose control over what happens to it.


> HN has probably mentioned this before but is there a reason Google makes announcements on Medium and not on Blogger?

They mix it up: the "Transfer Appliance" announcement also on the HN front page is on googleblog.com. Different teams of course.

https://cloudplatform.googleblog.com/2017/07/introducing-Tra...


> They mix it up: the "Transfer Appliance" announcement also on the HN front page is on googleblog.com. Different teams of course.

Not just different teams, different companies. Glass is X, and they don’t use Google Blogger. Transfer Appliance is Google, and they do use Google Blogger.

Both Google and X are Alphabet subsidiaries, but they aren’t the same company.


> HN has probably mentioned this before but is there a reason Google makes announcements on Medium and not on Blogger?

X’s blog is on Medium, not Blogger, but X is not Google, it’s a separate subsidiary of Alphabet.


Automatically send the last 24hr of visual to fast-track airport security?


This sends a clear message to all would-be assailants: wait at least a day after your planning is complete.


The history of you.


We can already obsess about memories using social media, but this would take it to level 3.


Oh man, if you haven't seen that black mirror episode, you should stop any other binge watching and watch it.

It's amazing.


I have seen it! That's why I made a reference to obsessing about our memories. I am a huge black mirror fan.


The headline is neither the source headline nor technically accurate; while X started life as Google X before the Alphabet reorg, it's a separate subsidiary of Alphabet from Google.

This story is about X, not Google.


x.company is an interesting domain name! A one-letter domain with a seven-letter TLD.


Interestingly, x.com is owned by Elon Musk. He tweeted about it four days ago: https://twitter.com/elonmusk/status/885776126148083712


very cool!


I hated it when they abandoned it, because I was so excited about the tech for so long. I think maybe the public just wasn't ready for it then. They probably still aren't now, but this could be an excellent way for them to get more comfortable with it.


[flagged]



This is such bullshit. I applied for their private beta and never got a response.


Aw... how cute, they look like Borg lite.


Looks like they are using the strategy of Microsoft HoloLens here which I think makes sense. There isn't enough wide-spread value add in these augmented reality headsets for general consumer use yet, but businesses will help drive innovation until that time comes.


A lot of fluff here, but not much substance. I see how having large manuals or paper lists in your field of view could be very useful.

Does it work well for employees with glasses?

I assume they've updated the chip inside to something less power-hungry. Does it get reasonable battery life now?

Why do doctors need Glass to record notes in the background? Couldn't any computer run that software?


It is not about recording the interactions, it is about processing the content. Every time a doctor has to interact with a computer it is time they are not interacting with their patient.

Augmedix has teams of medical scribes who watch the video stream from the Glass and actively transcribe the data, entering it into EHRs (electronic health records), so the doctors are free to spend the time actually talking to their patients.

Doctors have used voice recorders for decades, but those are not as effective for several reasons:

1) They need to be transcribed later, and since they are audio recordings without video, it often means the doctor has to do it, as they are the only one with context. That can add several hours of work a day. By sending it over to a remote scribe, the patient gets more of the doctor's time, the doctor spends less time on paperwork for each patient, and the data is available in the EHR sooner.

2) If the doctor has any questions about past results they can just ask the scribe who finds the info and sends them a text message back which is displayed in the HUD. That takes less time than swapping to a laptop.

Now one could imagine that in the future a lot of the tasks could be done by AI rather than humans, but if you have looked at medical IT and EHRs you know we are a long way from that. In any event using glass would still be a win: the AI would probably work better getting the live video stream (to improve its ability to accurately chart, etc), and patients tend to be happier when their doctor is interacting with them and not a computer.

Disclaimer: My girlfriend works for Augmedix


> Why do doctors need Glass to record notes in the background? Couldn't any computer run that software?

I'm guessing this might have to do with voice recognition tech and the doctor being able to see the notes visually. If it picks up something incorrectly, maybe the doc just turns to the computer and corrects it?


Still sounds as though a laptop would be better. Perhaps it's now improved, but the Glass screen wasn't nearly as clear to read from as a MacBook Pro screen.


> Why do doctors need Glass to record notes in the background? Couldn't any computer run that software?

The doctor doesn't really use it. It's just transmitting the doctor's view of the clinic to a remote assistant who can take notes for the doctor. Doctors frequently find it difficult to operate electronic health record systems at the same time as doing the actual doctoring. So they either accept providing lower quality care in order to take notes during the session, or do the notes after the session with the patient (in which case they see fewer patients and make less money).



