>It feels very prototype-ish and I can't help but feel that it's the precursor to something better Google might even be working on now as we try these.
^^ This is totally the point of this run of Google Glass. While this is a fair and worthwhile personal essay, I don't think it's a particularly impactful statement about the potential of the product. He's basically saying that, for $1500, he wants to be blown away now.
He doesn't want to pay a substantial premium "just" to be part of a privileged set of early-adopters. Everyone knows that this is still a beta, and improvements are on the way, so he's just saying the current iteration can't push him over the hump to hold onto the device (as opposed to recoup his cash).
> While this is a fair and worthwhile personal essay, I don't think it's a particularly impactful statement about the potential of the product.
This is an unfinished product and he wants his $ back because the user experience leaves him cold. He's not investing in Google - he's buying an actual product. And clearly he is skeptical that this particular product will ever live up to its potential.
Contrast to the iPhone, yet again (the original). It showed both incredible potential and was an excellent actual product.
>Contrast to the iPhone, yet again (the original). It showed both incredible potential and was an excellent actual product.
Or as someone else mentioned, contrast this to a console devkit. Imagine someone making a blogpost about how they are returning their Xbox One devkit because there aren't any good games for it.
Is Google Glass marketed as a developer kit, or even as a prototype or "first run" device? All the FEM I've seen is very polished (leading me to believe the company wants me to think the product is polished) and seems to be catering to a mainstream audience.
Compare this to the Oculus Rift, which explicitly sells a "development kit" and even requires checking a box that says "I understand this hardware is intended for developers and it is not a consumer product." And, as an aside, the Oculus Rift development kit is a much more polished and functional product than the current generation of Google Glass.
Yes, I think Project Glass has been pretty clear about this being a first run device. That's why the participants are referred to as "Explorers". The point is to start mapping out use cases and fine tuning the UX.
Source: My wife is getting her Glass in a few weeks.
They should have avoided being cute and simply called it a "Developer Kit" instead of targeting it towards "Explorers".
I don't know about you, but the use of "Explorers" is broad enough and appeals to the ego enough to be inclusive of a pretty large swath of society that is going to be disappointed by the device. "Developer Kit" pretty much describes exactly what audience it is actually ready for.
The real question is whether its capabilities were misrepresented - did they sell you something that didn't do what they told you it would do?
Because if not, and it was only your personal assumptions that were off despite Google being clear about what you were buying, then that's your problem, not theirs.
I do like that comparison - but let's add a twist:
He bought the console devkit, because he wanted to _play_ with the console and there wasn't a normal unit available. Okay, that was a bad idea and he (obviously?) isn't impressed with what is available right now: Should he still hold on to that expensive unit, or wait for the normal console to appear for a much more reasonable price (and maybe with actual features/'games' he'd be interested in)?
From the consumer point of view, sure, sounds great.
Why would the vendor be at all obliged to let them return the unit in the first place though? Outside of misrepresentation and fraud and things that are not fit for the purposes sold, sellers in general have no obligation at all to take a return.
You see retail stores always taking returns, but that's a business decision to stay competitive... they are under no obligation to do so.
If the terms of sale stipulate that you can return it for a refund, or if it was somehow completely misrepresented and thus a fraudulent sale by default, sure.
So if you bought a console dev kit, and it worked as described for the purposes sold (which aren't necessarily the purpose you bought it for) - the vendor is under no obligation to refund you.
No, it's not like that at all. It's like he bought the Xbox One devkit and returned it because the OS was crap, the controls were fundamentally flawed, and he didn't think there was going to be a way to make good games for it.
> Contrast to the iPhone, yet again (the original). It showed both incredible potential and was an excellent actual product.
Which was over twice the price at launch, and then fell off the face of the Earth once the iPhone 3G was announced, and dropped from support a year later or so. (The 3G was SOLD longer than the original was even supported) I think the original iPhone is exactly what he wants to avoid, since no one thinks Google will give him a retroactive refund like Apple did.
My first-gen still works, with the stuff I had on it at the time I moved on to a different phone. My daughter will find it, charge it, and ask me to install games for her on it, so I keep hiding it from her.
> Contrast to the iPhone, yet again (the original). It showed both incredible potential and was an excellent actual product.
This amuses me, because I was building an iPhone app in 2008 for my then-employer, so I had a company phone. I barely used it. When I moved teams, I returned it and didn't bother owning a smartphone until 2011 when the entire market had matured.
I'm trying to understand why or how he got it. Was he an #ifihadglass winner? Was he just a lucky individual at Google IO that decided to plop down the cash for what was definitely a developer-device?
I can understand that he doesn't think it's worth the price - because for a normal consumer it isn't. But I don't understand why he would have gotten it in the first place when it was so clearly limited in scope.
And it was a "whim" and his idea was to write a journal (which presumably gets page views) and so he gets one, and now he's publicly returning it, and getting lots of page views and his money back. All in all, the cynic in me thinks things are going as planned. He is effectively out no money, and yet he's picked up a bunch of traffic. Internet win.
This guy actually does that for a living. He is currently testing a new product called Helpouts that allows you to charge for support using Hangouts in G+. Looks mildly interesting and more useful than Glass.
> I can understand that he doesn't think it's worth the price - because for a normal consumer it isn't. But I don't understand why he would have gotten it in the first place when it was so clearly limited in scope.
Also keep in mind that more and more places are banning them over privacy concerns. How bad would that suck: you drop $1,500 and you can't even use it most of the time because so many places are banning them?
Let's not fall into hyperbole: there are a few places that have banned Glass, out of, well, the entire world. That's a long way from "most of the time" or "many places."
There's a lot more (and by that, I mean orders of magnitude more) places that ask you to not use cell phones than places that ban Glass (restaurants, movie theatres, coffee shop lines, airplanes, customs, etc). But nobody would claim that makes cell phones useless.
> This is totally the point of this run of Google Glass.
Is there any indication that this generation of Google Glass is intended as a prototype rather than a consumer-ready product, other than the reviews which overwhelmingly seem to describe it as such?
The fact that it doesn't work for the >50% of people who wear corrective lenses would be a start.
Also, the last three months have seen numerous software updates to add features and change functionality. It is an unfinished product and they are iterating based on user feedback.
> This is totally the point of this run of Google Glass. While this is a fair and worthwhile personal essay, I don't think it's a particularly impactful statement about the potential of the product. He's basically saying that, for $1500, he wants to be blown away now.
And it's also something Google and Microsoft do all the time. Release or pre-announce half-baked stuff, just to get some publicity, years (or decades) before they are fit for consumption.
> "Yesterday I made the call to return my Google Glass. After months of anticipation, a trip to New York, and several weeks with the device, after much deliberation I decided the device wasn't ready for prime time and my $1,500 would be better spent elsewhere."
He shelled out for a beta product and was disappointed when it was, well, a beta product. More than that, even -- it was a beta version of a product which is the very first of its kind. I'm glad he was able to get his money back, but I'm not sure why he was surprised.
While I get this sentiment, in fairness, I think the reason we expect people not to complain about being underwhelmed by a beta is that it's a beta. If you ask people to shell out $1500 for the privilege, you've now raised expectations - they're not just beta testing, they're buying a significant luxury good.
I get why G charged - when you get people to invest in an item, especially by weeding out the less-passionate, you bias them towards strong positive responses - but they did cross "beta testing" and "luxury goods" wires.
I don't get this. The current release is meant for developers, journalists, and "technologists." Google was very up front about this. It's not intended to be a consumer product. Complaining about it at this point seems akin to someone getting a console devkit and complaining that there are no games available and that the kits cost tens of thousands of dollars.
Really, the alternative is to not get access until the thing is done or to create some sort of secret selective list for early access.
Call it beta or whatnot... but Google was very up-front that this was the initial release of the product, and that they wanted to select a limited size team to get them out in the world, to people who would give them perspective.
He wrote that he'd use it and blog/journal about it and whatnot.
If he wanted a polished end product, he misled Google and misled himself.
In some environments (e.g., hospitals, military, probably police and first responders), $1500 is fair market value if the utility is there. Consider the rise of pagers. In 1950, at introduction, they were the equivalent of $100/month in today's dollars.
The Catch-22 for Google was establishing the value adds for wider adoption. They did an amazing job of launching the beta, considering that challenge.
If you want to find out if that console is fun to use? Yes.
Probably the 10k would be a bigger showstopper for people that are 'just curious' to even give it a try, but in this case this guy seems like someone interested in a sneak preview.
The devkit buyer hopefully plans to build stuff, _generate_ money with it. It should be an instrument to generate income, the money is spent on 'tools'.
Glass? That's a gadget. I said it elsewhere on the thread: I do like the comparison with a developer kit, but .. most people that seem to get one/use one (even the ones mentioned in the very article) look like end users, trying to get a sneak preview. Not 'I want to build my business on Glass apps' type of guys.
Yes, it's a beta product, but it's a beta product that costs about 2,200 bucks and a day-plus of time in order to get to a Glass pickup center.
I just got mine, and I'm 50/50 on keeping it. I don't think anyone is faulting Google for the device, it's just that the value is negligible at its current price point.
First off, I do not have toys to communicate with Glass. I realize that Glass is too early. There are not enough data dumping devices available at decent prices. Withings, vehicle data, health data, all the things people try to keep private. That kind of data is what would be useful with Glassware.
The lack of an SDK and the dependence on RESTful services are also problems. I want to toy with sensors too. Delayed gratification does not work. Where are my long nights of programming?
Battery life and destroying my cell's data plan are also issues. I'll miss watching people's faces light up when they try it on.
OK, now you have got me interested. I really have not heard anything about this. You may have just made my week. I was getting quite depressed about the money I wasted.
I can only run APKs on Glass while in USB debug mode? There is no way to actually execute the program again without being in the debugger, correct?
You can install a third party launcher on Glass. Here's one that lets you switch between launching APKs and the standard Glass interface pretty easily:
There's also a Mirror API based Glass app that lets you launch apks, but it's a bit more work setting up. It actually requires you install a couple APKs as well: https://pontedivetro.appspot.com/
I've only run APKs in USB debug mode, but the main "OK glass" screen is just an Android app, which I think is started via an Intent, so I suspect you could hook this and override it.
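For anyone trying this, the command-line flow looks roughly like the sketch below: install over USB, then fire the launch intent yourself with `am start` (no debugger attached, though you're still tethered). The package and activity names here are made up:

    adb install GlassExample.apk
    adb shell am start -n com.example.glassexample/.MainActivity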
Can't you create a feedback loop with sensors, an android app and the mirror api? I know it's a kludge, but if your mobile app posted data to your web server, your glass could poll it that way. Of course, the data and battery life concerns only get worse...
At this point, without voiding my warranty, I can basically poll data, reflect data through options, and use voice commands (that barely understand me) to reflect data.
I have seen interesting manipulation of the Mirror API to make voice-commanded games. I tend to get unreliable round-trip times for anything real-time. Sometimes a card opens instantly, sometimes it takes up to 15 minutes.
That was my basic idea. Device A connects through bluetooth and uploads data to Phone. Phone relays to web service. Web service relays to Glass. It works. Google App Engine/Mirror API is just not my cup of coffee.
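For anyone wanting to try the relay, the Glass-facing leg is just a Mirror API timeline insert. Here's a minimal Java sketch of that one step; it assumes you've already done the OAuth dance, and the token and sensor reading are placeholders:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class GlassRelay {
        // Placeholder: get a real token via the OAuth 2.0 flow for the glass.timeline scope.
        private static final String ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN";

        // Push one relayed sensor reading to the wearer's timeline as a plain text card.
        static void pushCard(String text) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://www.googleapis.com/mirror/v1/timeline").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + ACCESS_TOKEN);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            String body = "{\"text\": \"" + text + "\"}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Mirror API responded: " + conn.getResponseCode());
        }

        public static void main(String[] args) throws Exception {
            pushCard("Heart rate: 72 bpm"); // e.g. a reading relayed from Device A via the phone
        }
    }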
Glass, or some offspring, has the potential to be amazing. I really cannot wait to see what comes of this. A beta device utilizing a RESTful API... I knew what I was getting into, but once I got it, there just was not anything I wanted to feed to Glass. Glass hungers.
Imagine that you buy an Xbox or Playstation DevKit, they cost 10x as much as an actual console that is shipped. There is precious little software except samples and demos, and you are expected to be a tester of buggy and incomplete stuff as well as potentially develop.
And then you go and complain about how you shelled out thousands and there was nothing to do, and other devices will cost a fraction in the future.
Do people not understand that Glass is primarily intended for developers and dogfooders?
I think the #ifihadglass program was a mistake. They should have kept it limited to people attending Google I/O or other bonafide developers. I understand they wanted to have it tested by a diverse group of people in the real world and not just developers, but the wide ranging audience they were seeking also happens to be the people who don't seem to understand what devkits are.
Let me be the first to say: if you don't want it, feel free to sell it to me (seriously, email in my profile). I tried some out when I was visiting the Bay Area and I did think they were amazing and magical; YMMV.
edit: The author mentions that he wasn't interested in developing an app for Glass anyways; I thought that was half the point of the Explorer program (unless you're a skydiver/ballerina/whatever who can produce awesome marketing content).
Google reps have stated you are allowed to "gift" it to someone else when you pick it up, though. I'm sure there are plenty of people who would privately gift you some money back.
It would be amazing if someone could confirm this; even if Google offered me an official pair I couldn't afford the flight and hotel (going back to school in the fall!). I'd love to write some apps for Glass, and it seems like a surprising number of Explorers don't want theirs anyways.
In the summer of 2007 I was doing a bunch of research as to whether or not to buy a few shares of $100-$130 Apple stock. I knew the iPhone was coming, and I thought it was going to be big. The reason I eventually decided the iPhone was going to be a loss? Rumors of a Google phone, which I thought would be the end-all.
Heh.
A few years later, working at Google, my boss and I discussed how Google is just not a very good products company. Big consumer products require a polished launch, with products that speak to the consumer. Google, on the other hand, rolls things out slowly and iteratively, and rarely takes them to a polished, user-friendly state. If this ever happens, it's usually over a very long period of time, and quietly. And that's just with software products - you could say this is even more the case with hardware.
I don't think glass will be the silver bullet of wearable computing. It will be the concept definer, the 'what', but the 'it' will come from somewhere else.
I can't help but have a little CueCat-ish response to Glass. I don't wear glasses and don't care to. So Glass saves me from having to take my smart device out of my pocket by having something on my face all day?
NB: I've seen plenty of these around but haven't actually played with one.
My personal bet is that Google Glass is the Newton of wearable hardware. It's directionally correct, but way too early for the available hardware and the existing ecosystem, and too expensive as well.
As a happy owner of a Pebble, I definitely get the value of going beyond the phone screen. And as a sci-fi reader, I fully expect that everybody's going to end up spending 99% of their time intimately connected to tech (and the broader world via that tech). But I don't expect head-mounted UIs to be popular outside of tiny niches for 15 years, if ever.
I've been wearing Glass for a while now and am active in development (see http://openglass.us). There are a few select groups of people I'd recommend it to at this point:

- developers (it's a fun/simple/exciting platform to write for)
- researchers (just like with the Kinect, it makes it much easier to reproduce experiments)
- people who are constantly running around town while being in contact with others (business/sales people)
- people who drive all the time (much safer/easier than a phone)
- people who would use a GoPro on a regular basis (sports)
- people who travel often (in the US, due to mobile data)
At this point that's really it. If you sit at a desk all day you're going to have a bad time with the notifications, "oh great I got a notice on my face a minute after my phone buzzed and my laptop notified me". They work much better when you are active, out and about when even picking up your phone is a chore. If you already know your way everywhere you go (you don't travel) then one of the best features, directions, is lost on you.
This applies to the current device/software/apps, part of what we are trying to do with OpenGlass is push it past being a "beeper on your face" because that only applies to the segment of people to whom getting notifications faster than you do with your phone matters. Once they allow devs to push to the Play store (coming soon) it'll be a much more compelling product for end users. I think it'll ultimately be an amazing product, it's just the current feature list targets what I see as limited audiences.
It is illegal, it is not safer than a car-docked phone, and since it moves your eyes away from the road it is extremely dangerous. Stop being so selfish and stop the car. Studies have shown that when you are distracted you are more likely to crash, so innocent people's lives are at stake here.
Glass is a perfectly fine way to get directions while driving. Just listen to the voice prompts and ignore the screen. Heck, one can push the prism up to the top of your head so there's no chance to see it.
(To be sure, one can also do this with a GPS app on a smart phone, but having left a phone behind in a rental car, I see value in using a device that's attached to me.)
It is not illegal in most states (IANAL). I agree it is less safe than fully focusing on driving (as is talking to a passenger), but I'd much rather have people glance up briefly and speak out a text than look down at their lap like they do today. Each state will figure out where they draw the line on this, I personally think it is safe when used with care (like a navigation system).
Your point about notifications is an issue that's been on my mind. Notification convergence and awareness of your current context is something that I've been trying to figure out for some time.
We have all these things and yet none of them seem to be able to tell each other (at least not without serious battery consumption) where I am, what I am doing, and how best to let me know a thing has happened.
We are working on something related for OpenGlass, using your location, images, and sensor info to better understand your context. We'll be posting a new video this weekend with details. The new motorola phone is moving that way too (knows when you are driving, etc). Also related is https://ifttt.com/.
I actually think industrial applications might be the hidden killer app for augmented reality. Imagine walking around a power plant and seeing your environment annotated: last inspection time for each inspection point, what is flowing through each pipe and its current pressure, etc. Construction sites could be truly amazing.
Call it a professional environment and I think that's where we will see the first true impact of wearable computing. Your examples are two good ones. And of course people have mentioned hospitals. Just consider every profession where a desk jockey is not the norm. That's police, firemen, first responders, military, teachers, etc. Heck, why not cashiers and even bar tenders?
Then again, I'm biased. We're building and selling our own version of wearable computing, but driven by your biological rhythms.
Also consider that in professional environs, various pros get away with all sorts of silly attire and add-ons - from scrubs and stethoscopes to other uniforms and utility belts. You look ridiculous, but you have a job to do.
Exactly, pros do it all the time. Eventually it becomes cool. I never buy into the hype about it looking bad for a few reasons:
I wear glasses. Years ago people thought all glasses looked silly. And they were, with giant thick lenses. Now people buy non-RX glasses to look cool, even.
I saw an animated sci-fi show from around 10 years ago (Denno Coil, Japanese anime) where in order to see the AR/VR world, you wore glasses. Every single kid wore glasses. You had to in order to see the VR/AR. Once something is common, the look factor doesn't matter.
I think we're just seeing people being disappointed that Reality is still decades/centuries/possibly impossible behind Fiction, and Fiction is only pulling further away as Reality gets better computers to render Fiction with.
I think it even has potential for a desk job. Having your workflow preserved as what you actually see instead of what you happen to save would be a huge time saver for forgetful people like me.
Especially for artists, designers, and maybe even engineers. I can't tell you how many times I've wanted to see what I was doing when I started to screw something up.
Totally. I kind of want to start every comment on every thread on HN with "Glass is not an augmented reality device."
I've worn one (very briefly) now, and let me be very clear: you can not superimpose labels or images on anything you see in daily life with Glass. This is a physical limitation of the hardware, and it is absolute. This hardware will never, ever, ever be able to do that.
And it's unclear to me if the hardware for Rainbows End/Halting State augmented reality will ever (or at least in this generation) exist if Glass-like heads-up-display hardware can not be successful in its own right.
Not really true. I have one that I wear regularly, and it could be used as one in a limited fashion.
There's also a little Easter Egg in it that lets you see the entire Glass team, and look around at them, up, down, etc. and it's pretty engrossing, so you can use it as limited VR as well.
(I'm playing with it as an augmented reality device and have written a calibration library to get the display aligned with the camera - see https://github.com/matt-williams/Optometrist. I've been working on hooking this into OpenCV which I'm hoping to upload in the next few days.)
One valid answer to "why not" might be power/heat - if you run it for a long period of time, your battery dies and your hair sets on fire. ;)
For one, the positioning of the display and camera. The display is elevated so you don't see it directly in front of you. You have to actively look at it. The camera isn't centered in your face so it doesn't line up with your vision.
I agree the display is only part of your vision. That's suboptimal but I'm not sure it's a showstopper.
The camera not being centered on your face is not insurmountable. (In fact, you'd want it to be centered on your right eye, not your face.) The project I linked to above aligns the camera with your vision. It doesn't currently work for different depths of field, but I'm working on that - this relies on your image recognition being able to determine the depth of the recognized object, e.g. by knowing its physical size.
This was my experience when I tried Glass - despite knowing it wasn't AR, I still expected it to somehow enhance my experience. This isn't the case - it's merely replacing the screen in your pocket with a screen above-and-to-the-left-of-vision. I'm hoping the Glass team (or someone else) will take notice and create an AR device that can grant this experience that seems to be widely desired.
While those are novel applications and would probably work well in a repair situation, those things would be better managed by software that alerted you when things like "inspection time has been exceeded" or "pressures are out of range." Having to look at every pipe and piece manually to check its stats is error-prone at best.
"You mean we have all this stuff recorded and in a database, but I have to go look at it to get the info?"
You're confusing the use case. Of course you can already look up the data and get alerts, and every modern plant already does. But getting hands-on data when you're at the plant isn't just helpful for repairs or inspections; it's helpful for lots of things, including debugging problems and helping new employees learn about the plant and "map" the processes to the physical layout of the pipes. Nowadays people lug around papers or tablets, so this would just be an extension of that.
I had the unique experience of borrowing a pair of Google Glass for a multi-day hackathon. The very first thing I did was figure out how to develop and install actual Android apps on it, because the "card" system isn't conducive to developing interesting applications. Frankly I'd rather go with a smart watch if cards are all I could do.
During my time with Glass I prototyped 3D interactive software where you could load up models of human body parts, physical structures, etc, and swipe the side to rotate them around and view from all angles. This ran pretty well on the embedded processor, and I was importing relatively large 3D models.
As well, I tried running computer vision algorithms on the device. This did not go so well. I could not run a basic Canny edge detection algorithm at more than 1 FPS. FYI, this algorithm and similarly complex ones are fundamental to doing anything remotely useful in image processing. The alternative is doing all your image processing offline, which may be fine in the future as internet becomes super fast. Some people are doing facial recognition like this, at a whopping 2 FPS.
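For reference, here's roughly what I was benchmarking: a bare-bones Canny timing loop via the OpenCV Java bindings. The file name and thresholds are just illustrative; on-device you'd feed camera frames instead:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class CannyBenchmark {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // native OpenCV libs

        public static void main(String[] args) {
            // Stand-in for a camera frame; on Glass you'd grab frames from the camera preview.
            Mat gray = Imgcodecs.imread("frame.png", Imgcodecs.IMREAD_GRAYSCALE);
            if (gray.empty()) throw new IllegalStateException("couldn't read frame.png");
            Mat edges = new Mat();
            int frames = 30;
            long start = System.nanoTime();
            for (int i = 0; i < frames; i++) {
                Imgproc.Canny(gray, edges, 50, 150); // 50/150 hysteresis thresholds, typical starting values
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Canny: %.1f FPS%n", frames / seconds);
        }
    }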
About the product. There's the issue of the screen size. Small, hard to focus on, and up to the right. Not really conducive to augmenting your vision. I think it's fair to say that Glass was meant as an information retrieval device - which again, makes me wonder, why not just use a smart watch? Frankly I think the screen quality could be better on the watch too.
All in all, the platform is an interesting foray into wearable computing but I reckon it will take multiple iterations (and years) to get it to the point where I can run more interesting CV and AR applications. In the meantime, I've been building my own wearable computing hardware to enable the types of software I want to build. The Alan Kay quote, and all that.
Yep, via OpenCV. If I still had access to Glass I'd love to try out your library. Do you have any first-person videos of your application? If you have an iPhone 5 you can hold it up to the prism in good lighting and record first-person (it's a real pain though).
I don't have any first-person videos yet, but I should give it a go. (I don't have an iPhone 5, but I can try with my HTC One X.) I'll link to it from the project's github page.
I think the biggest problem is this swiping thing. Imagine Apple had released the first iPhone with a mouse plugged to it. It's a pure betrayal of the original Glass vision, where all user => machine communication goes through voice.
Voice-only is the hardest HCI problem left. Just consider the challenge, even for an English speaker with 18 years of experience, of handling accents from Alabama to New York, to London, Wales, Scotland, and Ireland, to Mumbai and Kerala, to Cairns, Sydney, and Auckland.
True, but in the case of Glass the speaker is always the same (it's the owner), so you can learn and adapt the acoustic model over time. I'm wondering, by the way, whether Google doesn't already do that for Android's built-in speech recognition, whose accuracy is amazing.
The other problem is that apps cannot (at least for now) change the language model, so Glass will always be in either "search" or "dictation" mode.
I really want to see glass interact with your phone so that you can perform minute actions like pinching and zooming or turning on certain camera features with your phone still in your pocket.
The touch pad on the side of the glass is multi-touch-capable, so it can detect pinch and zoom. You can't use this function via the Mirror API, but you could do it by side-loading an APK.
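A rough sketch of what that side-loaded approach could look like, feeding touchpad events into a stock ScaleGestureDetector. I'm assuming here that a side-loaded activity receives the touchpad input as standard generic MotionEvents:

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.MotionEvent;
    import android.view.ScaleGestureDetector;
    import android.widget.TextView;

    // Minimal sketch: a side-loaded activity that feeds the Glass touchpad's
    // MotionEvents into a standard ScaleGestureDetector to pick up pinch/zoom.
    public class PinchActivity extends Activity {
        private ScaleGestureDetector scaleDetector;
        private TextView label;
        private float zoom = 1.0f;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            label = new TextView(this);
            setContentView(label);
            scaleDetector = new ScaleGestureDetector(this,
                    new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                        @Override
                        public boolean onScale(ScaleGestureDetector d) {
                            zoom *= d.getScaleFactor(); // accumulate the pinch
                            label.setText(String.format("zoom: %.2f", zoom));
                            return true;
                        }
                    });
        }

        // Assumption: touchpad input arrives here as generic motion events.
        @Override
        public boolean onGenericMotionEvent(MotionEvent event) {
            return scaleDetector.onTouchEvent(event);
        }
    }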
Not even slightly. I don't think there is one last good form. The future is a variety of devices. Despite the rise of tablets, I still code using a desktop machine with multiple monitors, and I wouldn't ever trade it for a tablet.
I've never found a use for a tablet myself. My phone is fine for Kindle reading and anything else I'd use a tablet for. Past that I use my Macbook Pro Retina which is barely heavier than a tablet anyway and has a nice keyboard. Glass/watches are handy because they don't have to be taken out like a phone/tablet/laptop.
I'm counting my phone (4.25" diag) as a small tablet. I don't have a bigger one myself, but I see them, especially at parties, in the hands of children. Their ubiquity is growing.
I can see room for more devices, fitness bands, etc., but as sensors rather than media. I'm just not seeing them beat the form of the book, magazine, tablet.
I mean wrist-watches kind of overloaded their interface after time, date, and dive depth. It was technically possible to put a microfilm reader in a wrist watch, but a paper copy of National Geographic was just so much nicer.
I'd be willing to try a heads-up travel guide, but a cheap and light one in my pocket might be all I need.
When I can write the software for the tablet on the tablet itself, and get one with a beefy processor (which only fires up to max when on AC power), I might think about it. Right now, I need an ultrabook to do that.
(and no, I'm not interested in the MS Surface Pro, Windows 8 is terrifyingly poor by all accounts.)
Kind of weird he mentions the Recon Jet. You can't even remove the sunglasses part of that, so you certainly won't be reading emails casually at home on it. It doesn't seem to match his key complaints.
I don't know if anyone has asked this yet but why does it have lenses if those lenses don't actually do anything? I mean, why not just the display portion in some sort of headpiece instead of including the glass?
The lenses on Google Glass are completely removable and replaceable. Most people wear Google Glass without any lenses. There are clear and sunglass lenses you can snap in, however.
I had the option to get Glass through the #ifihadglass program, but after reading up on its abilities and more on augmented reality, this is not the device I want to spend $1.5k on.
Duh! It's a beta device, and it's priced that way so developers can start making their profitable apps on it already. It's an investment. Of course it's too expensive for daily use.
"Wearables" in general are going to be a much tougher nut to crack than tablets. Glass is, at least, radical enough to stand a chance.
I'd be less sure that a watch is enough of an improvement over taking one's phone out of one's pocket.
Also, unlike tablets, which have the obvious business use case of liberating people from the "sit down with your computer" inhibitor to interpersonal interaction, nobody knows whether or how pervasive use of wearables could improve productivity or interaction in a business setting.
You know, I bought a Pebble watch so I could help an organization I love (the Long Now) develop a Pebble watch face. I expected to hate it, as I hate most gadgetry.
I've ended up loving it. One of the most annoying things to me about a modern phone is its intrusiveness. Text messages, calendar alerts, phone calls, and other interruptive communications. The Pebble makes all of those notifications a) quiet, and b) subtle. It's fantastic to be in a meeting, see who's calling, and decline the call, all without breaking flow.
I think it will also be good for a variety of special-purpose interfaces. E.g., one person already has built custom software such that when he starts moving on his bike (as triggered by phone-measured movement speed) it switches over to a bike computer display.
And the nice part is that it looks just like a watch. Unlike Glass, people are entirely ok with it.