Build your own Google Glass (ieee.org)
77 points by dave1010uk on Jan 1, 2013 | 56 comments



When you can remember everything with 100% perfection there is no distinction anymore between the food you're eating and the food you ate 20 years ago. Between the people you're with and the people you knew long ago. And so on.

In one of Aldous Huxley's books (island) there is the Minah bird, whose main role in life is to remind people to live 'here and now'. This is the opposite of that. Either you'll end up re-living the past all the time or you will end up not using it.

Either way I don't see the benefit. I'd rather live here and now and eventually forget than to be addicted to my own past, my memory is more than good enough as it is.

The one place where this sort of thing could be a real plus is to help treat people with deficiencies in this area.


> The one place where this sort of thing could be a real plus is to help treat people with deficiencies in this area.

Yes, some research shows that reviewing the things an Alzheimer's patient did that day, along with other pertinent facts, slows the degeneration. I've long wondered whether memory loss behaves similarly to the ordinary forgetting of facts, whether you could use training similar to spaced repetition to combat it long-term, or whether such review only works in the early stages.

For the last several years, I've been considering that the greatest beneficiaries of a perfect digital memory may not be ourselves at all, but our descendants: "I am also aware that I will not care about every minute of footage in 20 years. The problem is, I don't have the foggiest idea which minutes I'll care about, and I am not ready to let go of any of it yet." http://web.archive.org/web/20100529070919/http://diveintomar...

You have no idea what your children will find interesting or important, and to presume to edit your life down or to not record it at all, while entirely within your right, might perhaps be considered negligent in the future. We all have crazy uncles and adventuring grandfathers whose stories we have only heard fragments of, which we'd love to know more about. Wearable computing and lifelogging may be an answer to that, a way to enable that sort of generational storytelling.


This is my life and I get to do with it what I want. If my descendants want to have interesting lives they'll have to go and make their own, not piggy-back on mine.

And if they truly believe I've been negligent they're welcome to pass their inheritance to their siblings.


In the PDF deck I linked to about my work, I describe storytelling like this:

Storytelling is sensemaking and placemaking of the past (genealogy), the present (diaries and journals), and the future (personal [digital] archives).

At PDA2012, there had to have been half a dozen personal storytelling startups, most with a genealogy hook.

You have all the time in the world to scan photos, to write in your diary, but you're running out of time to ask your grandmother how she got that scar, or your grandfather why his hair turned white in the war.

Perhaps you've never been curious about your forebears, but to discount the possibility entirely out of fear that your children's children will enjoy pop-pop's stories so much they'll forget to live their own lives seems churlish to me.


I've read my grandmother's diary. It was extremely interesting, especially because I got a first-person account of World War II and the way it affected her and the rest of the family.

But I don't feel like I have some kind of right to know anything beyond what she chose to tell, and I hope my kids will have the good grace to look at me in a similar way. By the looks of it, that won't be a problem.

It's one thing to be interested in one's forebears, another to be obsessed by it. When my mom went all out on some genealogy site and put in all of my details, as well as those of my spouse, kids and so on, we asked her to please stop doing that.

Besides the obvious privacy angles, I don't think kids who can't stand up for their rights should be included in the documentation that others place online; by the same token, my kids and their descendants don't have an automatic right to any level of detail about me.

I remember that one of the dictators of NK (Kim Jong-il, iirc) had a film crew following him, making a documentary about his life. The guy was batshit insane, and I feel that anybody who wants to document their life to that extent has an overgrown sense of self-importance.


Thanks for the detailed response!

I think this is a great illustration of the privacy (and thus related legal) issues that simply aren't being addressed in the research and commercialization efforts thus far. We saw a little of this with Instagram's photo map: people's locations were being "outed" because they were tagged in someone else's photos and that person chose to make their locations public. It's going to get more complicated before it makes more sense.


It might be useful when the person recording his life is Einstein, or Rockefeller.


Most people you'd want to know stuff about later on are too busy to spend much time on 'self recording'. But I'm 100% sure you'll be able to look up what Aunt Gertha ate on the night of the 22nd of October 2017 and whose dog was briefly lost that evening.


Actually, I was thinking of a time when such a thing might become automatic.

Remember when your uncle would lecture you on the benefits of keeping an account of where you spent your pocket money? Well, that's now done automatically because of credit cards.


I'm actually interested in the "augmented reality" aspect of this (as opposed to seeing new e-mail notifications, video calls, location info, and whatever).

Imagine being able to create content only visible through an app's camera or these kinds of glasses. Like "Second Life", but in the real world.

"Placing" notes, pictures, videos for your friends or public at certain places; 2/3D objects like arrows pointing at that awesome coffee shop.

I actually thought hard about implementing a prototype (for Android), but my current skills are under par.


I've been re-reading William Gibson novels in order, currently on All Tomorrow's Parties.

Anyway, when I re-read Idoru, I wondered why we don't really have anything like its proto-cyberspace, with its Walled City, a distributed common virtual world. I can see why the matrix of Neuromancer doesn't really make sense, but having played enough MMOs in the intervening years, I'm a little surprised Walled City or something like Snow Crash's Metaverse isn't a thing.

It seems like if the infrastructure for it existed, the Reddit crowd et al would build it out in no time.

We've got half-assed things like Second Life, and I think a lot of Minecraft's appeal is from that same strain. But nothing that really works properly.

It just seems like something that would actually happen eventually; it seems like the demand is there.


I see. In theory, like any idea, it sounds cool. I've actually taken real-world pictures around my building to mock up a possible UI sometime.

The issue is that, as far as I know, geolocation (GPS at least) has ±10-meter precision, so placing content at a specific spot is fuzzy.

PS: By placing content at a specific spot, I mean grabbing the latitude/longitude/height and possibly direction from the device and allowing anyone around it to see it by pointing a device (smartphone/tablet/glasses) there.

Edit: Of course, the precision is really only an issue depending on what you're "tagging". If you're adding stuff to a town square, ±10 meters doesn't make any difference :-)
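
To make the fuzziness concrete, here's a minimal sketch of the proximity check such an app would need (my own illustration, not from any real product; placements are assumed to be dicts with "lat"/"lon" keys): pad the search radius by the GPS error, so a note "placed" at a spot still turns up even when either fix is off by ~10 meters.

    import math

    EARTH_RADIUS_M = 6371000

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two (lat, lon) points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def nearby(placements, lat, lon, radius_m=50, gps_error_m=10):
        # Pad the radius by the GPS error so off-by-10m fixes still match.
        limit = radius_m + gps_error_m
        return [p for p in placements
                if haversine_m(p["lat"], p["lon"], lat, lon) <= limit]

The town-square case is just a bigger radius_m; the hard part is anything smaller than the error circle.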


There's an iOS app called "Minecraft Reality" which uses camera tracking to put models into physical spaces. It's not seamless by any stretch (you manually download each model from a list of nearby placements before viewing), and it loses sync if the scene is changed too much, but it does a pretty good job of accurately placing the objects. I imagine that taking advantage of 3D sensors like the Leap Motion could maybe make it a lot more robust.


For what it's worth, there are many, many smartphone apps that enable this, both using augmented reality and not, and many other research-level prototypes in the literature.

If you've never heard of any of them, perhaps that's because in practice it's not as great an idea as it sounds.


Good to know, thanks! (One less piece of vaporware I have to worry about.)


> Instead, the greatest value will be in second-generation applications that provide total recall and augmented cognition. Imagine being able to call up (and share) everything you have ever seen, or read the transcripts for every conversation you ever had, alongside the names and faces of everyone you ever met. Imagine having supplemental contextual information relayed to you automatically so you could win any argument or impress your date.

This is the common wearable computing utility argument, but in practice, it doesn't seem to pan out.

Gordon Bell digitized much of his life, and everything for the past ten or twenty years. Phone calls, emails, a photo every sixty seconds, more when his heart rate increased. He hardly ever went back to it. Revisiting it was so rare as to be a notable event in and of itself. Rather, he found that people with whom he'd had conversations would come to him and use him as a reference library.

Bradley Rhodes' Remembrance Agent was an Emacs tool which actively indexed and cross-referenced whatever you were typing against things you had written before. He's working on Glass now, AFAIK: http://www.remem.org/ It isn't a generally useful solution because it requires that you live inside Emacs. Today you'd need something that could index across multiple devices, multiple independent cloud storage systems, multiple independent accounts, etc.; or something locked into a single ecosystem where you lived entirely, but then you'd develop "blind spots" for things that occurred elsewhere, like how you stop hanging out with certain friends because the only way they communicate with you is via Facebook and you've turned off all of Facebook's notifications.

There was an essay a long time ago, a writer had been filing every link and every note into a pre-Evernote piece of notetaking/hyperlinking/PIM software, something with an X or a Z in the name, but I can't recall it or the piece. The essay was about an article he was writing during which the software brought up a saved article and a note he had written, and forgotten about, in a creepily timely and seemingly prescient moment. He wondered at what point the software needs to get credit for providing the research and the associations.

"Forgetting" is a key part of human existence that most wearable applications tend to ignore. "You were last here with [your ex]" says Foursquare. "You haven't talked with [your ex] in a while, make her day by leaving a message on her wall" says Facebook. Using someone's precise words against them can be emotionally cruel. Legally, you're expected to forget all of the specific details of things like trade secrets and company processes when you leave a job; how is that to be reconciled with your perfect, digital memory? None of these things are being actively explored.

We already hit the "second generation" of applications as described by the author; it's the third generation that interests me.


Your post actually fires up quite a lot of my neurons. I'm thinking about a perfect speech-to-text system which transcribes whatever you say into plain text files. Maybe it'll generate a new file every ten minutes, every half hour, or possibly even every six hours. That'll leave you with tons of text covering everything you said. Now integrate that with some kind of keylogger on all your devices (computer, tablet, and phone) which can track states and applications. Also, by virtue of having a current-generation smartphone, you can tag locations and timestamps, at least to cell-tower-triangulation accuracy if not GPS accuracy.

This would run transparently to the user. At the end of the day, he'll have plain text files that can be searched. Implementing a Google-like algorithm modified to work on flat files would be great. It'd basically be like the Remembrance Agent, but more advanced.

I don't know how I'd feel about using such a system, but I'm reasonably sure that we're on the way there indirectly. Heck, most of us voluntarily not only record our lives but also post them publicly. What's wrong with recording all of our lives (right from that web search on the tablet to what you're ordering at the restaurant whose name you forgot, but which you visited the day right after your best friend got married) if it's going to make life easier? As it is, I've noticed personally that I have stopped storing information in my head unless I use it regularly; instead, I store pointers to the location of the information, like whether I did a search (if yes, then what the keywords were and what the result's position was), or whether I looked it up on a forum or maybe even a real book.

/thinking-out-loud
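
For what it's worth, the "flat files plus search" half of this is buildable today. A toy sketch of what I have in mind (pure assumption on my part, with hypothetical transcript file names like 2013-01-01T0930.txt): an inverted index over timestamped text files, ranked by crude term frequency.

    import collections
    import os
    import re

    def build_index(directory):
        # Map each word to a Counter of {filename: occurrences}.
        index = collections.defaultdict(collections.Counter)
        for name in os.listdir(directory):
            if not name.endswith(".txt"):
                continue
            with open(os.path.join(directory, name)) as f:
                for word in re.findall(r"[a-z']+", f.read().lower()):
                    index[word][name] += 1
        return index

    def search(index, query):
        # Files containing every query term, best matches first.
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(set(index[t]) for t in terms))
        scores = {name: sum(index[t][name] for t in terms) for name in hits}
        return sorted(scores, key=scores.get, reverse=True)

    # e.g. search(build_index("transcripts/"), "restaurant wedding")

A real "Google-like" version would layer smarter ranking on top, but plain term frequency would probably get you pretty far on a personal archive.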


> That'll leave you with tons of text covering everything you said. Now integrate that with some kind of keylogger on all your devices (computer, tablet, and phone) which can track states and applications. Also, by virtue of having a current-generation smartphone, you can tag locations and timestamps, at least to cell-tower-triangulation accuracy if not GPS accuracy.

All of this is possible now, and the lifelogging community has been doing it for years and years. "Why" you do it is still an unanswered question. The usefulness hasn't been proven out.

Over the past year, I've come to feel that perhaps recording every word isn't as important as recording the context. All the words you said are there, but the context is lost. In context, that was a really funny joke, but the words, by themselves, were rude, or cruel, or sexist, or racist. It doesn't record everyone around you laughing, or how good you felt, or that it was a really good day as far as your depression management was concerned, or that that was the last time you saw X before his accident, and when you do go back and reminisce, those are the important things, not the precise words you used.

As of yet, lifelogging hasn't saved the meaning.


I guess writing is still a poor medium for recording the whole spectrum of human emotions. Until we find a way to do that, context will necessarily be lost.

If only there were a way to make a person relive the same "memory" or "experience", like a Pensieve [1].

[1]:http://en.wikipedia.org/wiki/Pensieve#Pensieve


Why stop there? Instead of saying "until we [someone else] find a way to do that," why not explore what that would take?

What does the research tell us about how we feel? How is that different across genders? Across personality types? Across cultures?

How can we represent that? What sort of accuracy matters? Should it be relative or absolute? Does it matter? What will we be using it for?

How can we record it? How much can we trust self-analysis, like diaries or mood questions? How much can we trust biometrics? Is that dependent per-person or can we generalize?

What sort of biometrics would we need? How long would we have to wear them? Where? Power? Fashion? What about on the beach: can they survive sand and salt water and 110 °F weather? What about military use: can they survive high-pressure sand and persistent sweat, and the very different bodily reactions that someone in a firefight goes through compared to a soccer mom?

What is the goal of an emotional recollection? To understand it yourself? To remind yourself? To reminisce? To convey your emotions to another? To allow another to find something similar in their own recorded memories so they can empathize better?

Don't stop at fiction or wait for someone else to make a web service for it. This is something that can be designed and built today. This is something that could have been designed and built a decade ago.


> There was an essay a long time ago, a writer had been filing every link and every note into a pre-Evernote piece of notetaking/hyperlinking/PIM software, something with an X or a Z in the name, but I can't recall it or the piece. The essay was about an article he was writing during which the software brought up a saved article and a note he had written, and forgotten about, in a creepily timely and seemingly prescient moment. He wondered at what point the software needs to get credit for providing the research and the associations.

Would life-logging help us find this essay or would the essay still be tricky to re-discover?

(Frustratingly, Wikipedia "Lists of X" is in fact "Lists of X where each item on the list must have a Wikipedia article". That's probably great in some respects, but it makes for a lousy comprehensive list.)


Oh, I'm sure I have it saved somewhere, but your point is apt: if all you remember is the general concept, but none of the key phrases used, nor a specific enough timeframe, when you've been saving things about wearable computing for fifteen years (as I have), how do you find it? I don't know that that's a solved problem yet.

At least, it would only be fifteen years of personal saved items to go through, instead of the entire internet.


I wouldn't be surprised if something akin to the Remembrance Agent ends up on Glass. For his PhD thesis, Brad generalized the ideas of the Remembrance Agent to the broader notion of JITIR: Just-In-Time Information Retrieval. He built a web sidebar in that work, but that probably isn't the right app either. However, the idea of doing a continuous search based on what you're doing would fit well. Before, it was just searching your own stuff, but now they'd hook in more generally to Google's search. Couple that with something Siri-like, where the speech recognition is fed as input to a proactive search.

I'm still not sure this is a compelling application for a wide audience, but for several of the early everyday users of wearable computers (including myself) it had a lot of appeal.


I had forgotten about what ended up in his thesis. It's linked to from the Remembrance Agent site if anyone else is interested: http://www.bradleyrhodes.com/Papers/rhodes-phd-JITIR.pdf (PDF)

"Implicit query" is another term for it. On the desktop side, one of the most realized solutions was the Dashboard prototype: http://nat.org/dashboard/

I've long argued that designing multi-modal interfaces is where applications need to go. We're seeing this now with desktop, web and mobile apps all running against the same APIs, but what it really means is you're more likely to be able to also design a UI that's really an audio-only interface, whispering to you continuously via a Bluetooth headset, or a wrist-mounted UI, or... etc.

It also means you can design and build something useful for desktop users, and then mobile users, and then scaled even further down for wearable users. Real-time continuous Google searching as a desktop sidebar, or rendered on an iPad or iPod Touch display linked to desktop input as a second screen, would be a start.


> There was an essay a long time ago, a writer had been filing every link and every note into a pre-Evernote piece of notetaking/hyperlinking/PIM software, something with an X or a Z in the name, but I can't recall it or the piece. The essay was about an article he was writing during which the software brought up a saved article and a note he had written, and forgotten about, in a creepily timely and seemingly prescient moment. He wondered at what point the software needs to get credit for providing the research and the associations.

Was it Steven Johnson describing how he uses DevonThink (http://www.nytimes.com/2005/01/30/books/review/30JOHNSON.htm...)?


Yes! Great!

I notice there's no X or Z in DEVONthink, but there is a V.

I also notice the author of the custom software he used before that was Pinboard's Maciej Ceglowski. Fascinating!


Excellent comment! But I take issue with a few points.

I think that recording everything is almost useless without a good filter (approaching AI levels). The real power, I think, comes in recall and referencing. Just as an example, I've been reading "The Stars My Destination" and two words came up that I didn't recognize (epileptoid and asthenic - funny, FF doesn't recognize them either!), and it would have been handy to just look them up quickly.

As to "forgetting", people already abuse current mnemonic devices (such as photos) to "hold on" too long, and using people's words against them is a problem with or without perfect recall.

For trade secrets, etc., you do what any high-security facility does: recording/transmitting devices get left at the door, outside, and you simply accept the crippling that comes from the lack of instant access to information.

Also, what's wrong with living inside Emacs? ;P

Edit to followup: Works of fiction I've found interesting in this vein include "The Final Cut" (even with the poor execution) and "Strange Days"; anyone have any recommendations along these lines?


> using people's words against them is a problem with or without perfect recall

Yes, it is! But one thing I've learned in managing online communities is that you don't give people the ability to do something you don't want them to do. I don't feel having perfect electronic recall should be a feature without more context sensitivity.

> For trade secrets, etc, you do what any high security facility does

Except for the Steve Mann argument: it's a prosthesis. When you take the hardware away from him, you're disabling him. He becomes disoriented and cannot function as well. It's like taking away a wheelchair, or crutches, or a hearing aid. And when more and more of your potential employees rely on these devices 18 hours a day, more and more will be negatively impacted. No, I think it's something that has to be fixed in the legal system or fixed in the entire design of a wearable system.

> Works of fiction I've found interesting in this vein

The novel "Snow Crash" has wearable computing used only by a subset of wired individuals derisively called "gargoyles." Everyone else still uses workstations.

The "Old Man's War" series of books has soldiers with embedded computers, including a subset of soldiers who have them embedded since birth.

The novel "Permanence" also has them, but used for pervasive IP rights enforcement.

The novel "Signal to Noise" has an elite intellectual class with implants that work with pervasive wireless sensors and fully immersive workstations.

The film "Stranger Than Fiction" has AR-style overlays at least during the opening sequences.

The short film "Sight" touches on AR and behavioral control a bit: http://vimeo.com/46304267

This kottke.org piece covers a few other shorts: http://kottke.org/12/04/the-real-google-glasses


Psychohistorical Crisis by Donald Kingsbury: imagine a wearable computer, brain-interfaced, that you grow up with. Now imagine it being taken away and destroyed, and having to relearn to read with your eyes and animal brain. A non-authorized sequel to Asimov's Foundation series. Better (imho) than the original.


Thank you! Do I have to have read Foundation to understand it, or does it stand alone?


If you've read the Foundation trilogy, you'll have some more context, but if you're not too fazed by highly science fictional settings (i.e. you've read a fair amount of science fiction) you should be OK.


Are photos really mnemonic devices? I mean, you could certainly have a visual mnemonic device, but a simple photo is just a photo. A mnemonic is usually some kind of trick...

I can't say you're wrong, but I can't say I'm convinced either...


If you'll excuse the analogy, the mental picture that came to mind was someone pining over their ex and taking longer to get over breaking up with them because they keep pictures of them around.


As far as fiction that's similar goes, check out Black Mirror:

http://en.wikipedia.org/wiki/Black_Mirror_(TV_series)#3._.22...


> Gordon Bell digitized much of his life, and everything for the past ten or twenty years. Phone calls, emails, a photo every sixty seconds, more when his heart rate increased. He hardly ever went back to it. Revisiting it was so rare as to be a notable event in and of itself.

I think the most useful aspect of large digital archives will emerge when natural language processing and image recognition get good enough to automatically distil the archive into actual information (rather than just data), with no manual reviewing ever.

This would be along the lines of being able to ask "What did Bob say about X?" or "Find things that correlate with weight loss" and get meaningful answers (with summaries and so on, as appropriate).

(However, this seems mighty similar to saying "sufficiently smart compiler" http://c2.com/cgi/wiki?SufficientlySmartCompiler.)


I call it "automated storytelling with post-hoc computational narratives." Old, big PDF: http://s3.amazonaws.com/vitorio/Automated%20Storytelling%20M...

The thing about it is that I don't think the stories the software generates have to be precisely true. I don't even think they have to be internally consistent over the long term. Look at the data, generate a "good enough" story that helps push you toward your goal, and revise it later. Personal historical revisionism. In the long term, you won't remember the details, all you'll know is that your mood is generally better and you're two inches slimmer in your waist, so, what does it matter how it happened? The computer is looking out for you, that will be all that matters.


It'll help a lot when I can search it from my computer:

in:conversations, timeframe: 201x-11, location:office, present:Kalle Anka.

Fast forward 90 seconds and I'm proving he ACTUALLY told me to do exactly that.
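
A toy sketch of how a query string like that might get parsed and applied (the field names and record layout are made up for illustration; no real lifelogging product works this way that I know of):

    def parse_query(q):
        # Split "key:value, key:value" pairs into a filter dict.
        filters = {}
        for part in q.split(","):
            key, _, value = part.partition(":")
            filters[key.strip()] = value.strip()
        return filters

    def matches(record, filters):
        # True if every filter value appears in the record's field.
        return all(value.lower() in str(record.get(key, "")).lower()
                   for key, value in filters.items())

    lifelog = [  # hypothetical store of indexed recordings
        {"in": "conversations", "timeframe": "2012-11", "location": "office",
         "present": "Kalle Anka", "text": "...do exactly that..."},
    ]
    query = parse_query("in:conversations, timeframe:2012-11, "
                        "location:office, present:Kalle Anka")
    hits = [r for r in lifelog if matches(r, query)]

The 90 seconds of fast-forwarding is the easy part; getting "present:Kalle Anka" populated reliably is the hard one.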


And then he'll say, that's not what I meant, you're being pedantic, you're addressing individual points instead of the whole issue holistically.

Or maybe it's a personal fight, and you're throwing his words back in his face, and now he realizes he can't trust you with his feelings because you always take him so literally. He's sorry he's not a poet, that he's only human and has trouble expressing his emotions sometimes, but this just isn't going to work between us.

The only person who's ever looked bad when I've used someone's words against them is me.


There is a difference between using someone's verbatim statements when they were made in an emotionally heightened state of mind (e.g. in a fight), and using them to vindicate oneself when blame is being assigned, e.g. in a workplace. There is a reason the old adage of "give it to me in writing" exists.

Also, even though people may say the wrong things when they are emotional, a repeated pattern of them expressing those sentiments can reveal their real disposition. In this way, a record of what is said, word for word, can be useful to uncover someone's real thoughts. I already do this with my brain, but it is easier when it's in searchable, dated text.


Mapping has traditionally focused not on the past, but on the future; specifically, on considering courses of action. The Olympian perspective maps provide allows people to account for far more information than they can derive from their immediate surroundings in that particular moment. Having visualized a potential course of action, maps then lead back to the here and now, and indicate where the first step should fall. In other words, they're about identifying the optimal path from this moment into a future moment that is preferable to others.

In carrying out this process, there have always been four limiting factors. The first results from the precision and accuracy of the map itself. The second results from how well people can locate themselves on the map, and accordingly, how much confidence they can place in the plans they derive from it. The third has to do with how swiftly people can toggle back and forth between the cartographic view (outside the bubble looking in) and the ground view (inside the bubble looking out). For example, 18th-century mariners using the lunar-distance method for determining longitude at sea might need hours to gather raw information from the relative positions of the horizon, stars, and moon, then run the calculations to correctly position themselves on a map in order to determine the precise compass bearing they should follow, which is something we can now derive in realtime. The final limit has to do with the kinds of information that can be mapped, and how swiftly it can be refreshed. Once, we could only map coastlines. Now we can map the clouds above them. As the extent of the mappable expands, so does the range of factors that can guide our plans for the future.

As far as humans go, the first three factors have absolute theoretical limits, while the fourth is theoretically unlimited. Google Glass represents a development in which all three theoretical limits are reached simultaneously, while the fourth has the lid taken off. Thanks to our survey instruments, we can expect to map the entire globe with millimeter precision, and to locate any object with similar precision. Tools like Glass provide a realtime overlay of information that could, at one stage, only be gotten by consulting a map and plotting a course. And with billions of sensors on Earth and in space feeding data to enormously powerful processing centers, the range of inputs for cartographic overlays just gets bigger and bigger.

I've become firmly convinced that our arrival at this point represents a seminal moment in human development, one that will stand out in the history of our species for centuries, if not millennia, to come.


As someone with a degree in geography, I find this a bit hyperbolic, and a bit of a one-sided, urban, first-world view of cartography: one that might be nice for advertisers in big cities with pervasive 4G or wifi, but will be ignored everywhere else.

Mapping is traditionally about sensemaking and placemaking. Precision and accuracy are two different things, and both are entirely context dependent. There's a famous book, How to Lie with Maps, all about it. Maps are how we take things from others by drawing a line in a different place, or how we explore boundaries by drawing what we know and wondering what's everywhere else. They are stories on paper that sometimes have legal or royal or personal meaning, but they are always still stories. Geography is a social science, and cartography is as much art as it is evidence.

In Sketching User Experiences, Bill Buxton tells a story about maps, where your Google Glass example breaks down:

> Imagine that you were kayaking off the coast of Greenland, and needed a chart to find your way. You might have a paper chart, but you will probably have trouble unfolding it with your mittens on, and in any case, it will probably get soaked in the process and become unreadable. From the urban perspective, an alternative solution might be to go to your PC and use a mapping program on the internet... However, there is a minor problem here, too. You don't have your PC with you in the arctic, much less in your kayak. We all know about internet-enabled cell phones and PDAs--they might provide another alternative. Why not jump on the internet using your cell phone, and get the map that way?

> But here is the problem. You probably can't get cellular service where you are in your kayak. And even if you can, your battery is probably dead because it is so cold. Or, your phone won't work because it is wet. Even if your mobile phone does work, and you have service, you probably can't operate it because you can't do so without taking your mittens off, and it is too cold to do so.

> Now let's look at a third approach, one that the Inuit have used... This shows two tactile maps of the coastline, carved out of wood. They can be carried inside your mittens, so your hands stay warm. They have infinite battery life, and can be read, even in the six months of the year that it is dark. And, if they are accidentally dropped in the water, they float. What you and I might see as a stick, for the Inuit can be an elegant design solution that is appropriate for their particular environment.

There are entire cultures where your Euro-centric view of mapping does not compute, so much so that researchers at my alma mater would go into jungles and teach third-world tribes how the usurping white man views their world, so they could make maps through which you, the outsider, can understand what they had previously only expressed emotionally. There are entire continents where your "Mirror World" cannot accurately represent anything, and I feel that puts your seminal moment much further away than you imagine.


"There are entire continents where your "Mirror World" cannot accurately represent anything, and I feel that puts your seminal moment much further away than you imagine."

Really? Entire continents? Nothing whatsoever? Not even a coastline?

For what it's worth, I am acutely aware of non-western cartographic traditions. One of my own teachers is among the world's foremost experts in Chinese mapping, which (as you know) is astonishingly different from its European counterpart. I have also studied the cartographic traditions of the South Pacific, India, and Arabia, all of which combine different interests with different ways of thinking about them, and have resulted in wildly distinct senses of the world.

None of this changes the fact that the model now proliferating with GPS-enabled smartphones is, at heart, the Western one. And if you really think these things are limited to the rich world, you are simply unaware of what's actually happening all around us. By 2016 there will be more of these things on the planet than people, and not because everyone in the rich world owns half a dozen.

Whether or not the perspectives developed elsewhere can find their way into this framework is the thing I find fascinating. Even if they don't, the sheer scale of the economic changes wrought by the presently unfolding geospatial revolution will secure this moment's place in history for ages to come. That's the thing about studying the world's cartographic traditions: you reach a point where you can recognize a big deal when you see one.


> Really? Entire continents? Nothing whatsoever? Not even a coastline?

As I said, accuracy is context-dependent. Satellites in space can tell you that sacred temple moves every time it is rebuilt, but everyone on the ground will tell you it has been the same temple in the same place for a thousand years. Which is accurate? Which matters? To whom? For what purpose?

> Whether or not the perspectives developed elsewhere can find their way into this framework is the thing I find fascinating.

To say that GPS and Western conceits about mapping are going to take over the world whether cultures with different concepts of place and time like it or not -- they can adapt or not -- has a lot of manifest destiny behind it, and I find it pretty professionally offensive. I think we lose a lot when you're expected to make a map so the souls of the dead can find their way using ArcGIS.


You, sir, have raised pedantry to a level I've never seen before on HN. But I'm not afraid to be servicy, so let me retroactively preface my remarks with the qualification that they apply only to members of the living, and not to members of the dead and/or other residents of any spirit world currently known or waiting to be discovered, in this universe or any other.

Separately, I don't know why you're interpreting remarks about people around the world buying tools that improve their lots in life as "manifest destiny," which was a religious justification for the genocide of Native Americans in the 19th century. For a guy so concerned about the meanings of precision and accuracy, this interpretation is hilariously devoid of either.

But please, feel free to elaborate.


Pedantry? No, geography. A traditional use of maps is for the living to draw the locations of emotional significance to the deceased (the maps would then be posted in their tomb). Chinese and Japanese maps were both more concerned with political and emotional importance than with absolute scale.

So it is with many tribal cultures. Map-making as Westerners understand it may be completely foreign. Certainly it was for the Native Americans, and you see maps in tribal cultures reference landmarks and geographies that you simply can't see or understand without a deep understanding of the flora and fauna, with (what we would call) "distortions" applied to emphasize danger, or emotional meaning, or cultural import.

Except they're not distortions. They're accurate to the user of the map. They're just not accurate to your GPS system. Absolute scale is the only thing GPS is concerned with, but being able to define anything absolutely is an incredibly modern thing.

I don't see it as "people around the world who are buying tools that improve their lots in life." I think you are expressing a very imperialist mindset with the words you are using. A GPS system which cannot innately apply emotional or cultural distortions to match the worldview of the user is, by that very nature, imposing the worldview of the GPS creator on it.

So, yes, I think manifest destiny is exactly what you're advocating for in your comments here.


Did it ever occur to you that cartographic paradigms are like languages? In the same way you can learn French without giving up Italian, did you know you can learn to see the world through a new lens without abandoning one that's very different?

I'm asking because you note that "Chinese and Japanese maps both were more concerned with political and emotional importance than absolute scale." Like I said before, this isn't news to me. I've spent a fair amount of time with traditional Chinese cartography. I understand (and greatly appreciate) how fluidly it's related to Chinese poetry and literature, incorporating them in ways that are largely alien to the western map-making tradition.

But I'm also aware that China is launching its own GPS system, and that people in China do, in fact, use GPS to find their way around. Indeed, they're investing in it heavily. I'm sorry, but "choosing to invest in a technology" is just not the same as "being murdered en masse by foreigners claiming divine justification."


While I'm sure there are lots of people who use geo-aware devices to improve their lives, being both a geographer by education and a UX designer by profession means I'm also sure lots of them would prefer to use a geo-aware device which takes their cultural preferences into account.

While a culture doesn't "have" to abandon its old ways, when a militarily superior presence arrives and says "we own all of this now because this GPS-made map says so" and you have no way of representing your territorial boundaries except in terms of "that sacred land on the third loop of this animal's migratory trail" and "twelve generations of reconstructing this holy temple in the same spot which moves because it's on the shore of a river that changes course and width and breadth constantly," and your kids who were previously happy hunting in the forest by day and smoking around a campfire by night now want guns and jeans and Nintendos because they're novel, it's hard to reconcile the two. Now you have to figure out French without anyone wanting to teach it to you, because then they don't get to pave over your Italian land if you figure out how to explain that it's yours in French, and also stop your kids from becoming indentured French servants just to spite you. It's a technology that's been forced upon you; you haven't been given the choice to "invest" in it on your own terms.

These are real things that happen. This is not a contrived example. This is what geographers at my alma mater dealt with, teaching non-first-world peoples how to use GPS against encroaching developers and governments, and how to translate between their native cultural "distortions" and the emotionlessness, meaninglessness, absolutism of GPS and GIS.


Right, that's about as many reversals as I can handle in one thread. Good luck with the job hunt.


It might be instructive to get the, er, perspective of someone who has used augmented vision for several years already. http://spectrum.ieee.org/consumer-electronics/gadgets/why-sm...


Can't wait to see all the Google Glassers stumbling around, addled by pop-up advertisements.


Just like all those pop-up advertisements Google has put elsewhere, correct?


Gmail is pretty infested with ads.


I can't wait to rewire Google Glasses and run the entire thing off of a local server.


Good god I'm so tired of this stupid inaccurate meme implying that Android comes with ads. It is the stupidest thing I continue to see here.


Does Google Glass consist of only a microdisplay? I assumed the technology was a little more sophisticated than that.


Here is what is currently known about Glass:

- Transparent microdisplay, front-facing camera, touch-sensitive controls, audio I/O, internals "equivalent to a Galaxy Nexus", IMU, WiFi



