We have a robot similar to this at work. It's broken right now - maybe the battery is dead, maybe there's a software issue - and nobody's particularly interested in fixing it, because it never worked too well.
For one, on our model, there's zero peripheral vision. This makes it really hard to orient yourself in space - how far away is that wall? Did I scoot far enough past that doorway to swivel 90° to the right and go through it? Am I about to hit someone?
The same problem occurs when you're trying to talk to people. You can really only see two people who are standing in front of you. I'm not even sure if there's stereo sound - as there appears to be on the Snowbot, since he swivels towards people who are addressing him.
Other minor difficulties:
* it was difficult to park it in its charging station, which meant that sometimes it would just go comatose in a hallway.
* there's still a fair amount of lag, which makes it slightly annoying to have a conversation. Dialing someone over Skype/Hipchat/whatever-video is almost always the better option.
* sometimes you'd remote in, and be in an unfamiliar location because the robot wasn't returned to its parking spot. Since the whole point is to be able to access remote locations, we typically weren't familiar with the office layout, and would have to wander around aimlessly trying to get someone to help us get to where we needed to go.
All of these issues sound like they should be fairly easy to resolve with the tech.
The unit should have an internal map of the company and know where each employee's desk is; you shouldn't have to manually drive it around an unknown office, that's just crazy! I'm sure the fact that it doesn't is a result of it still being in the novelty phase, but I would think the industry might get serious about this technology as teams become more and more distributed. Anything that simulates the effect of presence is going to be very valuable and instrumental to the communication of the future distributed workforce. I think VR/AR will ultimately take over this space, but these roving camera/screens might be a good interim solution.
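The "internal map" idea doesn't need anything exotic; a minimal sketch, assuming a hypothetical grid floor plan and a desk registry (the map, desk names, and coordinates here are all made up), could be a breadth-first search from the robot's position to a named employee's desk:

```python
from collections import deque

# Hypothetical office floor plan: '.' = open floor, '#' = wall/furniture.
OFFICE = [
    "....#....",
    ".##.#.##.",
    ".........",
    ".##...##.",
    ".........",
]
DESKS = {"alice": (0, 0), "bob": (4, 8)}  # employee -> (row, col)

def route_to_desk(start, employee):
    """BFS shortest path from `start` to an employee's desk, or None."""
    goal = DESKS[employee]
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(OFFICE) and 0 <= nc < len(OFFICE[0])
                    and OFFICE[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # desk unreachable from here
```

With something like this, "drive to Bob's desk" becomes one click instead of a manual joystick trek through unfamiliar hallways.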
The camera on the bot should switch to wide-angle / fisheye view when it moves, then switch to a regular lens for faces when it's still / just rotating in position.
The wide angle / fisheye view would provide a more typical FPS-like frame of reference, while the 20-35mm view makes faces appear realistic.
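The switching logic itself would be trivial; a sketch of the idea (the function name, speed threshold, and mode labels are all hypothetical):

```python
# Pick the camera mode from the robot's motion state: wide/fisheye while
# translating, normal lens when still or only rotating in place.
DRIVE_SPEED_THRESHOLD = 0.05   # m/s: below this, treat the base as "still"

def pick_camera_mode(linear_speed, angular_speed):
    """Return which lens/view the operator should see."""
    if abs(linear_speed) > DRIVE_SPEED_THRESHOLD:
        return "fisheye"       # FPS-like frame of reference for driving
    return "standard"          # 20-35mm-equivalent view for faces
```

Note that angular speed deliberately doesn't trigger the switch: rotating in place to face someone should keep the face-friendly lens.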
That was my first thought as well. The main problem seems like something a tiny piece of curved glass could solve. If it could get an almost 180-degree FOV, you could pan in software on the client end.
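Client-side panning over a wide frame is just a moving crop. A minimal sketch, assuming a hypothetical 180-degree-wide frame and a 60-degree virtual camera (the resolution and FOV numbers are made up):

```python
# Map a user's pan angle to the pixel window to crop from the wide frame.
FRAME_WIDTH = 1800          # pixels covering 180 degrees -> 10 px per degree
VIEW_FOV_DEG = 60           # width of the virtual camera, in degrees

def crop_window(pan_deg):
    """Return (left, right) pixel columns for a pan angle in [-90, 90]."""
    px_per_deg = FRAME_WIDTH / 180.0
    center = (pan_deg + 90.0) * px_per_deg       # -90..90 -> 0..FRAME_WIDTH
    half = (VIEW_FOV_DEG / 2.0) * px_per_deg
    left = max(0.0, center - half)
    right = min(float(FRAME_WIDTH), center + half)
    return int(left), int(right)
```

A real fisheye feed would also need dewarping before the crop looks rectilinear, but the panning itself never has to touch the robot, so it costs zero round-trip latency.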
I haven't used one of these myself, but from my experience with regular video calls (and a mention in the article), a killer feature will be some way for the robot operator to maintain eye contact without having to fake it, i.e. looking at the camera instead of the screen. Googling brings up some rumors about devices with cameras hidden behind the screen, especially an Apple patent from 2009... but even if the tech doesn't pan out, I bet you could fake it pretty well in software if you have multiple cameras on different sides of the screen.
I've worked for 2 smallish tech companies that had robots like these sitting in corners collecting dust because they invariably turned out to be not very useful.
It seems like a setup that gives remote users access to building-integrated CCTV feeds, microphones, and speakers for an intercom-like interface with a nice UI for navigating and directly connecting to a local user's devices (based on "where" they are in the space) would work much better than a "centralized" mobile machine. I would guess those whose culture already respects a presence of disembodied and remotely connected souls (transdimensional spirits, perhaps) would have an easy route to adopting this.
We are on the cusp of solving the peripheral vision piece, because 360 degree camera rigs are getting cheap and popular. As soon as they can stream in realtime, you could have low-latency VR as if your head was really on top of the robot.
There are already commercial VR camera rigs that can do translation within a sphere big enough for head motion. The camera makers want to be able to make content for use with things like Gear VR.
Once political asylum in the United States is granted, it's virtually impossible to revoke. As much as pessimism is warranted, there are significant substantive differences between Russia and the United States.
Oh, there are absolutely substantive differences, I wasn't claiming there was not.
However, I imagine a politically useful dissident might well be kept in an ambiguous status, if the perceived use as a bargaining chip later was high enough.
Really I was just objecting to the idea that the US is somehow above such shenanigans because of some sort of philosophically deep commitment to the "rule of law". Neither proposition holds much weight on evidence of actual behavior. Which isn't at all to say I believe the US is equally likely to engage in such behavior as Russia, mind you.
I never said that celebrity doesn't have a cost (a high one in this case), but there is an upside to his actions. Acknowledging so doesn't commit you to supporting one side or the other.
I know this question might sound silly, but can the FBI "arrest"/confiscate/disable the robot itself because it's aiding a fugitive? And the person/company that owns the robot, can they be charged with aiding a fugitive?
IANAL but would love to hear what someone more informed has to say about this. The law is constantly catching up to technology...
Cannot "arrest" something that's not a natural person, but yes, they could absolutely seize a bot under a number of civil forfeiture laws, or likely as evidence. The burden would be on the owner to sue to get it back.
I've always heard that with civil forfeiture they are "arresting the money". Isn't that arresting a non-person? Or have I just been reading simplifications?
Simplifications. "Arrest" has specific meaning in most jurisdictions. "Seizing" would be more appropriate; one can seize property as well as persons, and in the case of persons there can be a seizure without an arrest.
The point is what a designation is for; in the case of an arrest vs detention or seizure for natural persons, it's usually an elaborate shell game around when various search doctrines or other restraints on police behavior kick in. In the case of property, the only point is to grab your stuff; the circumstances surrounding it don't particularly matter because they can always decide later under one doctrine or another that they're entitled to keep it.
It definitely raises interesting questions about mind-body duality. If a telepresence robot is a physical manifestation of Snowden's self, then it would seem to be fair game for the FBI to arrest it. If it isn't, then we have to ask how a sufficiently-advanced version of the robot would really differ from one's physical body.
To make the question meaningful, I suppose you'd have to stipulate that the robot is somehow irreplaceable. Meaning it's the only one of its kind that's capable of representing itself as the body of Ed Snowden, and that it's the only means at his disposal for interacting with the outside world.
On the other hand, it's no stretch at all to say that the company that provided the robot is liable under US law for aiding and abetting him, especially if Snowden has to use their servers to communicate with it. (Well, OK, it's a stretch, but it's a stretch that the US judicial system has had no qualms about making in the past.) Our Federal courtrooms aren't often mistaken for philosophy classrooms.
IANAL either but I would naively expect the gimmickry to be irrelevant and for this to be treated no different than having a telephone call with Snowden.
Good point - and also like arresting Ultron - "Don't mind me, Officer. While you arrest me, I'll just be downloading myself out of this robot and into another robot across the city."
In the movie, Ultron wasn't always present in only one single robot at a time. There were numerous scenes where he was actively controlling / in the bodies of multiple robots. For example when he shows the twins what he is building, there are dozens of robotic bodies all being controlled by him simultaneously.
To the point about the movie focusing on one substantial robot body, there is an excellent explanation: extremely limited quantities of vibranium.
My understanding was that a large part of what lifted the city up was also vibranium. So I don't feel the vibranium was so scarce he could only afford himself one master unit. But his goal was to control the mind gem in Vision's body, which would have been quite unique, and was his end goal.
Snowden's making up to $1.25M/yr on speaking fees plus script consulting for Oliver Stone. Nice bump from gov't payscale. Though not enough to compensate for being under 24/7 FSB watch.
You are taking a speaking fee that is implied to be at the upper end, multiplying it by a count that includes unpaid speeches (of which there are "many"), and then implying that this is how much he is making.
Sure, you said up to. I also make up to $1.25M/yr on speaking fees, but it's not a bump up from any payscale since the actual number is 0.
That's fascinating! Can you share some resources for me to learn more about the subject? I haven't seen any quotes for his speaking fees or script consulting revenues.
Well in the article linked they state the following:
> He is scheduled to make more than 50 such appearances around the world this year, earning speaking fees that can reach more than $25,000 per appearance, though many speeches are pro bono.
"Earnings in a casino can reach more than $X per day".
How informative is such a statement? Is the $25,000 based on the maximum fee that any speaker has earned? Is it the average fee? Is it the highest fee that Snowden or someone representing him has said he has earned?
> In 2013, $10,000 was considered a lower limit for speakers brokered by speakers bureaus, $40,000 a regular fee for well-known authors, and famous politicians were reported to charge about $100,000 and more.
Does Snowden use a speakers bureau? I suspect that if you sell speeches to the highest bidder vs. speaking at specific conferences relevant to the subject you want to raise awareness of, you are going to earn different amounts of money. It's the same way that civil rights lawyers have different earning prospects than, say, a high-profile defense lawyer or a company lawyer.
Is the $1.25M/yr in any way anchored to Snowden, or is it just speculation? Does Daniel Ellsberg also earn $1.25M/yr through speaker fees? The Wikipedia article about him fails to mention his vast riches.
>He is scheduled to make more than 50 such appearances around the world this year, earning speaking fees that can reach more than $25,000 per appearance, though many speeches are pro bono.
Snowden movie:
> Snowden had a hand in making the film as well, consulting Stone and helping with the casting.
So you read the part where it said he does many speeches for free yet decided it worthwhile to create an estimate that you know doesn't align with reality and draw conclusions from it?
We have 3 Beams at 2 offices. Everyone assumed the first one would be just a Silicon Valley start-up novelty, but they are surprisingly useful, and a surprisingly personal way for someone out of the office to have a presence. They can drive right up to your desk and it's like having a conversation with a live person.
Somehow the telecom connection (when it does not go completely down) is better than a regular video call. The voice from the machine to the people in the office is clearer, and the Beam driver seems to pick things up with the microphone better (although I have never been on that end to experience it).
It's totally at the discretion of the Beam driver to get online, drive over to your desk, and start a conversation. It's much more like having someone in the office. It also has a very personal feel, like talking to a colleague next to you. You have to experience it to understand. The full-scale Beam is a much nicer experience; we also have the cheaper model, which also works well.
The word came from Czech, the first use being by Karel Capek (I checked some more sources, including the work of Sevan Nişanyan[1], an Armenian-Turkish language researcher, who would probably be unbiased in this matter).
Another language with the same word used with a closer meaning does not always mean there is a connection. Etymology usually discovers connections which aren't intuitive at all.
Yeah, actually this is how I remembered it from Webster's Third International many years ago, which I don't have anymore. Since then, I've become reliant on etymonline, which is handy, but somebody's hobby project[0]. A good reminder to trust the memory over the internet once in a while.
The word robot comes from the word "robota". In feudalism, that was a form of taxation paid with labor. Feudalism in the Austrian empire put many restrictions on people (it was impossible to move, you needed permission to marry, there was no freedom of religion, etc.), but it was not really slavery.
Source: I read the memoirs of the guy who invented this word in a theater play.
> He recently collaborated on a track with a French musician, delivering a spoken-word monologue on surveillance over an electronic beat, and recommended the title: “Exit.”
A little bit off-topic, I was quite surprised when I clicked the link, to find that the unnamed "French musician" is, in fact, synthesizer legend Jean-Michel Jarre. Why not mention his name? It kind of sounds like it's some random dude on youtube, but he's right up there with Kraftwerk and not even a little obscure :)
On another note, I think it's funny how many similarities this music video has with the intro-theme of TV-series "Person of Interest" :) The imagery and especially the vocal effects on Snowden's voice in the second half :)
Couldn't the government punish the people who are facilitating this? In their eyes this would be akin to carrying around a camera for a Russian spy, no?
Weird, I remember running into (an earlier version of) one of these devices at a club in Hermosa Beach, CA around 1997.
I'm guessing that one was manned locally by someone nearby. It's possible that device did not have a video feed back either, but that the local person had LOS on the robot.