A couple of years ago, I was attacked by a Kiwi bot near a UC campus. This is my story.
The bot and I were moving towards each other on a sidewalk, and when I came close it stopped, as they do when sensing an object in front of them. But there was an awkward moment as I tried to go around it and it repeatedly jerked forward an inch as its motor kicked on and off. Maybe I was walking around the very edge of its radius. In any case, my behavior must have triggered some pathfinding bug, because it turned and drove right into my legs, after which it stopped and sat stationary. Luckily they're small and move slowly so it wasn't a big deal, but that memory stuck with me. Articles about Tesla pathfinding issues always bring it back to the surface.
Kiwi bots aren't (weren't?) actually AI controlled. They had human drivers in South America who controlled them remotely. If one attacked you, it was either the human driver going aggro, or just a problem with the latency of the pipeline: camera -> cell network -> streamed to South America -> driver inputs command -> sent back to the US -> over the cell network -> back to the bot. And the cameras they had were pretty bad (the ordering app would show you the camera view when the bot was nearing its destination).
Those "exploited workers" probably made decent money relative to the cost of living in their location, and they got to do it from the comfort of a computer instead of hard labor in the sun, which is what someone in their same socioeconomic bracket would more likely be doing.
If it was "our" citizens getting paid near (US) minimum wage to sit at home and monitor a robot all day then yes, I'd still be all for it. Teenage me would have much preferred that to fast food, and even adult me would happily take it as a second or interim job if needed and unable to find better paying work for some reason. And I'm sure it'd be a great opportunity for the physically disabled or other people unable to leave home.
This is a win for everyone involved. A US company gets to outsource easy work, at a price below our minimum wage that it can afford, to a population that can live happily on those lower wages thanks to their nation's cost of living.
Cost of living is irrelevant when the cost of certain goods like iPhones, computers, Internet subscriptions and other things is fundamentally determined by strong markets like USA or EU.
Or are you going to tell me that Indians don't deserve to use iPhones, watch Netflix, or learn new skills through online programmes? Because that would be pretty racist, and I don't suppose you consider yourself racist, do you?
Further, the fact some countries earn astronomically high wages means they can, when they retire, take everything with them, into a cheap country like Egypt, India, or Greece, and live like emperors. Is that fair? Especially when hard-working people in India can barely afford vacation in their own country.
Ah, the old "If you disagree with me then you're a racist". Please don't engage if you aren't going to engage in good faith, we're not on Reddit. I'm happy to be called wrong, but not if you're going to do so like that.
People deserve what they can afford. Are you suggesting we subsidize the cost of every luxury good so that everyone in the world regardless of income has access to said luxury good? It's a great notion, but the logistics are fundamentally impossible.
I can't afford a Ferrari. Do I not deserve a Ferrari? I can't afford a Rolex. Do I not deserve a Rolex? The answer is no, I don't deserve either of those things. I don't fundamentally deserve anything except for my own life. If I want anything else it's up to me to find a way to obtain it.
I will, I liked "Nickel and Dimed" so I'll give this a read. One side note: in looking up this book to purchase on Amazon, the hardcover is 2 bucks cheaper than the softcover. I've seen this kind of thing quite a bit lately. What gives?
If you liked Barbara Ehrenreich's "Nickel and Dimed,"[0] you might like a few of her unrelated books as well. I highly recommend her "Dancing in the Streets,"[1] a 5,000-year history of the deadening of European culture (at Europeans' own hands, no less) that pairs nicely with Graeber's "Debt: The First 5,000 Years,"[2] a book of similar pacing, as well as a much shorter book on the gendered professionalization/demolition of the medical practice in the medieval era titled "Witches, Midwives, and Nurses," or "W.M.N."[3]
Her "Bait and Switch,"[4] however, about the white-collar unemployment industry, I found dull and unenlightening.
I've noticed this too. Hypothesis: Nobody wants books made of atoms any more, and the few that are selling are paperbacks. Therefore hardbacks are cheaper even though they're rarer and more expensive to produce.
I believe (as of a few years ago) that Kiwi bots are semi-autonomous, meaning they do have someone watching the camera feed but the bots themselves can move in a given direction and will stop if an object is detected.
I've had this happen with actual humans. A human is coming toward me on a path. I zig. They zig. I zag. They zag. We walk into each other. It must be some kind of human pathfinding bug. :-)
I've never actually walked into people; usually after 2 or 3 of those you look at each other and smile, then one person (or both) steps to the side, and then it's "you go, no you, ok."
It definitely should smile before/while driving into your legs, as well as when standing and waiting for you to walk around. It could also mark you with a laser pointer to indicate that it does sense you. Communication is the key.
The facial communication is only necessary because we're negotiating as two people who want to go where the other one is. When it comes to bots they can be forever deferential and always yield to humans.
I understand, yeah - when it comes to a robot and a human, though, the human doesn't need to yield. It'll probably take some time to train people into it, but humans should always have the right of way.
You avoid this by using visual cues. E.g. strongly looking into the direction that you want to go. I suppose that most people learn this at an early age. And these robots should too.
Always pass on the right side - is this not a rule in your country? I'm asking without knowing where I learned it, but it definitely is a social norm to take the right side of the sidewalk anytime this may happen. Everyone just does this and it works out great.
Oh how I wish everybody understood this. Even in crowded cities in the US you get a lot of people who do not understand this. A minority to be sure, but a sizable one - I'd estimate 5-10%, probably closer to 10%, though sometimes people who aren't cognizant of this are accidentally correct in their pathing choice. Unfortunately this means you sometimes need to make a split-second judgment that this person has no idea what they're doing, and just figure out how to get around them regardless of convention.
It is the left side in my country. Which creates a problem when people from right-sided countries visit my city.
I noticed this in China, a densely populated mostly right-sided country. Whenever a British engineering firm would install escalators they would set the direction opposite to the flow of human traffic. You would walk up to it on the path on the right side and be forced to cross the path of oncoming people to use the escalator on the left before having to cross over again once at the top.
> when I came close it stopped, as they do when sensing an object in front of them
The security robots at one of the big skyscrapers down the street from me do not stop for people. My wife got knocked into by one when we were standing in the plaza looking up something on her phone. (They're not little delivery robots. They're about five feet tall.)
Good thing she was confused by what happened, because she's also the type who would have knocked the robot over and asked me to shove it into traffic if she had her wits about her.
Probably. This seems like a public space, so almost certainly. However, if this was private space, sometimes the rules are different. Once in a while I have to go into our factory (not even once a year, but sometimes), and they always make it clear that forklifts have the right of way, so watch out. (Forklifts have poor visibility, so by giving them the right of way they ensure nobody expects them to stop - in practice a forklift driver will stop if they see you, but this way they are not expected to see something that is impossible to see.)
I don't think this shields the company from liability. Instead it provides some ammunition to use in the event of a lawsuit.
Things are very different between employees and the general public. I imagine a jury would find that a lady-busting security robot is negligent by default. Whereas, a fork lift driver would be assumed to be doing his job and that situational negligence would need to be proven.
Note that my company does a lot of mandatory training before you are allowed to enter the manufacturing areas. Forklift safety is only a part of it (though a large part, since most of the rest is common sense - common sense says you wouldn't do this, but forklifts don't follow common-sense rules).
I agree if this is a public place a jury would and should find the robots at fault. (unless the robots are running some sort of arrest her routine, or knocking her over because a bad fall is still better than some other danger)
Seriously. What if this thing bowls into a child and seriously injures them? Or a dog that is confused on what the hell is going on? I'm not even against them for mobile surveillance but they need to be safe.
And if these things are really 400 pounds with a low center of gravity, as people are linking below... well, then I guess you will just have to enlist the help of one other friend to knock it over to prevent it from hurting anyone else.
"rocket shaped" is sort of a generous way to describe it.
My first exposure to security robots was actually a company marketing a repurposed remote-controlled lawnmower platform. It was nearly the size of a Smartcar but low to the ground and designed to cross difficult terrain. Even so, a similarly designed lawnmower tumbled down a hill and killed its operator around the same time frame (I don't think from the same company). That all makes the KnightScope design rather surprising; it seems like these things falling over and injuring people is an inevitable liability. But my outside perspective, at least, is that the companies using these things don't seem to have much of a head for avoiding liability issues, as they're often fielded in ways that end up in negative press coverage... not even really due to any kind of fault per se, but just the user's lack of consideration of the optics of deploying a large, er, rocket-shaped robot to programmatically harass homeless people.
Some might remember the decade-ago jokes about "do not enter elevator with robot" signs and other artifacts of robots coexisting with humans. It sort of feels like the situation hasn't really advanced that much, we're just getting used to it and actively making use of the present inability of robots to coexist in polite society.
Pure speculation on my part, but having it around 5 feet tall is presumably for the optical cameras to have a better view of the majority of adult human faces. If you're talking a remote control car (at least like the one I had as a kid), any camera is either going to get great photos of people's ankles & shins, up their noses if they're close, or lose detail because they'll have to be too far away to get a decent angle to look at a face.
Run into me with a robot, and it is likely to get knocked over and very heavily damaged, if not pushed out into the street, or off a cliff, or whatever I can find nearby.
And I’m a pretty beefy guy. Run into my wife with a robot, and I will make sure that you really wish you had just run into me instead.
I don't know why they don't parametrize momentum with certainty. In any confusing situation, go into ultra slow environment scanning and when confidence increases, allow for a bit more.. rinse / repeat.
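A minimal sketch of that idea, assuming the planner's perception confidence is already available as a number in [0, 1]. The function name, threshold, and speed cap are all invented for illustration, not from any real delivery-robot API:

```python
# Hypothetical sketch: cap a robot's speed by perception confidence.
# speed_limit, SCAN_THRESHOLD, and the 1.5 m/s cap are made-up names.

def speed_limit(confidence: float, max_speed: float = 1.5) -> float:
    """Map detection confidence in [0, 1] to an allowed speed in m/s."""
    SCAN_THRESHOLD = 0.4  # below this: stop and do slow environment scanning
    if confidence < SCAN_THRESHOLD:
        return 0.0
    # Ramp linearly from 0 at the threshold to max_speed at full confidence,
    # so momentum is always parametrized by certainty.
    return max_speed * (confidence - SCAN_THRESHOLD) / (1.0 - SCAN_THRESHOLD)
```

When a situation gets confusing, confidence drops, the cap goes to zero, and the bot scans until confidence recovers; the jerky inch-forward behavior from the story is what you get when the decision is a binary stop/go instead of a ramp like this.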
That's how you get a robot half a foot into a choke point, immediately stuck for half an hour surrounded by walls and confused people, until a developer on an emergency Slack call with facility managers and the company CTO verifies and communicates a likely-safe state of the robot and surrounding equipment to field operators, and a go is given to pull the thing out of the elevator.
All I can say without breaking agreements is that these are products, not ideal models of conceptual engineering. They're not created by people who like the world and want it to be a better place. They're created by people with lots of money who want a lot more. They've found an avenue for this by persuading other wealthy, greedy people to give them a lot of money and promising they can give them more back. They'll do this by persuading everyday people to not do things like produce, prepare, and transfer food themselves and instead pay money for these robots to do it.
These robots are minimum viable products toward moving capital around, not meeting user requirements or demonstrating great ideas. Hurting a few people in the process is part of the equation. Getting anyone to care about $cool_algorithm is not part of the equation. Getting people addicted to the convenience is part of the equation. Getting things to market as blindingly fast as possible so the capital moves before feedback from the field arrives is paramount.
That's an unnecessarily cynical generalization. Sure, maybe the leaders of the companies creating these things are profit-motivated, but is that really true of the individual engineers and designers who created it?
Both of those can be true at the same time (they're not mutually exclusive), while OP's assertions may still be true for certain individuals, if I'm thinking logically.
We are talking about what motivates humans as human behavior, which tends to be varied, nuanced, and hard to reduce to mutually exclusive categories like being only profit driven or only driven by intellectual curiosity.
I think you can be both motivated by money and intellectual curiosity. If you are an engineer turned founder, you can be both?
No, that is a very accurate description. The engineers willing to work on those things, suppressing deeper thoughts for the money and the kick of new tech, are part of the equation and part of the problem.
A manager I once had kept a postcard in his office: "The engineer is the camel on whose back the merchant rides to his success."
You are a lever and even provide the excuse for being one yourself.
> They're not created by people who like the world and want it to be a better place. They're created by people with lots of money who want a lot more. ... They'll do this by persuading everyday people to not do things like produce, prepare, and transfer food themselves and instead pay money for these robots to do it.
This is an extremely negative outlook. I'm a robotics and controls engineer for a small (25-employee) integrator, our company mission is to make lives and products better, and I really think that everyone believes in that. Our meager budgets and slim, fluctuating profit margins are evidence that it's not all about "lots of money"...there are certainly those making a killing on it but it's not everyone. And maybe Upton Sinclair was correct, it is difficult to get a man to understand something when his salary depends upon his not understanding it, but I've spent a lot of time thinking about this (and not just in response to news articles, I took ethics and philosophy courses to pad out my gen eds on my way to my engineering degree, I've read books on the topic, and I've talked to lots of other engineers, my customers, the operators who have been transitioned from old equipment to run my new automated equpment...). But I stand by my argument that humans are no good replacement for robots, and robots are no good replacement for humans. The tech needs to be employed judiciously, but it can be used for good.
I've installed equipment in dozens of places where life was made better: There were less than 90 fingers among a lunch table at the foundry with 10 guys at it (4 + 3 + 2 + 1 + 1 + 1 lost digits) when I installed a robotic grinding cell that removed parting lines from valve castings, now they can ergonomically load infeed shuttles and have time to quality check the parts from behind a safety fence; no more fingers have been lost. Two older women (One with arthritis!) at a plastics company no longer have to keep up with placing a tiny foam spacer on a dial table every 2.5 seconds for 8 hours a day, 6 days a week, with a half-hour lunch and 2 15-mintue breaks...that's nearly torture, and it was a really challenging material handling problem, but the robot does it well. The operators now pour in bags of foam spacers, do offline quality checks more frequently (catching upstream problems quicker, leading to less waste), and basically pour bins of parts into the machine and get one assembly out every 1.75 seconds now. Two weeks ago, I was training a 64 year old seamstress (she retires in 8 months and 24 days) on the operation of an automated sewing machine. She's been pushing fabric through a sewing machine, keeping it between 3/8" and 5/8" on the seam allowance, since she was 16 years old. Now she lays out fabric on the infeed table - she's pleased that she finally has time that doesn't impact production rates to make sure the patterns match precisely - and she inspects the stitching on the product that comes out the outfeed chute to adjust thread tensions and strokes on the sewing machine. Literally Tuesday of this week, I was at a wood processing plant installing a new automated saw, when I heard that a 19-year-old greenhorn lost his right index finger between the first and second knuckles on an old manual saw. 
I was there installing the fully automated, fully guarded replacement equipment; you can drop a pallet of roughsawn lumber on the infeed material handler and correctly sized boards come out the other side, with no one needing to be closer than 20 feet from the saw blade. I wasn't fast enough.
In all these cases, no one got fired, people just transitioned from mindless, repetitive grunt work to real human work, while capacity and efficiency increased. And not only are all these operators enjoying their jobs more, your gas is cheaper, new cars are cheaper and more reliable, new furniture is cheaper and the cushions are more consistently sewn, and solid-wood cabinet doors are produced more safely, accurately, and quickly. It's not all about capital.
kudos to you! I'm confident relieving humans of tedious work is more valuable to society than bringing college kids food.
My comment is related to my experience in delivery robotics and this is an alt. Not everyone is bad. I, too, believe my current job to be more ethical than my previous experience. Of course, I didn't know going into my prior experience what it was really about.
I come from the country where such machinery doesn't work - USSR/Russia - and as a result there is no innovation and the country is well behind. If you discover other ways of having successful innovation the humanity will probably put up a large statue of you and your name will be on the plaque of the next Voyager.
No? There are numerous clips where Telsas in "full self driving" mode pull the equivalent maneuver of a teenager going "OH SHIT I WANT TO GO THERE" and veering very violently.
The phantom braking problem is likely just one of the many symptoms of Musk's insistence on relying on optical systems instead of more expensive sensors.
Expense was part of the equation initially, however, through economies of scale, we eventually would have been able to reach a feasible price point. Cost has nothing to do with why Tesla is pursuing an optical-only system.
To get rid of the dependency on the radar sensor for autopilot, we generated over 10 billion labels across two and a half million clips. To do this we had to scale our offline neural networks and our simulation engine across 1000s of GPUs, and just a little bit shy of 20,000 CPU cores. We also included over 2000 actual autopilot full self driving computers in the loop with our simulation engine. And that's the smallest compute cluster.
So what's the point then? You said it's not expenses and then you explain how you think it caused you extra trouble/work/development effort. But what's the reason?
> The phantom braking problem is likely just one of the many symptoms of Musk's insistence on relying on optical systems instead of more expensive sensors.
Based on what? How would 'expensive' sensors help?
We know that in some situations expensive sensors can get data that optical cannot. What we don't know is if any of the above is enough extra data.
What we do know is there are times when humans are bad drivers, and other times when humans continue when they shouldn't relying on luck. (Ie driving in snow storms with low visibility)
The weird thing is that the first time I realized this was actually happening was watching "Ridiculousness" on MTV. Chanel (one of the hosts) mentioned that she had ordered food delivered and couldn't understand why the app just showed it waiting outside. So she goes out to see why the guy won't come up and ring the doorbell only to realize it was a food delivery robot waiting for her.
I live out in the middle of nowhere. Wonder what other stuff I don't know about happening in cities!
Even if the Laws were real (they're not) they won't work if all you have to do is add some adversarial interference to some neural thing to make the robot think that the human is not a human, or, even better, another robot that will harm a human. Then it's a moral imperative under 3LoR to destroy that "robot".
This trick also works on humans: you can often circumvent their "protect humans" programming by simply messing with their classification system to label a human as "terrorist", "infidel", or even "unemployed".