
Reminds me of http://www.washingtonpost.com/wp-dyn/content/article/2007/05...

  the autonomous robot, 5 feet long and modeled on a stick-
  insect, strutted out for a live-fire test and worked 
  beautifully, he says. Every time it found a mine, blew it 
  up and lost a limb, it picked itself up and readjusted to 
  move forward on its remaining legs, continuing to clear a 
  path through the minefield.

  Finally it was down to one leg. Still, it pulled itself 
  forward. Tilden was ecstatic. The machine was working 
  splendidly.

  The human in command of the exercise, however -- an Army 
  colonel -- blew a fuse. 
  [...]
  This test, he charged, was inhumane.



And your story reminds me of a story my dad used to tell about one deer season when they spotted a huge buck.

Someone shot, and the bullet took both of the deer's front legs right off, but amazingly it hopped into the woods using only its rear legs.

They tracked it for a while, and spotted it again.

Then another shot, and one of the rear legs flew off.

It fell over, but still it kept pushing its way along with its remaining leg.

They kept tracking it.

Finally, they caught sight of it again, and shot its last leg right off.

Unfortunately, the deer still got away.

They couldn't track it anymore, and lost its trail.


This is the most terrible and cruel story I've read all day :(


Don't worry, the deer is still alive and doing well.

I just saw him coming down a hill the other day.

I call him Snowball.


Charles Stross, commenting on Saturn's Children, http://www.antipope.org/charlie/blog-static/2013/07/crib-she...

"A society that runs on robot slaves who are, nevertheless, intelligent by virtue of having a human neural connectome for a brain, is a slave society: deeply unhealthy if not totally diseased."


I mean, absolutely, that would be morally and, hopefully, legally wrong. Moreover, there are many ways to evaluate "intelligence", and it's not even clear that such criteria are the correct ways to judge whether a creature is a moral patient, a moral actor, or neither (for lack of better terms).

All that said, I think it's fairly clear that Spot is just a dumb machine. Some of its descendants might be more, but we haven't gotten close to the "robot slave" point.


Scale this[0] up to 100 billion simulated neurons (feasible on a DoD budget), and it will probably operate far beyond what a single human, or groups of humans, can do. Build multiple of them, and the descendants can just copy the models built at t=0 and be as intelligent as one that spent the time to build those models; it takes us ~20 years to do the same for humans (maybe less over time, but not on the order of what can be done with something like this).

Some relevant quotes from Bubblegum Crisis Tokyo 2040:

"They exist as substitutes for the lower castes, the indentured labour, for all manners in which humans formerly oppressed their own. Slaves."

"Why do I exist? Was my purpose to replace humans, whose inability to coexist in peace is their evolutionary flaw? Or was my destiny to serve as the progenitor of a subservient race? I do not know. I did not ask to be born."

"A being is a being. A machine is a machine. Most humans would believe these two states to be exclusive, separate orders of existence. And yet, they are not. The key is neoteny, the retention of characteristics from an earlier stage of development. A human fetus follows the path paved by its ancestors, evolving in the womb from unicellular, to amphibian, to mammal, to man. There were those who believed that humanity was the end of the progression, the end product of natural evolution. They were wrong."

[0] http://www.dailymail.co.uk/sciencetech/article-2851663/Are-b...


Leaving aside robots for a moment, look at what happens to human labor markets after trade agreements with countries that have ... different labor standards.

It's our human choice whether we race to the bottom (cost reduction) or race to the top (agency). If we're going to play god, should we seek to build slaves or agents with some degree of freedom? Or require devices to have realtime human-agency guidance?


I think you're conflating several issues.

I don't think it's a choice between "race to the bottom" or "race to the top". Someone needs to do dangerous, nasty, repetitive jobs if we want to maintain a standard of living that many people have become used to. Creatures with the sort of agency you're describing are, in my opinion, unsuited to those tasks, for several reasons, including moral and economic reasons. The robots we are increasingly using to do those jobs are much better suited, and there isn't (again, in my opinion) a moral objection that solely applies to such machines.

That said, our policies are woefully out of date in the face of such increasing automation. Our current system conflates employment with even a meager standard of living. We are going to need to revise our policies, both in the more developed nations and in those that have, as you so tastefully put it, "different labor standards". I don't know how to do this. There are many proposals; a popular one is the basic income guarantee. I'm not educated or intelligent enough to really understand the implications of such a policy, but I can agree that the just and humane treatment of all creatures with the agency you're talking about is among the best guiding principles that I can think of.

The two issues raised above (whether it is moral to use a machine for automation, and the fair treatment of creatures with agency) are separate from the point related to the development of human-manufactured creatures with agency. We don't know how to do it yet, but we are slowly working towards it. Assuming that we eventually do figure it out, that will be a victory as long as we treat our new children like we would treat our Homo sapiens children. The research and development of such creatures with agency and those for industrial automation are not mutually exclusive, and they serve different purposes.

To try to put it a different way: something is going to need to harvest fruit. It's a shit job. I would rather have Spot do it than a person of any variety, human or otherwise.


Thanks for your thoughtful response.

Should we allow some of our creations to have access to their design docs and source code? How about private communication with each other?

There's also a property/control rights question: should the manufacturer and/or regulator of the autonomous device always have a remote override, or should the purchaser/owner of the device have exclusive control over software policy? Analogies can be made with DRM and autos.


Nevertheless, I expect to see BigDog-like robots walking among us in the near future (10 years?) as police, firefighters, and other services.

When things are this cost-effective (no one dying in action), they are adopted very fast.



