



What was it that initially made you think they are morons? Do you believe that AI should not be held to ethical standards?


It is still quite possible they are morons, it's just less certain.

Where did you read me saying that? I think ethical standards for AI should come from reputable CS PhDs who understand the technology they're pontificating about, working in conjunction with lawyers and legislators and taking public sentiment into account, which hopefully will eventually be the case when this becomes more of a mainstream issue. The authors do not appear to have any of that background. I think the AI historian/philosopher/psychologist field belongs on the fringe for now, and likely for a very long time.


I did not read you saying that, I was asking you a question.

> ethical standards for AI should come from reputable CS PhDs who understand the technology they're pontificating about

Should AI be held to different ethical standards than humans for the same actions? Given that we are beginning to see AI (proto-AI?) directly manipulate and interact with our physical world (autonomous vehicles, smart homes, etc.) I think it's a perfect time to study AI ethics in depth.


Quote the entire sentence if you want to quote me there. I do not think "trained ethics experts" should fall into any equation whatsoever. There are enough of them in the middle east and deep south. We don't need them infecting the intelligent world any more than they do already. The conversation should not be driven by an intellectual movement that is equivalent to a bunch of cavemen poking a coconut with sticks.

As the post you're responding to stated, this will enter the mainstream in a few years with a lot of unqualified people voicing their opinions. Hopefully the qualified people end up drafting the laws (note: not professional ethicists).


Now you've finally made your position and opposition clear: you believe "ethics" is interchangeable with "religion." I would not want to work alongside an autonomous machine designed by someone who dismisses all ethics as nothing more than superstitious mythology. I think your view is a great example of why AI ethics should be examined by outsiders.


I have in no way equated ethics with religion or dismissed the purpose of ethics in AI; I merely questioned who will be most fit to pass judgment while claiming ethics as a justification. However, it's now clear to me from that last post that you're an idiot, so I will not engage further.

Edit: edited moron to say idiot


> I do not think "trained ethics experts" should fall into any equation whatsoever. There are enough of them in the middle east and deep south. We don't need them infecting the intelligent world any more than they do already. The conversation should not be driven by an intellectual movement that is equivalent to a bunch of cavemen poking a coconut with sticks.

I'm rolling my eyes so hard. Whatever did you mean by this?

Don't give yourself too much credit, you've made no such point. You've merely called the authors morons and equated ethics and religion with cavemen.


It does seem to presuppose agency on the part of the machines being studied. We aren't worried about making sure all of our other machines are ethical. Why not?

That is, we should maintain that the people involved are held to ethical standards. But I don't see that being up for debate. Is it?


> We aren't worried about making sure all of our other machines are ethical. Why not?

As more machines are autonomously making decisions and exhibiting emergent behavior while also being able to directly interact with space shared with humans, maybe we should be worried about it.


The ethics are in building the machines, though. Consider: we don't worry about the ethics of bio-weapons from the perspective of the weapon. Rather, we say it is unethical to build said weapons. (Right?)

This doesn't change just because we could build a nanotech (or other) weapon that selectively kills people.


I disagree, particularly if the machines are making their own decisions and exhibiting emergent behavior.

I don't think only weapons should be considered. I think there will be many unintended outcomes from AI that was never meant to hurt anyone, but has nonetheless.


But this is somewhat nonsensical. Is a slaughterhouse ethical because no kids are allowed to walk into it? Is it unethical because it would kill a person in the wrong place?

It would be unethical to build a slaughterhouse that traveled around and had the potential to enter a populated area. It seems odd to call the slaughterhouse itself unethical when it was the building of the machine that is the problem.

If you are giving agency to the machine, then you want to teach the machine ethics. At some point, I can see that happening for "intelligence." Not for "intelligent machines," though.


For what it is worth, I had a similar gut level reaction. See https://news.ycombinator.com/item?id=16738190 for my explanation of why I think that this is a bad idea.


I think you're overlooking the needs of a third stakeholder in this debate: the people who will have to share space with autonomous and/or artificially intelligent machines but are neither experts in ethics nor creators of AI. As a bystander to this debate, I'd like to know that the debate is happening now, and not after I've been killed by an autonomous machine that may or may not have been acting ethically.



