
I've been attending a lot of AI/Data Science conferences lately, and it's incredible to me how this is not a major part of any conversation about AI and automation.

At a talk I recently attended, a data scientist from Amazon gloated about how many jobs he could eliminate.

Ironically, the only speaker who brought it up as a major social problem we'll have to tackle is someone from Uber. His solution was less than satisfactory, but at least he recognized the issue.

I don't want to pretend we live in a world of algorithms without consequence.




As a "data scientist" myself and more importantly as a human being, I witness the same behaviour almost every day and it baffles me.

Yes, what we do can have consequences. We need to think about that!

I have friends working for weapons manufacturers. They don't gloat about building stuff that can blow children up!!! Why the hell should we be absolved of any moral responsibility for our acts?

I am not equating eliminating jobs with killing children, but I would prefer it if our industry did not abstain from thinking about the consequences of its trade.

I was once offered a job at a weapons manufacturer for a very nice salary. I chose not to work for them. But I thought about it thoroughly, and I don't blame my friends for making a different choice. I politically object to that choice, but it does not mean I am some sort of white knight...and it does not mean that sometime in the future, if presented with another opportunity, I wouldn't make a different choice...


It's the gun-manufacturer/shooter dissociation.

Anecdotally, I had a neighbor who programmed the guidance systems for bombs, and the only reason I remember him is because immediately after introducing himself as such, he followed up with, "But I'm not the one who's dropping them. By making them smarter I can save lives".

I think that no matter how technically intelligent a field's operators are, they are still subject to the same dissociations as everyone else.


You are absolutely right and I can totally relate to both your experience and your neighbor's.

I don't program guidance systems for bombs, but I program marketing tools which are, in essence, tricking consumers into buying stuff. I dissociate myself from that issue by considering that any commercial relationship is based on tricking the other party into buying more stuff, but I would totally understand if someone objected that my software is not morally acceptable to them (and I would politely suggest that they go bother someone else :p ).

Further down the line, we could end up discussing whether living in a society based on capitalism is "right" or "wrong". I would totally understand if people considered that "not an HN-worthy submission", but I think that inside a thread on the moral, philosophical and social consequences of AI, it could come up as a subject...and be down-voted if need be, not flagged as off-topic.


I think we're still in the early phase of A.I., where things seem more theoretical and thus ethics is not included in the discussion. However, as we near the time when policies will have large-scale implications for our society, those consequences will be weighed. This is why I do not think A.I. will be a revolution but rather a gradual process. Already, the automation of cars is subject to government regulation.


We have been working on AI in one form or another for 50 years or more; I think it is high time for a serious debate about this.

Lisp dates from 1958, and some would argue that rule-based programming is AI. Eliza is also more than 50 years old.

The ethics of AI have been extensively discussed for a very long time.

In essence, the debate taking place around AI is an heir of the 19th-century debate on automated looms. Karel Čapek's play R.U.R. was written in 1920, and it was already an ethical discussion of "autonomous machines"...

My first introduction to AI and its consequences and dilemmas came from Isaac Asimov's Foundation cycle, which dates back to the 1950s.

AFAIK, the 3 Laws of Robotics invented by Asimov are actually referenced by philosophers & AI practitioners.

(I added and then removed references to the Golem, but...it could be argued as relevant to this discussion)

I am quite vehement in this discussion precisely because I am currently debating whether or not I should release a new piece of AI software I have designed. From a technical standpoint, I am quite proud of it; it is a nice piece of engineering. From a political standpoint, I feel that tool could be used for goals I am not sure I agree with...


True; that said, it's much less theoretical to the people doing it than to the average blue-collar workers whose lives they're disrupting.

That's why I think the industry has a moral and practical responsibility to push society to properly prepare for the results. Because we understand the implications better than anyone.



