Hacker News
Machine Behavior Needs to Be an Academic Discipline (nautil.us)
96 points by dnetesn on April 2, 2018 | 63 comments



When you create a group of experts whose expertise lies in telling other people how what they're doing is wrong, you create problems.

The experts need to justify their position by coming up with more and more reasons that people need to listen to them. People who are actually doing the work quickly become convinced that said experts are ignorant asses who don't know what they are doing. This automatically sets up grounds for conflict.

I agree that the ethics of AI is important. However I expect the best work to come from people who are experts in AI first, and only secondarily in ethics. And I expect that the self-appointed experts in ethics won't recognize this. (Particularly not when faced with anything that makes their position seem less important.)


Your proposal has the exact opposite problem - the AI experts are incentivized not to make too many waves regarding the dangers, because that would endanger the field (and the money invested in it).

Thus we get quotes from them like "overpopulation on Mars"


Re-read it. I did not say that AI expertise by itself was enough. And I did not say that nothing should happen.

I said that people whose expertise is in being experts on what others should do are going to not do a good job.

Let's take a concrete recent example.

http://time.com/5209144/google-search-engine-algorithm-bias-... decries how Google search results are racist and are biased against black girls. It is written by someone whose background is clearly not machine learning, but who qualifies as an expert in the ethics of machine learning. She has a PhD, tenure at a well-regarded university, and a book on the subject.

While her article complains about the results of machine learning algorithms, and how horrible they are, she says nothing that shows an understanding of how they actually work, or why they produce the results that they do. Her diagnosis is that Google is a racist and sexist company. Her proposed solution is that Silicon Valley companies should be forced to hire black people, and particularly black women, for the express purpose of holding companies accountable for their racist environments.

The truth is that Google's internal environment is neither racist nor sexist - quite the opposite, in fact. The actual problem is in the data available to Google about what people click on. How you can modify the algorithm to be slanted towards socially acceptable results is a tricky problem that requires a CS background. How you can modify the incentives so that companies are motivated to produce good results is an even trickier problem.

Yet if she got to create the regulations, what would happen? We would literally get a requirement to hire politically correct thought police!

How well do you think that would work?


Can you point to where the author recommended that Google and co. be forced to hire black women (or any identity group for that matter)?

What she did say is that it's not surprising that Google search produces biased results, given that minorities are underrepresented and topics like ethics are barely discussed in engineering schools.

It sounds to me like she is attempting to raise awareness of a systemic issue in the tech industry, which requires no qualifications other than knowledge of the issue.


> This is no surprise when black women are not employed in any significant numbers at Google. Not only are African Americans underemployed at Google, Facebook, Snapchat and other popular technology companies as computer programmers, but also jobs that could employ the expertise of people who understand the ramifications of racist and sexist stereotyping and misrepresentation and that require undergraduate and advanced degrees in ethnic, Black/African American, women and gender, American Indian, or Asian American studies do not exist.

> We need people designing technologies for society to have training and an education on the histories of marginalized people, at a minimum, and we need them working alongside people with rigorous training and preparation from the social sciences and humanities. To design technology for people, without a detailed and rigorous study of people and communities, makes for the many kinds of egregious tech designs we see that come at the expense of people of color and women.

So perhaps not a direct call to hire minorities, but a direct call to hire people versed in identity politics and various non-white studies.


Right.

- The company is bad because it doesn't hire enough black women.

- The company needs to hire people versed in identity politics to critique what they are doing.

- Programmers should be required to take the kind of course that she teaches.

- Programmers should be only allowed to work alongside the people versed in identity politics who can tell them what they are doing wrong.

In short, this is a power grab. "They done wrong, so people like me need to be given authority to make sure that they don't continue doing it."

That may seem like an extreme example, but it is along a natural progression for people whose profession is figuring out what should be complained about. Everyone wants to believe that what they do is important. That they should be given more power and authority. What that means in practice varies by profession. But that is always the tendency.


This doesn't make any sense. Why are you lumping AI experts into a group?

I'm trying to picture the government going around to all highly qualified AI PhDs, asking them if they would like to join a prominent ethics panel to guide and aid in AI ethics regulation in industry. So, you're claiming they're going to decline, or accept and then sabotage it, on the assumption that (a) they're all members of a giant AI consortium and (b) any ethics regulation is going to destroy their industry?

As bad an example as this is - look at other industries. Banking officials in government who set banking policies come from the finance industry. While this certainly has its problems, and how they get there is open to question, they actually understand the industry they're regulating.


I didn't say we should forbid AI PhDs from working on ethics. I'm just saying that an AI PhD shouldn't be a requirement.

We already have an ethics discipline. This is an ethics problem.

Do you also propose that, for example, self-driving car accidents should be investigated by AI PhDs who worked on self-driving car algorithms, and that they should be the ones proposing regulations? Are you saying that a regular "car accident specialist" is ill-equipped to handle these, and that his opinions aren't as valuable because he's an expert in "human car accidents", not "AI car accidents"?


Frankly, I think it should be. Extremely bright and educated people who have worked with the technology as executives in professional or educational/research settings.

There is a legal term for what you are describing - it is called "conflict of interest". No, I do not think professionals should be involved in making decisions in matters in which they have a conflict of interest. Not all AI researchers are stakeholders in Uber, Waymo, or Tesla, hence you would select those people to investigate, as you would analogously do in any other matter of human affairs.

In fact, I'll go a step further and say that an "AI specialist" should be working alongside a "car accident specialist" and "lawyers". I am arguing that this is a better combination than "ethics specialist", "car accident specialist", and "lawyers" to investigate the accident.


> Not all AI researchers are stakeholders in Uber, Waymo, or Tesla, hence you would select those people to investigate

Consider whether this would work in finance: asking JP Morgan people to investigate the Barclays Libor scandal. Smart people understand that a scandal against one of their peers will taint them as well.

Do you hear Google/Amazon/.... or the other big private-data hoarders making noise against Facebook today? The one that did speak up, Apple, doesn't hold or rely on private data as much.

I would say that a serious ethics panel would have opposed the way Facebook handles private data, in a way that the engineers working on it didn't. Because engineers didn't study the ethical/political problems of the past, and they don't understand how being lax with this can cause harm.

BTW, at all universities, doing studies on humans/animals requires first getting approval from an ethics committee. A bit of googling shows that these committees include people from all faculties - engineering, humanities, law, ... So universities, which grant those AI PhDs, consider it valuable to have a diverse ethics panel, not one composed solely of engineering professors, even when it meets to discuss the ethics of an engineering experiment.


Actually, that's literally what happens in finance, as I stated earlier. People leave JP Morgan or Goldman for a comfy position at the SEC and maintain their close friendships. Edit: and it does make sense, as unfortunate as that is.

Further, comparing the reactions of large companies with divergent interests to scandals is only tangentially relevant; it does not imply anything meaningful about the bias of hiring autonomous individuals from divergent backgrounds in AI; they do not all work on autonomous cars or <sell your data> at Big N. Finance has different social conventions as well.

"engineers didn't study ethical/political problems of the past, and they don't understand how being lax with this can cause harm" - I would wager that the average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major" being proposed. But okay, let's set up an AI McCarthy panel with "experts", we'll get bored of Trump scandals on TV eventually.


> Average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major"

This sounds like the joke about the plumber who comes in to fix a physicist's pipes, and the physicist says, "What's so hard here? It's just basic physics."


Hopefully the AI Ethics major is the physicist in that analogy, except that would be giving a bit too much credit


>I would wager that the average CS professor at a top school is significantly more well-read in history, ethics, and many other subjects than the "AI Ethics major" being proposed.

Wait, what? To become a professor you have to spend the vast majority of your time teaching and/or studying/producing research in your specific discipline. How do you reach the conclusion that someone who doesn't exclusively focus on a topic would be more well-read on it than someone who does?


It really depends on your basis of comparison here. I would prefer the opinion of a top CS professor. It is a very reasonable assumption that they are, like most other hyper-intelligent and intellectually curious people who find time to do other things in their long lifespan despite working 60 hours/week, extraordinarily well-read. This is relative to the binge-drinking undergrad who decides to become an AI Ethics major because the workload and intellectual rigor are lower than STEM and he knows he can easily get affirmative-actioned into a well-paying job micromanaging software developers with moral high ground as soon as he graduates, which seems to be the convergence of the idiotic suggestions being made. Would I say that a top CS professor has more of a background studying ethics than a top-ranking federal judge or a top-ranking philosophy professor? No, I would not. I'll also mention that one's level of knowledge and life experience is a completely separate issue from whether an individual is inclined to make "fair" or "unbiased" ethical judgments when given the authority to do so.

As btilly is referencing above, you already have a clear example of what type of people this field will attract: people with poor intellectual ability and zero understanding of inefficient, large-scale bureaucracies, basic probability or statistics, or software systems, painting simple organizational ineptitude as deliberate malfeasance and racism with the intent of bolstering themselves and their social group financially. It's clear there is an issue and it needs to be addressed as a growing pain, but deliberately training moral police with power over the software industry may lead to something like a dark age in technology.


> Thus we get quotes from them like "overpopulation on Mars"

Context, please? :)


The point being that AGI is at least 50 years and probably 200 years or more away.

It is such a distant threat and yet such a focus, and all the while we sit in our lovely apartments 40 minutes from nuclear incineration.


It's also just a cherry-picked example. Surveys of experts show there isn't really a consensus, but they generally aren't that optimistic.

https://nickbostrom.com/papers/survey.pdf

>We thus designed a brief questionaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


yeah clearly Dr. Ng was addressing fears of AGI in that quote. Thing is that machine learning can be used for nefarious purposes, intentionally or even unintentionally, without AGI being a reality. Those are more realistic fears

the conflation of machine learning and AGI causes confusion yet again..


Agree - I think your point that the use of the tool, rather than the tool itself, should be the concern is really important. Clearly devices (like nukes) that can kill thousands must be regulated tightly, but extending the same level of control to the whole technology makes applications that are of great benefit (like medical scanners using nuclear tech) potentially untenable.




i think this is a fair concern, especially when you consider the typically abysmal quality of science journalism about machine learning.

but I also don't think someone trained in CS, who has a deep understanding of machine learning systems is necessarily equipped to understand on their own the problems their systems can cause when they're out in the real world, being used and manipulated by real humans. It's just a completely different domain.

so if we say only CS researchers can speak credibly about the ethics and consequences of AI systems, then the work simply won't be done at all, or else will be done poorly.

this is why I think the article's call for an interdisciplinary field is reasonable -- in the current world, most CS researchers blithely ignore consequences, and most people who care about social impacts of these technologies do not adequately understand how they work.


No, training in CS is not sufficient. However it is necessary.

If an attempt is made by people with only a lay understanding of the subject, the work will be done poorly for a different reason. And the poorness of the work will be more readily apparent to those in the field than to those outside it.

Of the two necessary skills to intelligently comment on the subject, CS is the harder one to acquire. Which is why I believe that the best work will come from those who studied CS as a primary, rather than those who studied it as a secondary.


Money is the only thing that consistently drives outcomes... specifically, liability that puts profit at risk. Unlike statutory engineers, programmers/AI experts/etc. have no professional obligation to act ethically, and it's a fool's errand to assume that they will act to prevent unethical conduct. History has repeatedly proven this assertion out.

Nobody is going to call their friendly neighborhood ethicist until they are at risk of losing something. Rogue companies like Uber will only be checked when there’s a credible threat of being sued into oblivion. Employees of rogue companies will only act if they have exposure.


Does your argument imply that all regulation is failure?

Also, why assume that Machine Behavior specialists would not be AI experts?


It looks to me like GP only implies failure-proneness for regulatory systems controlled primarily by people who aren't deeply familiar with the subject matter being regulated.


What we really need is two houses, AI experts and ethics experts, taking their separate experiences and working together to find some ethically correct way to help integrate technology into our lives.


We covered these topics in my Computer Ethics class. I took a bunch of philosophy classes on Ethics too. You can teach a person from a book all about Ethics. The problem is that people will make the decision to be ethical with their gut or in the moment.

For instance, I was designing an app that would identify whether food contained an allergen. Eventually I came to the realization that my program, if deployed, could actually hurt people. So I stopped working on the project. I wrote a blog post about this (shameless plug) at https://medium.com/@zitterbewegung/making-computer-vision-sy... . The core issue was that I was giving people information on which they could base medical decisions. This gave me a bad feeling, and a bunch of lawyers told me the same when I showed them this post.

I eventually pivoted what I learned into this (yet another shameless plug) : https://steemit.com/twilio/@zitterbewegung/mms2text-let-your...

which doesn't have the issue of harming people.


But not making your app killed people who didn't notice nuts in their food...


That is too strong a conclusion. People with nut allergies already have procedures for determining whether nuts are in their food, for example asking the restaurant or preparing food themselves.

Without knowing if his app was accurate, we cannot say whether building or not building the app was the right decision.


The app was very inaccurate during my testing. Hiding almonds in a dish made out of pure chocolate (a Hershey's Kiss) would be an easy counterexample.

Also, I have a nut allergy so I have a complicated set of procedures to figure out if nuts are in the dish (that was the main motivation for the app).


Humans aren't any better at determining if a Kiss has invisible almonds. An AI could solve that (better than a human could!) by memorizing ingredient lists from public databases and tagging foods that have nutty variants, often ones that people wouldn't know about.


I think it's fair to say humans are better at reasoning about uncertainty and risk. If the food isn't in the database, or we aren't sure if it's a match, what does the algorithm say?

ML algorithms optimize statistical performance against loss functions or error rates. They aren't (yet) good at understanding the difference between a mistake that causes a missed dessert and a mistake that might kill you. Maybe they can guess correctly a higher percent of the time if shown flashcards, but that's small consolation from the hospital bed. They also aren't good at recognizing the limits of their own knowledge, i.e. saying "I don't know".
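
To make the asymmetry concrete, here's a minimal sketch (toy costs and thresholds I made up, not from any real system) of what you'd have to encode by hand: an explicit cost for a missed allergen versus a false alarm, plus a reject option so the system can say "ask a human" instead of guessing.

    # Sketch: asymmetric mistake costs and an "I don't know" band have to be
    # stated explicitly; a probability score alone doesn't know which errors kill.
    COST_MISS_ALLERGEN = 1000.0  # saying "safe" when nuts are actually present
    COST_FALSE_ALARM = 1.0       # saying "contains nuts" when the food is safe

    def decide(p_nuts):
        """Map an estimated P(nuts present) to a decision with a reject option."""
        if 0.05 < p_nuts < 0.95:
            return "not sure - ask a human"  # the model abstains instead of guessing
        cost_if_say_safe = p_nuts * COST_MISS_ALLERGEN
        cost_if_say_unsafe = (1 - p_nuts) * COST_FALSE_ALARM
        return "probably safe" if cost_if_say_safe < cost_if_say_unsafe else "contains nuts"

    print(decide(0.0005))  # probably safe
    print(decide(0.01))    # contains nuts - with a 1000:1 cost ratio, even 1% risk is flagged
    print(decide(0.5))     # not sure - ask a human

The point is just that neither the cost ratio nor the abstention band falls out of ordinary training; someone has to decide those numbers, and that's a human judgment, not a modeling result.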


But I would be responsible for people's deaths if my system said there weren't any nuts in the dish and they ate it.


This article seems really wishy-washy and confusing. What is actually being proposed?

A set of standards that automated algorithms should adhere to? What kind of standards are we talking about?

A new branch of philosophy that talks about the actions of automated algorithms? What would be the point? How do you actually turn that into something that progresses, and that people will act on?

More ethics classes in universities for CS students? How do you differentiate this from what's already happening? Is it just broader?

---

To try to steelman this a bit, I figure there are two issues at play.

The first is making sure that AI works the way we designed it to: it's free of biases, it doesn't endanger people, it adheres to laws. We're doing horrifically on this metric when it comes to long-term superhuman AI alignment stuff, but most people are talking about next-decade issues. Those are raw technical issues, and though we're still working on robustness and detecting biases, I really cannot see much need for external guidance beyond the natural technical progress the field is already heavily invested in. These are problems we already want to solve.

The second issue is the use of these technologies as tools. This is where we talk about how large companies' algorithms affect public perception, how automated militaries affect warfare, or how self-driving cars are litigated. These are not AI problems; they're social problems. Yes, the technology behind those examples looks kind of similar behind the scenes, but these are distinct social, legal, and economic problems.

This is kind of like seeing the advent of electricity, predicting its effect on society, and asking for people to study "ethics of electricity".


the first problem is one where CS researchers would likely benefit from more contact with outsiders, although I agree it's a pretty lively area of research already, at least for the shorter-term issues.

the second problem is one where social scientists could benefit from actually understanding how these systems work. yes, they are social problems, but the sociologists could do much better if they had a better understanding of the details of the technology.

The authors say they want "a consolidated, scalable, and scientific approach to the behavioral study of artificial intelligence agents in which social scientists and computer scientists can collaborate seamlessly". So, a new interdisciplinary category.

I get the sense they are especially concerned with the empirical study of behavior and social impact of big ML systems in the real world. In another post I compared this sort of thing to sociologists studying transit infrastructure.


> the first problem is one where CS researchers would likely benefit from more contact with outsiders

Could you be more concrete about how you would like other fields to contribute?

> the second problem is one where social scientists could benefit from actually understanding how these systems work [...]

> The authors say they want "a consolidated, scalable, and scientific approach to the behavioral study of artificial intelligence agents in which social scientists and computer scientists can collaborate seamlessly". So, a new interdisciplinary category.

I don't really agree. You don't need to know how electricity works to study its societal effects, and the societal change from electricity-augmented manufacturing is a completely different problem to the health and safety regulation of indoor sockets.

The same is true for AI. You don't need to know what backpropagation is to work on the long-term ramifications of automated warfare, and that in turn is a largely irrelevant discussion for someone figuring out whether a company's hiring algorithms are racially biased. There is neither a clear need for top-down regulations to be grounded in the minutia of the systems, nor any obvious advantage of grouping these discussions under one umbrella.

There is a need for the disciplines already dealing with these problems to pick up more specialised knowledge as the systems get more common, but that is a far cry from what seems to be argued in the article.


> Could you be more concrete about how you would like other fields to contribute?

In the world of fair machine learning, "What are the right criteria for fairness under [set of circumstances]?" is usually not easily answerable, and I don't think we'll find satisfactory answers without more involvement from non-CS researchers -- both people in economics, law, and the humanities who are sort of generally concerned with fairness in society, and people who know a lot about specific domains where systems are being deployed. This in particular seems like an issue of "machine behavior" as described in the article.
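
As a toy illustration of why the criteria question is hard (made-up base rates, nothing domain-specific): two common fairness definitions can pull in opposite directions as soon as the groups have different base rates, and choosing between them is a value judgment rather than a modeling decision.

    # Sketch: a classifier that satisfies "equal opportunity" (equal true-positive
    # rates) can still fail "demographic parity" (equal selection rates) whenever
    # base rates differ between groups. Numbers below are hypothetical.
    import numpy as np

    y_true_a = np.array([1] * 50 + [0] * 50)  # group A: 50% qualified
    y_true_b = np.array([1] * 10 + [0] * 90)  # group B: 10% qualified
    y_pred_a, y_pred_b = y_true_a.copy(), y_true_b.copy()  # a "perfect" classifier

    tpr_gap = y_pred_a[y_true_a == 1].mean() - y_pred_b[y_true_b == 1].mean()
    selection_gap = y_pred_a.mean() - y_pred_b.mean()
    print(tpr_gap, selection_gap)  # 0.0 and 0.4: one criterion holds, the other doesn't

Deciding which gap matters in a given hiring, lending, or sentencing context is exactly the kind of question CS training alone doesn't answer.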

With regards to transparency/explainability of models, there's a problem of making sure the "explanation" is actually useful and intuitive to the user, where psychologists (and HCI people who are already in CS depts) may have a lot to contribute.

In both cases there is a little ad hoc communication and there ought to be more.

> I don't really agree. You don't need to know how electricity works to study its societal effects, and the societal change from electricity-augmented manufacturing is a completely different problem to the health and safety regulation of indoor sockets.

I think electricity is not the best analogy, because it has a really subtle theory that few people understand, but we're so familiar with its use that many of its interesting properties are too obvious to see. It's also not at all autonomous -- it is often fruitful to think of a specific machine learning system as an agent, whereas this doesn't make much sense for electricity. Most importantly, I think it's just broader and more general than machine learning (trivially so!).

> The same is true for AI. You don't need to know what backpropagation is to work on the long-term ramifications of automated warfare, and that in turn is a largely irrelevant discussion for someone figuring out whether a company's hiring algorithms are racially biased.

The commonality here is you care about understanding or predicting the empirical behavior of machine learning systems interacting with the real world, especially with humans. ("Long-term ramifications of automated warfare" might not qualify, but I think medium-term ramifications certainly could.)

I don't think CS researchers are trained or particularly interested in empirical studies of human behavior, the stock market, etc., nor should they be, so somebody else will have to help. That somebody had better know enough about CS to be able to collaborate with actual CS researchers, though, or the results are going to be poor.


> In the world of fair machine learning, "What are the right criteria for fairness under [set of circumstances]?" is usually not easily answerable

There is nothing particularly machine learning specific about this. If I want to design an AI to detect bank fraud, you're right that I want to do cross-disciplinary research in its design, but I do not understand what AI ethicists would add to that.

My disagreement is not that AI will involve itself in other fields, and in the process we need to learn about those fields. It's with the idea that either those people should be talking about the raw technology or that there should be a general field about how AI specifically relates to the sum of every other field.

> The commonality here is you care about understanding or predicting the empirical behavior of machine learning systems interacting with the real world, especially with humans.

Which is just the technical domain of AI research. Narrowing it down to the commonalities has removed all of the interesting points we were going to study!

I agree that when AI is added to social systems or the stock market, we need to involve people versed in social systems or the stock market. I still do not see why they cannot collaborate in exactly the same way that they have already. The only difference visible to me is that the hammers are bigger.


if a sociologist wanted to study the social impact of transit infrastructure, that would be perfectly reasonable, and we wouldn't expect them to know much going in about signaling systems, switches, or different types of rolling stock. likely a railway engineer wouldn't be all that successful trying to do the sociologist's job.

i think machine learning systems are different, and any researcher in a new field of "Machine Behavior" would have to know much more about how they work than in the railroad example. but there's no reason to suppose that work done by people trained outside of CS departments will be nonsense.

and as the authors say, there are real blind spots where CS researchers might just ask the wrong questions entirely.


> if a sociologist wanted to study the social impact of transit infrastructure, that would be perfectly reasonable, and we wouldn't expect them to know much going in about signaling systems, switches, or different types of rolling stock.

Yes -- and a good sociologist would also learn a lot about that stuff once they started their research, I'd imagine :). Not enough to be a railway engineer, but presumably enough to be able to speak meaningfully & intelligently to railway engineers. Not unlike science journalists.


The problem here, I think, is that although humans can be studied without knowing the structure of their brains (psychology, sociology, etc.), machines of today cannot. The reason is that humans are humans and barely differ from each other biologically (the structure of the brain), but ANNs are very diverse and their behavior changes depending on their "biology," so machine psychology or machine sociology cannot be built on stable ground.


> although humans can be studied without knowing the structure of their brains, (psychology, sociology, etc...) machines of today cannot.

why?

Or do you mean that study of some humans is generalizable to most/all humans, whereas machines lack this consistency and generalizability?

Anyway, why would this concern prevent "feature detection" tests that apply to all machines?


"In our own work—being computer scientists ourselves—we’ve been frequently humbled by our social and behavioral science collaborators. " this part rings truer than any other phrase in the whole thing.


AI is not yet complex enough to warrant such a discipline.

Obviously, like any other tool, AI systems are part of the purview of anyone studying how society works (and as a particularly complicated tool, deserve a particularly extensive examination.) But at this point, studying the ethics of a machine is like studying the ethics of a fruit fly.


"We often cannot certify that an AI agent is optimal or ethical by looking at its source code, any more than we can certify that humans are good by scanning their brains."

What do people make of this claim? Is that a fair comparison?


It's fair to some extent. The human brain is vastly more complex, so I don't love the comparison. However, with very modern machine learning which is based on deep neural networks, even though the source code is simple, it produces a very complex neural network after training on data. That network is very hard to understand and predict.

Even with simpler algorithms, you might argue that in a complex environment they could have some emergent behavior that's hard to foresee based on the source code. For example, if one is not careful then even simple linear regression on some datasets can "learn" techniques like redlining -- using zip code as a proxy for race as a part of biased predictions.
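
Here's a minimal sketch of that zip-code-proxy effect with synthetic data (all numbers invented for illustration): the regression never sees the protected attribute, yet its predictions reproduce the historical gap because zip code is correlated with it.

    # Sketch: a linear model trained without the protected attribute still
    # encodes it through a correlated proxy feature (zip code). Synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)             # protected attribute, never shown to the model
    zip_code = group + rng.normal(0, 0.1, n)  # zip code strongly correlated with group
    income = rng.normal(50, 10, n)            # a "legitimate" feature
    y = 0.5 * income - 5.0 * group + rng.normal(0, 1, n)  # historically biased outcomes

    X = np.column_stack([income, zip_code])   # group itself is excluded from training
    pred = LinearRegression().fit(X, y).predict(X)
    print(pred[group == 0].mean() - pred[group == 1].mean())  # ~5: the gap survives via the proxy

Nothing in the source code looks biased; the problem only shows up when you study the trained model's behavior on data, which seems to be the article's point.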


It sounds like the term they're looking for is "robopsychologist"; I was hoping for a mention of the great Susan Calvin.


[flagged]


What was it that initially made you think they are morons? Do you believe that AI should not be held to ethical standards?


It is still quite possible they are morons, it's just less certain.

Where did you read me saying that? I think ethical standards for AI should come from reputable CS PhDs who understand the technology they're pontificating about, working in conjunction with lawyers and legislators and taking into account public sentiment, which hopefully will eventually be the case when this becomes more of a mainstream issue. The authors do not appear to have any of that background. I think the AI historian/philosopher/psychologist field belongs on the fringe for now, and likely for a very long time.


I did not read you saying that, I was asking you a question.

> ethical standards for AI should come from reputable CS PhDs who understand the technology they're pontificating about

Should AI be held to different ethical standards than humans for the same actions? Given that we are beginning to see AI (proto-AI?) directly manipulate and interact with our physical world (autonomous vehicles, smart homes, etc.) I think it's a perfect time to study AI ethics in depth.


Quote the entire sentence if you want to quote me there. I do not think "trained ethics experts" should fall into any equation whatsoever. There are enough of them in the middle east and deep south. We don't need them infecting the intelligent world anymore than they do already. The conversation should not be driven by an intellectual movement that is equivalent to a bunch of cavemen poking a coconut with sticks.

As my post that you're responding to stated, this will enter the mainstream in a few years with a lot of unqualified people voicing their opinions. Hopefully the qualified people end up drafting the laws (note: not professional ethicists)


Now you've finally made your position and opposition clear: you believe "ethics" is interchangeable with "religion." I would not want to work alongside an autonomous machine designed by someone who dismisses all ethics as nothing more than superstitious mythology. I think your view is a great example of why AI ethics should be examined by outsiders.


I have in no way equated ethics with religion or dismissed the purpose of ethics in AI, merely questioned who will be most fit to pass judgment while claiming them as a justification; however, it's now clear to me from that last post that you're an idiot, so I will not engage further.

Edit: edited moron to say idiot


> I do not think "trained ethics experts" should fall into any equation whatsoever. There are enough of them in the middle east and deep south. We don't need them infecting the intelligent world anymore than they do already. The conversation should not be driven by an intellectual movement that is equivalent to a bunch of cavemen poking a coconut with sticks.

I'm rolling my eyes so hard. Whatever did you mean by this?

Don't give yourself too much credit, you've made no such point. You've merely called the authors morons and equated ethics and religion with cavemen.


It does seem to presuppose agency on the part of the machines being studied. We aren't worried about making sure all of our other machines are ethical. Why not?

That is, we should maintain that the people involved are held to ethical standards. But I don't see that being up for debate. Is it?


> We aren't worried about making sure all of our other machines are ethical. Why not?

As more machines are autonomously making decisions and exhibiting emergent behavior while also being able to directly interact with space shared with humans, maybe we should be worried about it.


The ethics are in building the machines, though. Consider, we don't worry about the ethics of bio-weapons, from the perspective of the weapon. Rather, we say it is unethical to build said weapons. (Right?)

This doesn't change just because we could build a nanotech (or other) weapon that selectively kills people.


I disagree, particularly if the machines are making their own decisions and exhibiting emergent behavior.

I don't think only weapons should be considered. I think there will be many unintended outcomes from AI that was never meant to hurt anyone, but has nonetheless.


But this is somewhat nonsensical. Is a slaughterhouse ethical because no kids are allowed to walk into it? Is it unethical because it would kill a person in the wrong place?

It would be unethical to build a slaughterhouse that traveled around and had the potential to go into a populace. It seems odd to call the slaughterhouse itself unethical, when it was the building of the machine that is the problem.

If you are giving agency to the machine, then you are wanting to teach the machine ethics. At some point, I can see that happening for "intelligence." Not for "intelligent machines" though.


For what it is worth, I had a similar gut level reaction. See https://news.ycombinator.com/item?id=16738190 for my explanation of why I think that this is a bad idea.


I think you're overlooking the needs of a third stakeholder in this debate: the people who will have to share space with autonomous and/or artificially intelligent machines but are neither an expert in ethics nor creators of AI. As a bystander to this debate I'd like to know the debate is happening now and not after I've been killed by an autonomous machine that may or may not have been acting ethically.



