University of Helsinki free MOOC on the Ethics of AI (mooc.fi)
144 points by ignoramous on Dec 24, 2020 | 60 comments



Ethics is a trend.

It's whatever is trendy to say or think at the moment. A set of arbitrary, unenforceable rules to shame practices that are decided by whoever convinced other ethicists to grant him a PhD.

Of course, all that is done without going through the rigor of making these practices illegal. That would have a real impact. And require real work.


How else would you get folks incapable of getting a PhD in hard sciences to rule over the folks that do hard science? How else would you make those who manipulate real-world data to get their preferred outcome feel good about themselves while doing it, or at least to hide the ugliness captured by the data? Wouldn't getting famous from hard work and taking risks take too long when one can just shame some top ML honcho on Twitter and become the ethical AI celebrity overnight? I predict we will need a license to do ML research soon and that would have to be rubberstamped by some AI ethicist... Not like the most recent ML conferences aren't doing that already ("Your dataset has only two genders? Bye")


> How else would you get folks incapable of getting a PhD in hard sciences to rule over the folks that do hard science?

In most subfields of scientific/technical ethics, it's very common for ethicists to have advanced degrees in the scientific/technical area.

> I predict we will need a license to do ML research soon and that would have to be rubberstamped by some AI ethicist

Except insofar as ethics is a component of the education and licensing requirements for regulated professions like medicine and law, that's not a thing that has happened in the US for any field (and even there it is practitioners, not separate ethicists, who are responsible for the content and validation).

So, aside from sheer paranoia, what is this prediction grounded in?


> it's very common for ethicists to have advanced degrees in the scientific/technical area.

Maybe; I can't tell, since I am missing hard data on the distribution of degrees among ethicists. The one from Facebook whose lectures I attended had no technical degree but multiple degrees related to philosophy and sociology. One of the first lecture statements was that "technology is inherently about power", which is obviously how people outside tech perceive us and what many SWEngs naively miss, as we tend to think it's all about fun and self-realization (well, OK, it's dystopian now after profit-sensing individuals invaded our field). So those people outside want to regulate us with "ethics", i.e. by creating power structures that they rule and that we need to consult for permission.

> sheer paranoia

Was this truly a line of thought you wanted me to follow in order to become unreasonable?


In any form of human co-operation we need to agree on some rules for how that activity is carried out. It's not about some "ethicist" telling you what rules you must follow, but about the majority of us agreeing on what the rules should be. The job of an ethicist, then, should be to point out problems in proposed and already agreed-to rules.


Yes, we already have those. They are called laws.

Ethics as is currently practiced in tech is using the mob to remove whoever you don't like.

To put it another way: the only person to lose their job over Epstein running a rape island was Stallman for being right but uncouth.


> Ethics as is currently practiced in tech is using the mob to remove whoever you don't like.

To the extent that that's even arguably true, it's irrelevant, since it's about the practice of basic organizational ethics within the tech community, and not about either the theory or the pragmatics of the discipline of ethics of technology, or any of its subdisciplines like AI Ethics, which are about the ethical implications of the application of technology, and not about the personal, non-tech-related ethics of people who happen to work in the field of technology.

> To put it another way: the only person to lose their job over Epstein running a rape island was Stallman for being right but uncouth.

Yeah, that is a very good illustration of how you are talking about something that has nothing to do with the Ethics of AI or any other ethics of technology subfields.


Pointing out that the interests of the supposed gatekeepers of morality are misaligned with society is a lot more worthwhile than doing the same for a group of people whose only crime is having a wage 50% higher than the median in the US.


that said, AI does seem to require some hard ethical thinking about how it should be used and by whom, and which limits ought to be implemented

and I don't expect a bunch of startup devs who only care about meeting a deadline or their next funding goal to really think about the philosophy of what they are doing


How is that different from linear regression?



>> Not like the most recent ML conferences aren't doing that already ("Your dataset has only two genders? Bye")

Hi. Are you saying that the most recent machine learning conferences required datasets with more than two genders (where gender was applicable), and otherwise rejected submissions?

Can you please name some of the recent machine learning conferences that did this?

Also, at which point did the rejection happen? E.g. was it during a preliminary review stage, or during review?


This is unhelpful reductionism. Like science, it's an ongoing project to minimize harm in a very real way. No one is pretending that we are done or that forcing ideas on people is a good idea. The point is just to systematically think about the consequences of our actions. Ethics provides tools for this.


> Like science, it's an ongoing project to minimize harm in a very real way.

What would this "very real" way be? Different people have widely different objective functions they are trying to maximize. The expectation that we should always attempt to pursue cosmic justice is completely unrealistic.

We already have a proper way of enforcing "very real" ways to curtail abuses, it's called passing laws.

There probably is a place for serious, proper discussion of what the proper working ethics for a software engineer are, but it's nowhere near as preeminent as the zeitgeist suggests.


> We already have a proper way of enforcing "very real" ways to curtail abuses, it's called passing laws.

And how is it decided what should become illegal? Just passing laws banning unethical things? Now we are back to square one deciding what is and isn't ethical.


The ballot box.


But who writes the words that go on the ballots? Ballots are not like Ouija boards. Someone has to make a proposal for others to vote on. The person who writes that will be an ethicist, if not by title, then by the fact that they wrote the text of a proposal to ban something unethical.


Whoever pays the politicians the most money.


Could you expand upon the ethical implications of this?


No, they also pay the salaries of the ethicists at the universities.


Who's talking about cosmic justice? Don't be ridiculous. We can identify instances of harm and that's about it.

Every system dealing in dichotomies (including binary) will eventually fail in reality, which always is more nuanced. Exactly how it fails depends on our knowledge, biases and our ability to account for them.

The problem is that flawed, biased AI systems are presented as neutral and fair, when they can never be. Ethics is what you get when you genuinely care about other people's reality, not just about following the law.


One of the points of corporate ethics is to avoid pissing off the public so much that laws get enacted to regulate you.


I'd say it's just the reverse. Like the strange and consistent finding that people are shockingly unmotivated by monetary incentive, it seems likely that straight-up legal penalties would be less effective than soft social pressure. We are fundamentally social creatures, and things like signaling and status are usually in the driver's seat.


A law has to pass the rigorous test of a court. Soft pressure just has to get a mob going.

> Like the strange and consistent finding that people are shockingly unmotivated by monetary incentive

I find that hard to believe. Maybe again there's some strange societal pressure against voicing that money is a motivation.


I would guess what GP is referring to is something along the lines of this book: https://www.danpink.com/books/drive/

IIRC, the idea is just that money only goes so far up Maslow's Hierarchy of Needs. Once our material well-being is taken care of, we want mastery and autonomy and incentives geared that way are more effective than money.


It's quite lucky that it's not people who do most of the harm in society, it's corporations.


You're talking about applied ethics, not ethics as a whole.


>A set of arbitrary, unenforceable rules to shame practices that are decided by whoever convinced other ethicists to grant him a PhD.

As someone also skeptical of ethics, I don't think this is a very useful or helpful way to see it. You say ethics is arbitrary, but it's only as "arbitrary" as every other human-made system, and there are some (good, in my judgement) theories on the connection to human evolution.

You say that ethical rules are decided by an ethicist going for a PhD, but historically, ethical discussion has happened outside the (formal) academy, never mind just to score a PhD. Why couldn't we say the same about sociology or economics, or even mathematics? It's not as though these people decide ethics for us - ethicists only decide ethics to the extent that artists decide what's blue or red. It's a process you are more than welcome to engage in from "the outside", and you don't need a degree in philosophy to do so. The fact that you may be ignored is irrelevant to the correctness of your theories.

If you disagree with an ethical theory, even a popular one, it's your freedom to refuse to obey it when imposed, for the reasons you put forward. It may even be a moral responsibility to do so.

You say ethics is unenforceable. This is quite clearly either trivially true, or false - ethics has been and is enforced by leveraging moral responsibility, both on the conscience of the actor and through what society will tolerate (usually when ethics progresses into law). Shame and guilt are also motivating factors.

You speak as though there is no ethics behind making bad practices illegal, but law is filled to the brim with such debates that often hinge on ethics, or principles we will or won't give up for ethics. To take a recent example, debate around freedom of speech or religion. The whole reasoning behind those "self-evident" freedoms is our ethical reasoning.

Ethical reasoning is neither static, nor ever in the best possible state. Ethical discussion (mind you, led mostly by people who aren't experts in ethics) over the past 100 years alone has changed so much. To dismiss it on the grounds that it changes is to your own detriment as someone who, like everyone else, is part of the constant ethical dialogue. By refusing to engage with the philosophy PhDs in reasoned argument, you're giving up your "vote", so to speak.

When I see an argument or position in philosophy which is too narrow, built on false premises, ignorant of previous research, or simply logically fallacious, I don't throw my hands up and sneer at those careerist PhDs in their ivory tower. I don't give away my seat at the table, bury my head in the sand and insist they're wrong for reasons I refuse to articulate. Instead, I form my principles for myself, and if questioned why I don't follow the current trendy theory, my answer is simple - "It's wrong, and here's why..."


Ethics is not a trend. For 50 years those who do nothing and know nothing have been yapping about ethics. I suggest that we start this discussion when a machine makes the first unethical move. That saves lots of energy and grant money.


I just read all of this. I think the whole course -- which is actually pretty short, like an hour or two of reading sans exercises, and for the most part clearly written -- would be a good assignment for a single day of a first-year university computer science course. It does offer a reasonable, albeit almost entirely nontechnical, overview of why people should care about AI ethics, and what caring about it "means".

As for my reaction to it, I'll borrow a useful distinction the course makes near the beginning about three types of ethics:

> 1) Meta-ethics studies the meaning of ethical concepts, the existence of ethical entities (ontology) and the possibility of ethical knowledge (epistemology). 2) Normative ethics concerns the practical means of determining a moral (or ethically correct) course of action. 3) Applied ethics concerns what a moral agent (defined as someone who can judge what is right and wrong and be held accountable) is obligated or permitted to do in a specific situation or a particular domain of action.

In this language, the course is almost entirely about 1) with a few gestures in the direction of 2) and 3) near the end. I'm not totally unsympathetic to this; there are still people out there who are not convinced AI (or algorithmic) ethics is a "thing" (response: any apparatus that makes choices that matter involves ethics!), so I get why justifying the topic itself seems necessary.

But in the other direction, I think people sometimes conflate "ethicist" with "person who will know what to do" -- that is, someone with expertise in 2) and 3) above. In my limited experience, ethicists seem to "work on" mostly 1). I hope there will be more work on the 2) and 3) parts of AI ethics in the near future.

By the way, if you want a more technical look at this topic (albeit framed more narrowly as "fairness and machine learning"), there is a nice draft textbook on the topic [1]. It probably has enough material for a semester-long upper-level computer science course.

[1] https://fairmlbook.org/


Humans are “flawed” and “biased” (product of their environments and learnings) and thus anything they create and “control” is by definition “flawed” and “biased”. AI is just a set of the many tools humans have created. The biases can be both positive or negative.

It is like people who talk about human rights while ordering bombings of children far away. The ethics of AI is similarly flawed. It might solve some problems here but might inadvertently or deliberately accentuate others, depending on your perspective. It is in a sense a cost/benefit solution where we may never even be aware of the costs but just the benefits (those who incur the costs may not have a voice, or may be muzzled).

I am all for AI by the way. I simply see it as a tool. It is up to societies to formulate acceptable policies and measure the costs and benefits.

Humans were good and bad before AI and I see nothing changing there.


The point is that things like AI can multiply the effect of any human bias. Humans were fundamentally just "bad" after as before the invention of the machine gun (technically mostly modern artillery, but I'm being poetic), but there probably wouldn't have been as many dead at the Somme without it.

We are seeing every day how AI can be used to censor and suppress, that means we need to be increasingly systematic in how we approach biases both in practice and as a concept (the latter aimed really at those who apply political dogma to what is ultimately a scientific question).


To follow up (from a different debate, but this point still stands)

> A core, not side, effect of technology is its ability to magnify power and multiply force — for both attackers and defenders.

> For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

https://www.schneier.com/blog/archives/2013/03/when_technolo...


This is one of the best criticisms of AI that I have read.

Leaving science alone and working on our own biases should be the way forward.

But with censorship on the rise in every corner of the world and political spectrum, I would say things are bleak.


But who decides which biases are good or bad? If we have no yardstick for that, we become ineffective at personal progress.


What is your point? The fact that humans have shit ethics does not mean we need not address ethics in AI. Especially if the AI is then used to justify the biases.

And that comparison is completely ridiculous.


Right, like we extensively worked out the ethics of automobiles before we allowed them.

Can you imagine if we let cars contribute to inequity by allowing rich people to own them and not poor people? Or if it was possible to just aim a car at people without the self-driving stopping you.


The idea is that AI developers should first of all be aware of the potential problems. I would say that maybe AI should be done by competent engineers who have studied this domain, including data science, statistics and ethics, and maybe these engineers should be held responsible, similar to how other professionals can lose their right to practice if they don't respect the rules.

The excuse that "the data is garbage, so it is not my fault" is terrible; detecting whether the data is garbage is part of your job, IMO.

My criticism applies to AI people who work on projects that have large consequences or affect a large number of people; for small projects we can keep the "no warranty" disclaimer.


Yet another example of a wrong interpretation of statistical bias.

Not that I expected much better from an AI Ethics course clearly not taught by serious machine learning experts.


I have always wondered whether a lot of bias is not just an application of Bayes' theorem (applied by a human or by an AI).
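
For a concrete illustration of that thought (a minimal sketch with entirely made-up numbers, not taken from any real dataset): a predictor that just applies Bayes' theorem to whatever base rates are in its data will reproduce those base rates, and that is exactly what tends to get read as "bias" downstream.

    # Hypothetical numbers, purely to illustrate how Bayes' theorem turns
    # skewed base rates into "biased" predictions; none of these figures
    # come from real data.
    p_doctor = 0.01              # P(doctor): prior probability of the label
    p_woman_given_doctor = 0.35  # P(woman | doctor): skew present in the corpus
    p_woman = 0.50               # P(woman): prior probability of the feature

    # Bayes' theorem: P(doctor | woman) = P(woman | doctor) * P(doctor) / P(woman)
    p_doctor_given_woman = p_woman_given_doctor * p_doctor / p_woman            # 0.007
    p_doctor_given_man = (1 - p_woman_given_doctor) * p_doctor / (1 - p_woman)  # 0.013

    print(p_doctor_given_woman, p_doctor_given_man)

A perfectly "rational" Bayesian predictor still ranks "doctor" lower for the woman here, simply because the assumed base rates do.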


[flagged]


If you think about ethical bias in a system that has no concept of ethics you are going to have a bad time.

You're essentially trying to fix reality by altering datasets which is a completely erroneous way of solving the problem.

It's playing God by determining what is right and what isn't all alone and without the input of society.


These systems are created by humans, with some intent. Studying the ethics of AI is really thinking about the ethical issues surrounding the practice of implementing AI systems.

Ethical reflection on the intent and impact of the systems one builds is not mandatory in our field (it is for other professions), but it is probably still a good thing to consider if you want your contribution to society to be a positive one. Taking time to think about this stuff in a MOOC sounds like one way of avoiding doing that thinking alone and without the input of society.


I disagree. Take one of the examples mentioned in said MOOC: the bias in word embeddings that makes vector arithmetic go from "doctor" to "nurse" if you replace "male" with "female".

I agree that it would be nice if the returned vector were "doctor" in both cases, but neither the embedding code (the implementation) nor the embedding algorithm (the theory) has any idea about gender, ethics or morals.

Here the bias comes from the datasets the AI trained on.

The bias of those datasets comes from society writing texts in a biased way.

So the solution to fixing this "bias" is fixing the language used in society which is not an AI problem nor a dataset problem.
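
For what it's worth, here is a minimal sketch of the analogy arithmetic being discussed, assuming the gensim library and its downloadable "word2vec-google-news-300" vectors (the classic formulation uses "man"/"woman" rather than "male"/"female"; any pretrained static embedding should show the same effect). Whatever comes back is a property of the training corpus, not of the algorithm:

    # Minimal sketch, assuming gensim and its pretrained
    # "word2vec-google-news-300" vectors (a large one-time download).
    import gensim.downloader as api

    model = api.load("word2vec-google-news-300")

    # vector("doctor") - vector("man") + vector("woman"):
    # the nearest neighbours simply reflect co-occurrence statistics
    # in the training corpus, not any notion of gender, ethics or morals.
    print(model.most_similar(positive=["doctor", "woman"],
                             negative=["man"], topn=5))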


I have been wondering if it would be possible to collect examples of bias, the same way we collect other datasets, and teach NNs to de-bias themselves. The reason why this is hard to decide is that bias is kind of the opposite of relevant information. The data would be patterns to avoid rather than to follow.

Assembling a database for the purpose of de-biasing might also prove unfeasible because of inductive bias.


The fundamental problem is that deciding what's biased is extremely subjective and context dependent. If an AI says "crime is often a problem in lower income neighborhoods", is it delivering a statistical fact or expressing bias against the poor? Depends entirely on how we think people are going to use the results.


Good point. Inductive bias takes many forms.


Or, accept that many women enjoy being nurses and doctors, such that man->doctor | woman->nurse/doctor isn't weird.

It's not a competence thing (or it stopped being one once women doctors became a thing); it's a motivation thing. Not to a "better" place, but a different one.


>You're essentially trying to fix reality by altering datasets which is a completely erroneous way of solving the problem.

I would argue this is a decidedly effective way to solve the problem.


No. This policing of “toxicity” is exactly why so many institutions are suffering from progressives’ excesses.

Learn to take criticism before it’s too late and you crash and burn.


I was pointing out unnecessary negativity and disinformation. You guys are looking for excuses to be angry; I don't get the extrapolation to progressives' excesses in institutions (I don't even know what this refers to). I got outsmarted in real HackerNews fashion. Hope you won all the Christmas dinner debates this year.


This view on bio-ethicists reflects my views on ethics as an academic field in general: https://marginalrevolution.com/marginalrevolution/2020/12/fr...

Some quotes:

> I realized quickly that the incentives are all wrong if what we want is people who will think hard about humanity’s pressing ethical dilemmas and who will suggest intuitively appealing solutions.

> Since almost all ethicists are academics, they have to publish, and in order to publish you have to be novel, and since the basic principles of ethics are little changed for millennia the incentives to do thorough homework on the basis of principles which are widely understood and accepted is not great.

> Furthermore, if you decide to be a utilitarian, then basically all ethical issues will boil down to cost/benefit analyses which you have to outsource to technocrats, so your unique expertise as an ethicist will be worth little.


I just finished the course and what a complete pile of garbage. It seems to reinforce group think more than anything. They ask for a lot of free form responses that are graded by peers and it's obvious that there is a correct and an incorrect point of view. I'm not sure which I'm more afraid of, unaccountable corporations or self appointed technocrats.

They talk about ethics washing but don't seem to consider that this course is a part of that cycle.


I am sorry, but I don't follow your reasoning. Why do you think the course provides an example of ethics washing? As I understand the term, ethics washing means, roughly, that someone claims to be ethical while (s)he is not really committed to implementing ethical principles in practice. I don't think it is a fair accusation in the context of free online MOOCs.

Correspondingly, I don't understand your other concerns either. The course is an introductory, free online course for anyone interested in the state of the art of the ethics of AI. As I see it, it is not meant to be a mind-blowing course for professional philosophers doing AI ethics; one might expect professionals to know the field already. And it is not meant to provide a list of dogmas or principles for true believers; if you want to have your correct answers or to get the correct codes of conduct, this clearly isn't your course. As a philosopher, I find it very strange to claim that it's "obvious" that there is a correct/incorrect point of view. Instead, I find it obvious that most of the questions are really complex.

As for the peer reviews, I see your point. I have a background in AI ethics - I am not an expert, but I know some stuff - and I have also lectured online, so I know how difficult it is to design an introductory course for a non-specified target group. But as I see it, the peer reviews are meant to provide feedback and an opportunity to learn from others. Of course, in the best possible world, there would be a teacher commenting on all the answers, but that is in contradiction with the MOOC ideology.

But your claim that the course "reinforces" groupthink - that's crap. How exactly is it supposed to do that? Or do you honestly find other people intellectually so... "underdeveloped" that their thoughts can be manipulated simply by asking them to read or evaluate someone else's answers? If that is the case, then indeed it is necessary to worry. And yes, if that is the case, then we are very glad that you are worrying about it.


There's also a free course from the Linux Foundation on this topic: https://www.edx.org/course/ethics-in-ai-and-big-data


I think it's premature to talk about "AI ethics" when AI is nowhere to be seen.

In the US this whole "ethics" trend is being used as a tool to politicize and censor research[1].

[1] https://spectator.us/militant-liberals-politicizing-artifici...


That story is from the magazine The Spectator, right? According to Wikipedia, "The Spectator is politically conservative and supports the Conservative Party".

Being openly "conservative" means being biased. Although they are entitled to their opinions, it is good to be aware that they have a (conservative) bias. They do not try to be objective, they try to be conservative.

And what is "conservative bias"? It of course means conserving the status quo, conserving the power of the rich and powerful, conserving inequality. That kind of thing is what I think ethicists should be studying.

Note that the conservatives, since by default they want to conserve things and don't want change, don't want ethicists snooping around asking questions about what, if anything, might be unethical about the existing power structures. No Sir :-)

So it seems to me that for a conservative magazine to criticize "ethicists" is very self-serving for them.

https://en.wikipedia.org/wiki/The_Spectator


Bias is natural and inevitable, and it has to be acknowledged in research. The problem is, liberals want to shut down anyone who disagrees with their own bias. It's not about 'snooping around asking questions', but more like slandering and aggressively silencing people for having a different opinion.

You'd think it would never affect you as a liberal, but it was funny to watch Google's very progressive and liberal research leads labeled 'racists' and evil incarnate for standing in the way of the activist mob. I mean, they are still trying to 'cancel' Jeff Dean, of all people, for firing some activist [1], and Nando de Freitas for merely supporting both sides [2].

If this is your ethics discourse, I'd rather AI be unethical.

[1] https://docs.google.com/document/u/0/d/1f2kYWDXwhzYnq8ebVtuk...

[2] https://twitter.com/NandoDF/status/1336030085640548354


I think what you're saying is that some people who claim to be ethical are not. That may often be so. Just because people say something does not necessarily mean it is so. But I think "ethical" by definition means what is good for us all.


No, I'm saying that "ethics" discourse has been hijacked by militant activists pushing a political agenda.


Political Ethics are a kind of ethics.



