These systems are created by humans with some intent. Studying the ethics of AI is really thinking about the ethical issues surrounding the practice of implementing AI systems.
Ethical reflection on the intent and impact of the systems one builds is not mandatory in our field (it is for other professions), but it is probably still a good thing to consider if you want your contribution to society to be a positive one. Taking time to think about this stuff in a MOOC sounds like one way of avoiding doing that thinking alone, without the input of society.
I disagree. Take one of the examples mentioned in said MOOC: the bias in word embeddings that makes vector arithmetic go from "doctor" to "nurse" if you replace "male" with "female".
I agree that it would be nice if the returned vector were "doctor" in both cases, but neither the embedding code (the implementation) nor the embedding algorithm (the theory) has any notion of gender, ethics or morality.
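For the curious, here is a minimal sketch of that analogy query, assuming gensim and its downloader are available; the specific model name and the exact neighbours returned are my assumptions, and results vary by embedding:

```python
# Sketch of the "doctor - man + woman" analogy query on pretrained vectors.
# The model name is illustrative; any word2vec/GloVe-style embedding works.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # pretrained word vectors

# Vector arithmetic: doctor - man + woman, then list the nearest words.
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))

# Depending on the embedding, "nurse" often ranks near the top here,
# which is exactly the bias the MOOC example points at.
```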
Here the bias comes from the datasets the AI trained on.
The bias of those datasets comes from society writing texts in a biased way.
So fixing this "bias" means fixing the language used in society, which is neither an AI problem nor a dataset problem.
I have been wondering if it would be possible to collect examples of bias, the same way we collect other datasets, and teach NNs to de-bias themselves. What makes this hard is that bias is kind of the opposite of relevant information: the data would be patterns to avoid rather than to follow.
Assembling a database for the purpose of de-biasing might also prove unfeasible because of inductive bias.
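There is a related, more mechanical approach in the literature (hard debiasing, Bolukbasi et al. 2016): estimate a "gender direction" from word-pair differences and project it out of words that should be gender-neutral. A toy sketch with random stand-in vectors (with real embeddings you would look the words up in the model instead):

```python
# Projection-based de-biasing sketch. Toy random vectors stand in for a real
# embedding; the word list and dimensionality are arbitrary choices here.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "doctor", "nurse"]}

# Direction along which the bias is assumed to live.
gender_dir = emb["he"] - emb["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of vec that lies along direction."""
    return vec - np.dot(vec, direction) * direction

emb["doctor"] = neutralize(emb["doctor"], gender_dir)
emb["nurse"] = neutralize(emb["nurse"], gender_dir)
```

That sidesteps the "dataset of bias examples" problem by reducing bias to a single direction, which is also its weakness: anything not captured by that direction survives the projection.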
The fundamental problem is that deciding what's biased is extremely subjective and context-dependent. If an AI says "crime is often a problem in lower income neighborhoods", is it delivering a statistical fact or expressing bias against the poor? It depends entirely on how we think people are going to use the results.
Or, accept that many women enjoy being nurses and doctors, such that man->doctor | woman->nurse/doctor isn't weird.
It's not a competence thing (or it stopped being one once women doctors became commonplace); it's a motivation thing. That doesn't lead to a "better" place, just a different one.