Hacker News

How else would you get folks incapable of getting a PhD in hard sciences to rule over the folks who do hard science? How else would you make those who manipulate real-world data to get their preferred outcome feel good about themselves while doing it, or at least hide the ugliness captured by the data? Wouldn't getting famous from hard work and taking risks take too long, when one can just shame some top ML honcho on Twitter and become an ethical-AI celebrity overnight? I predict we will soon need a license to do ML research, and it will have to be rubber-stamped by some AI ethicist... Not like the most recent ML conferences aren't doing that already ("Your dataset has only two genders? Bye")



> How else would you get folks incapable of getting a PhD in hard sciences to rule over the folks that do hard science?

In most subfields of scientific/technical ethics, it's very common for ethicists to have advanced degrees in the scientific/technical area.

> I predict we will need a license to do ML research soon and that would have to be rubberstamped by some AI ethicist

Except insofar as ethics is a component of the education and licensing requirements for regulated professions like medicine and law, that's not a thing that has happened in the US for any field (and even there it is practitioners, not separate ethicists, who are responsible for the content and validation).

So, aside from sheer paranoia, what is this prediction grounded in?


> it's very common for ethicists to have advanced degrees in the scientific/technical area.

Maybe; I can't tell, since I'm missing hard data on the distribution of degrees among ethicists. The one from Facebook whose lectures I attended had no technical degree, but multiple degrees related to philosophy and sociology. One of the first statements in the lecture was that "technology is inherently about power", which is obviously how people outside tech perceive us, and what many software engineers naively miss, as we tend to think it's all about fun and self-realization (well, OK, it's dystopian now, after profit-sensing individuals invaded our field). So those people outside want to regulate us with "ethics", i.e. by creating power structures that they rule and that we need to consult for permission.

> sheer paranoia

Was this truly a line of thought you wanted me to follow in order to become unreasonable?


In any form of human cooperation we need to agree on some rules about how the activity is carried out. It's not about some "ethicist" telling you what rules you must follow, but about the majority of us agreeing on what the rules should be. The job of an ethicist, then, should be to point out problems in proposed and already-agreed-to rules.


Yes, we already have those. They are called laws.

Ethics as is currently practiced in tech is using the mob to remove whoever you don't like.

To put it another way: the only person to lose their job over Epstein running a rape island was Stallman for being right but uncouth.


> Ethics as is currently practiced in tech is using the mob to remove whoever you don't like.

To the extent that that's even arguably true, it's irrelevant, since it's about the practice of basic organizational ethics within the tech community, and not about either the theory or the pragmatics of the discipline of ethics of technology, or any of its subdisciplines like AI ethics, which are about the ethical implications of applying technology, not about the personal, non-tech-related ethics of people who happen to work in the field of technology.

> To put it another way: the only person to lose their job over Epstein running a rape island was Stallman for being right but uncouth.

Yeah, that is a very good illustration of how you are talking about something that has nothing to do with the ethics of AI or any other subfield of the ethics of technology.


Pointing out that the interests of the supposed gatekeepers of morality are misaligned with society's is a lot more worthwhile than doing the same for a group of people whose only crime is earning a wage 50% higher than the US median.


That said, AI does seem to require some hard ethical thinking about how it should be used and by whom, and which limits ought to be implemented.

And I don't expect a bunch of startup devs who only care about meeting a deadline or their next funding goal to really think about the philosophy of what they are doing.


How is that different from linear regression?



>> Not like the most recent ML conferences aren't doing that already ("Your dataset has only two genders? Bye")

Hi. Are you saying that recent machine learning conferences required datasets with more than two genders (where gender was applicable), and otherwise rejected submissions?

Can you please name some of the recent machine learning conferences that did this?

Also, at which point did the rejection happen? E.g., was it during a preliminary screening stage, or during full review?



