Artificial Morality (lareviewofbooks.org)
41 points by how-about-this on Jan 29, 2020 | 17 comments



Interesting read, and some of it rings true for me, but the article is a little bland.

A parallel observation: in the "old days" of the internet, when connecting computers and sharing documents, web pages, software, messages, opinions, and, of course, pornography was a shining new capability, the feeling of the day was one of betterment. The network would let us upend the outdated businesses and processes of the past, and we would all be better for it. Information wants to be free!

Yeah, sure.

Information wants to be free? Well, if you consider my click-stream, search history, and probably every keystroke I enter, then yes, that information has certainly been freed. I can even search this treasure trove of information, if I'm just willing to share every piece of personal information about me, from my contact list to my personal emails to my phone's location, my calendar, and my purchase history.

I no more trust the AI "morality" pundits than I trust the (well-meaning but misguided, IMO) visionaries of the internet's early days (and yes, you can count me among those ranks).


I personally think "information wants to be free" (as in no money) is a major reason the Internet went so bad in so many ways. Since everything has to be free, the only viable business models are those that revolve around monetizing the user, or are otherwise deceptive and indirect.

This created and bootstrapped the whole surveillance capitalism and "adtech" industry. Instead of paying for information, we have people paying to manipulate us via free media built to be addictive.

I'm sure some of this would have happened had the Internet possessed a payment mechanism and people willing to pay for stuff, but it would not have become the dominant model for everything nor as pervasive.


The decision to not build the billing system first, or at all, is what separated the internet from telco culture.

Note that any billing system is also necessarily a surveillance system - someone has to keep the billing information and know who their customers are. The fight is ongoing as to whether cryptocurrency might change this, but you have to accept that any system which allows very large scale money laundering is not going to be tolerated, which puts an upper limit on how successful that project is going to be.
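
The point is structural: even a minimal billing record ties a paying identity to an access log. A sketch (the schema here is invented for illustration, not drawn from any real system):

    # Even this bare-bones billing schema links a real-world identity
    # (via the payment method) to what was accessed and when. A provider
    # must retain these records to bill at all, so the access log exists
    # whether or not anyone intends surveillance. Fields are illustrative.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class BillingRecord:
        customer_id: str      # tied to a payment method, hence a legal identity
        resource: str         # what was accessed: a URL, a channel, a call
        accessed_at: datetime
        amount_cents: int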

And a billing system doesn't save you from adverts or selling customer information; US cable TV shows that.


I wonder how a "pay to view" Internet would have worked. How do you know whether the page/site you're about to open will be worth the 0.05 cents? There are/were kids in Macedonia who made fake news sites to make money from Google ads: https://www.wired.com/2017/02/veles-macedonia-fake-news/ ; imagine that sort of behavior minus the G-middleman.

Could they have built an Internet with a system for claiming refunds, with a "judge" who would decide disputes? Could there have been a rating system ("90 out of 154 visitors said the information on this page was worth the money"), and how could that have been abused?
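
A toy sketch of what such a per-page ledger might look like, with the rating tally and refund claims from the questions above (everything here is hypothetical, including the 0.05-cent price):

    # Hypothetical per-page micropayment ledger: charges per view, tallies
    # "worth the money" votes, and counts refund claims for a human judge
    # to arbitrate. Purely illustrative; no real protocol is implied.
    class PageLedger:
        def __init__(self, price_cents: float = 0.05):
            self.price_cents = price_cents   # charged per view
            self.views = 0
            self.worth_it_votes = 0          # "worth the money" votes
            self.refunds_claimed = 0         # claims awaiting a "judge"

        def record_view(self, worth_it: bool) -> None:
            self.views += 1
            if worth_it:
                self.worth_it_votes += 1

        def claim_refund(self) -> None:
            self.refunds_claimed += 1

        def rating(self) -> str:
            return (f"{self.worth_it_votes} out of {self.views} visitors "
                    f"said the information on this page was worth the money")

    ledger = PageLedger()
    for worth in [True] * 90 + [False] * 64:   # the hypothetical 90-of-154 split
        ledger.record_view(worth)
    print(ledger.rating())

The obvious abuse vector is the same as for any review system: nothing stops a site owner from buying fake "worth it" votes.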


The model would likely be similar to telco pay-per-kB for public sites (the analogue of broadcast TV), as any other way would be infeasible. And, similar to cable networks, you would get a set of sites available for a monthly installment, with some options resembling Netflix.
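
For a rough sense of what pay-per-kB implies (the rate below is an arbitrary assumption, not a quoted price):

    # Back-of-envelope cost of pay-per-kB browsing. The 0.01 cents/kB rate
    # is invented for illustration; real telco pricing varied widely.
    rate_cents_per_kb = 0.01
    page_size_kb = 2_000          # a media-heavy modern page, ~2 MB
    pages_per_day = 100

    daily_cost_cents = rate_cents_per_kb * page_size_kb * pages_per_day
    print(f"~${daily_cost_cents / 100:.2f}/day")   # ~$20.00/day at these assumptions

At anything like those numbers, pages would have to stay tiny and mostly text.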

The internet would be much smaller and much more centralized.


If we had a paid internet, then IMHO we'd still have ended up with the same adtech surveillance, because it's extra money on the table, in exactly the same manner as paid cable TV became ad-filled anyway.


I'm not totally convinced of that.

Back in the 90s and early 2000s people really took privacy seriously. The whole PC culture considered spyware anathema. Anyone around back then can remember the outrage when any piece of software was caught sending any private data anywhere without explicit user authorization.

Then the mobile revolution happened, and mobile basically re-educated and re-conditioned the market to accept surveillance capitalism as the de facto business model. That was then "back-ported" onto the PC and the web.

Why was mobile built this way? Because the web had taught everyone that surveillance and ads were the only working business model. Mobile was built as a surveillance/ad platform first and an app platform second because the web had shown that this was, economically, the way forward.

If users had been accustomed to paying for things, mobile might have been built like a mini-PC for your pocket with a touch UI, carrying forward the old-school PC culture of "I own my stuff" and "don't spy on me."

If I were given a billion bucks to build an iOS/iPhone competitor that respected user privacy, the first thing I'd do is prohibit free apps in the default App Store: all apps must be paid. I would pair this with really advanced privacy and security controls, a good UI for them, and a lot of dark patterns applied in reverse to discourage apps from asking for elevated permissions. Push the market back to "you are the customer" from "you are the product."


The environment of the actual old walled-garden internet-like services (something like CompuServe, AOL, or Minitel?) would be informative here, but I don't have any personal experience with them. If some older people can chime in, that would be interesting.


But because there's a race to the bottom, and information can be duplicated at near-zero marginal cost, sites that were free and didn't require monetization would, over time, squash sites that did.

I see convergent evolution towards ad networks/data brokers, monthly payment walled gardens, and crowdfunding.


Didn't that happen because advertisers (those who create and sell products) realized that their exact target group is right there on the internet? How would monetizing access have prevented that, assuming similar growth?


> I'm sure some of this would have happened had the Internet possessed a payment mechanism and people willing to pay for stuff, but it would not have become the dominant model for everything nor as pervasive.

It would rather have been inevitable. In the news media there were paywall-only news sites in the early days of the Net, and that model has almost disappeared (even sites with paid content show ads to their paying members!), because free sites with ads get a lot more views and are more profitable than sites where only a fraction of visitors pay.


The article's underlying thesis seems to be summarized by

> The practitioners of AI are not up-front about the genuine allure of their enterprise, which is all about the old-school Steve-Jobsian charisma of denting the universe while becoming insanely great.

But that... doesn't seem true? Most modern AI practitioners, especially the prominent ones, profess that their work is going to change the world forever and that's why they want to do it. So I'm not sure what the author is really proposing that AI practitioners should do. Work on less important problems? Play it by ear instead of trying to develop moral principles preemptively?


>I'm not sure what the author is really proposing that AI practitioners should do.

If he wants us to work for our moral betterment ("Nobody does AI for our moral betterment"), he's asking us to become like old-fashioned saints or modern-day activists. But those are not problem-solvers per se; they act as if they already have the solutions to our problems.

The irony is that, whatever the intentions of its creators, AGI probably will lead to our moral betterment by giving us a clearer understanding of what it means to be human.


In my experience, the AI Safety people are less concerned with a nebulous "morality" and more concerned with practical things: making AI self-improve slowly enough that we can catch it before it becomes a runaway process, or making AI not think in ways where its imagining torturing people means there's an actual ephemeral consciousness experiencing the qualia of being tortured. Engineering stuff that happens to interact with morality, in about the same way that genetic engineering could be said to interact with the morality of the resulting organism.
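
To make the first concern concrete, here is a deliberately toy sketch of the "slow enough to catch it" idea: every proposed self-modification passes through a hard rate limit and a human-review gate. This is schematic, not a real safety mechanism:

    # Toy loop: self-modifications are rate-limited and human-gated.
    import time

    def propose_change(version: int) -> int:
        # Stand-in for the system proposing an improvement to itself.
        return version + 1

    def human_review(proposed: int) -> bool:
        # Stand-in for offline human evaluation of the proposed change.
        return proposed < 5        # reviewers stop approving at some point

    MIN_INTERVAL_S = 1             # in reality: days or weeks, not seconds
    version = 0
    while True:
        proposed = propose_change(version)
        if not human_review(proposed):
            break                  # the gate, not the loop, is the point
        version = proposed
        time.sleep(MIN_INTERVAL_S) # hard floor on the improvement rate

    print(f"stopped at version {version}")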


> Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it

Well said. Pessimism is a reasonable reaction to this sweeping line, but I offer a quick summary of what I hope is non-naive optimism:

1) Problem: when we treat technology as neutral, we miss how it creates niches outside our intended design and our social/political constraints, because the economic environment thwarts them all. We have agency as individuals, yes, but the environment rewards and selects for any use of the technology that yields an advantage. "If you incentivize something, it will happen."

2) Solution?: Nobody fucking knows. But "Game ~B" is what people are calling a nascent research effort to identify what attributes a more cohesive civilization must have to allow better collective "sense-making" and "choice-making," so that we may become capable of directing technology instead of leaving it to the emergent dynamics of hapless competition. Also interesting are the proposals for antifragile, transitional social structures that can thrive inside our current entrenched "Game A" dynamics and plant those attributes. We have to change the underlying rules instead of designing atop rules that are broken.

3) Future Ethics: I've also seen some excitement about a purportedly non-relativistic ethical framework within Game ~B, called The Immanent Metaphysics. It claims to be commensurable with, but not derivable from, science, rigorously formalizing intuitive ethical principles to provide at least a consistent grounding for making effective ethical choices. [1] The hope is to better inform (but not determine) our use of technology once we step into a more coherent phase of civilization. Technology cannot be put back in the bag, so we must develop the wisdom to wield carefully the godlike powers it gives us, which we are currently making a mess of.

[1] Only a few people currently understand the dense framework, but I'm working through it and may be able to offer a summary of what I know so far. Neat stuff.


https://bioethics.georgetown.edu/2016/02/oxford-uehiro-prize...

^ A relevant article with an important counterpoint: it argues, from a few assumptions, that we should let computers write our moral codes rather than the other way around.


If morality is derived from a belief in God, then a nihilist could argue any form of morality is essentially artificial.



