So I guess the question is, what is the alternative?
The only realistic option I can think of is some combination of:
• Make autocomplete operate on a blacklist instead of a whitelist, with the more limited goal of only removing e.g. known porn sites.
• Make the list of potential matches machine-generated, without human intervention. (Aside, are we sure the current list isn't just the 250K most-visited sites on the internet, or something like that?)
Either of these would remove culpability, since it would no longer be a curated list. But would that actually make it meaningfully safer?
Why should Apple avoid autocompletion of porn site domains? That’s an area rampant with scam sites, especially similar domains. Plus “everyone” looks at porn so the benefits would be widespread.
If Apple wants to protect users then autocompleting porn site domains seems like the place to start, not avoid.
The absence of porn domains raises the question of Apple’s intent.
> Why should Apple avoid autocompletion of porn site domains?
Because if I’m sharing my screen on a business call and I start typing something into my browser’s address bar, I don’t want it to autocomplete something NSFW that just happens to share the same first letter.
There's no such thing as "machine-generated, without human intervention". Even something as seemingly simple as "most-visited websites" involves measuring choices. (Fundamentally, this is indeed about responsibility. Until a non-human gets some form of citizenship, they have none.)
Furthermore, we now know that in practice "machine-generated" tends to be even worse, because too many people are fooled by the "the machine did it" excuse. (Like you seem to be doing here?)
Billions in quarterly revenue doesn't allow Apple to solve the halting problem. I can't begin to imagine how they would do what you're suggesting. They need to detect when a website changes in kind, but ignore day-to-day changes or normal UI revamps.
I'm honestly unclear on how you can't see how Apple could solve this with billions. It is definitely a "throw money at it" situation, no question.
Moderation is a hard problem because it isn't just a matter of someone filtering between the polite posts and the less polite posts, it's a matter of filtering between the polite posts and the content that will sear your soul, no joke.
But that's not what this is. This is just: is the website still there, and does it look correct? With the right software setup, it's roughly a person-month by my estimate to get eyes on every site in the list.
(Though most people don't write this sort of software very well, instead making someone laboriously click this, scroll around some, click some more, hit a tiny radio button, hit the tiny submit button, wait for the next thing to load, etc. At that pace it'll take longer and be more work. Someday I hope to have the chance to write some sort of classification program and implement the UI I've wanted for a while, which amounts to "right -> ham, left -> spam", with everything as pre-rendered as I can get it before it reaches the human. I'm sure some people out there have done something like this, but it makes me honestly sad how few I've seen.)
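The core of that UI is tiny: pre-render everything up front, then map one keypress per item to a label. Here's a minimal sketch under those assumptions; `triage` and `get_key` are illustrative names, and in a real tool `get_key` would read an actual keypress rather than being a callback.

```python
from collections import deque

def triage(items, get_key):
    """Two-key classification loop: 'right' -> ham, 'left' -> spam.

    `items` is assumed to be fully pre-rendered before the human sees
    anything, so each decision costs only one keypress.
    """
    labels = {}
    queue = deque(items)
    while queue:
        item = queue.popleft()
        key = get_key(item)  # stand-in for reading a real keypress
        if key == "right":
            labels[item] = "ham"
        elif key == "left":
            labels[item] = "spam"
        else:
            queue.append(item)  # unrecognized key: show the item again
    return labels
```

The point of the design is that the human's cost per item drops to a single decision; all the clicking, scrolling, and loading is done by the machine ahead of time.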
There is a service called Visualping that basically does this: it takes a screenshot and sends you a “diff”, and you can configure what percentage of the page has to change before it notifies you.
They could use a similar tool plus human review to maintain the list.
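Visualping's internals aren't public, but the percentage-threshold idea is easy to sketch. Assuming the two screenshots have been decoded to equal-length flat lists of RGB tuples (the function name and `tolerance` parameter are my own, not Visualping's):

```python
def diff_percent(pixels_a, pixels_b, tolerance=0):
    """Percentage of pixels that differ between two equal-sized screenshots.

    Each image is a flat list of (r, g, b) tuples. A pixel counts as
    changed if any channel differs by more than `tolerance`.
    """
    if len(pixels_a) != len(pixels_b):
        raise ValueError("screenshots must be the same size")
    changed = sum(
        1 for a, b in zip(pixels_a, pixels_b)
        if max(abs(x - y) for x, y in zip(a, b)) > tolerance
    )
    return 100.0 * changed / len(pixels_a)
```

A monitoring pass would then flag a site for human review only when `diff_percent` exceeds the configured threshold, filtering out day-to-day noise while surfacing real changes.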
How could they realistically do that?