Hacker News
Apple Is Bringing A.I. To Your Personal Life, Like It or Not (newyorker.com)
37 points by jrochkind1 4 months ago | 44 comments



I went to use ApplePay, and Siri repeatedly stepped over it, and wanted to have a conversation instead.

I just want to be the genuine me with a limited number of people, and say as little as necessary, because people don't like to be flooded with boilerplate.

In fact, HOW one summarizes has a tremendous effect on the results, as in posts that must be four paragraphs or fewer, for copyright reasons.

AI, in my opinion, should be used for "algorithmic" things that replace immense amounts of tedious work, such as identifying and categorizing photos, where the downside of a goof is trivial.

Good outcomes, especially where accuracy really matters, demand independent checking, and the fact that AI is being run on the least powerful chip in the arsenal is worrisome.

It doesn't even take AI to ask "Do you really want to send this email to the entire company?" but I have to worry when AI is composing the email, and I have to scrutinize it for gaffes. We all have brains smart enough to say what we mean the first time (perhaps with some iterations). Otherwise, we're just literary critics. I like statements and discussions that have real meaning, where nuance matters.

Be careful what you outsource. Better a creator than a proofreader, IMO.

Creativity is somewhat opposite of coming up with the next most likely word or words in a sequence.


> Be careful what you outsource. Better a creator than a proofreader, IMO.

Or as authors put it: "write drunk, edit sober".


I thought Apple's approach was interesting and a good balance between usability and novelty.

Most of the processing is going to happen on-device; I'm not sure to what extent it'll have to rely on the cloud, and even more so on OpenAI's ChatGPT. At this point I'm optimistic that most of the things I would be using it for will be handled on device.

I think, for me, most of my interaction with smart assistants has been setting reminders and alarms. They just don't work for much more than that. I'd like to have a more powerful assistant that is capable of doing things like parsing my emails and calendar and answering questions about them (on-device).

For me the key part of it is that I'm okay with my phone accessing everything my phone has. I'm not okay with Apple accessing it. I'd rather the assistant tell me "I'm not capable" than quietly send my private data to Apple or OpenAI.

How capable the model is, that's something I've yet to see. I think the best way to go about testing is to simply turn the internet off and see how much new Siri can handle.


Algorithmic decision making, which obliterates nuance and forecloses alternate futures, has been a part of every American’s personal life for almost 30 years, since display ads on the web began targeting users.

This technology is a new, personally accessible version of that.

This is not to say that we should not be alarmed at the potential abuse of this technology; it is to say that we should have been and should continue to be very alarmed, and always vigilant about carving out spaces to exist without this technology. It might be impossible to live a modern life without it, but that doesn’t mean we cannot or should not push back, take breaks, question.


I haven't read enough about how the OpenAI integration works, but it has me thinking I won't be upgrading to the latest iOS or macOS. OpenAI just feels too untrustworthy to me. Like in two or three years it'll be revealed that everyone's information really is being used for their models, they'll get a $40M fine and continue on.


> how the OpenAI integration

FTFA: “For queries that require more horsepower, users will be offered the option to outsource a task via the cloud to ChatGPT.”

> it'll be revealed that everyone's information really is being used for their models, they'll get a $40M fine and continue on

You think Apple would accept that sort of breach of contract for $40mm?


I'm not sure Apple would know.

edit: I meant OpenAI gets the fine.


Apple wouldn’t know but OpenAI gets fined and Apple still doesn’t know? Who is issuing these secret fines?


Yes.


The OpenAI integration is almost exactly the same thing as the current Siri integration with Wikipedia. It has nothing to do with "horsepower" or "your data". It's literally a plugin for getting info, like "how do I make a cherry pie" or "write me a sonnet", that Apple's models aren't trained to be good at. And it's called from Apple's side with no identifying information, just the query, to get the encyclopedia-style information. There is so much misinformation about this.


> There is so much misinformation about this.

I honestly find that more interesting than the announcement.

This feature and how it is implemented has been very clearly and very widely promoted in non-technical, no-jargon language. It is trivial to find out the correct information.

One of the main pieces of false information is that some people think Apple's AI is implemented like this:

On-device processing for a limited set of tasks + Off-device Processing by OpenAI.

That's false, and this falsehood is also implied by The New Yorker's article.

The actual system is more like this:

On-device processing + Apple's own Private Cloud Compute for off device processing of larger models + optional OpenAI for special OpenAI specific requests.

Those OpenAI requests function as you've already described, with the extra step of also requiring the user to specifically consent to using OpenAI every single time (while also showing the user exactly what will be sent to OpenAI), plus a few steps in between to ensure the query can't be tied back to the user.
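To make the tiering concrete, here's a hypothetical sketch of the routing described above. The tier names, the `Request` fields, and the consent flow are my own illustrative assumptions, not Apple's actual implementation; it only shows the decision order: on-device first, Apple's own cloud for larger models, and OpenAI only for OpenAI-specific requests with per-request consent.

```python
from dataclasses import dataclass

@dataclass
class Request:
    query: str
    fits_on_device: bool   # small enough for the local model
    wants_chatgpt: bool    # an OpenAI-specific request, e.g. "ask ChatGPT..."

def route(req: Request, user_consents: bool) -> str:
    """Return which tier (hypothetically) handles the request."""
    if req.wants_chatgpt:
        # Consent is requested every single time, and the user is shown
        # exactly what would be sent before anything leaves the device.
        return "openai" if user_consents else "declined"
    if req.fits_on_device:
        return "on-device"
    # Larger models run on Apple's own Private Cloud Compute.
    return "private-cloud-compute"

print(route(Request("set a timer", True, False), user_consents=False))   # on-device
print(route(Request("summarize my inbox", False, False), False))         # private-cloud-compute
print(route(Request("write me a sonnet", False, True), True))            # openai
```

The point the sketch makes is that OpenAI is a leaf option gated behind explicit consent, not the default off-device path.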

As mentioned already, this information isn't hard to find. One could learn it unambiguously from any of the Apple news releases¹, WWDC talks, or the original Keynote.

Yet the majority of discourse here is false information, followed by wrong assumptions that stretch into conspiracy theories.

I find this to be such an important lesson in going to the source.

See also: AdObE aRe StEalInG uR aRtwOrK fOr Ai TrAiNing. (Adobe is responsible for a basketful of questionable and perhaps also deceptive² activities, but they're not doing this, and that information is also readily available.)

¹ https://www.apple.com/newsroom/2024/06/apple-extends-its-pri...

² https://www.ftc.gov/news-events/news/press-releases/2024/06/...


I understand you need to call OpenAI through iOS to send data to them and that's all non-identifying information, but they literally have top executives quitting over ethical concerns. Look at all the dirty tricks Meta and Google have used to get around privacy settings. Why is anyone putting their faith in a seemingly bad actor whose entire business is admittedly built on theft? This isn't an industry built on good faith, and the billions invested in these companies will have to be paid back.


I hear your concerns, but one needs to find the dividing line between adversarial online discourse and the reality of what is being offered.

If OpenAI looks scary because there is a buzz around it, then that's indicative that one is building opinions based on a "vibe". That's fine to do for yourself because the user is in no way compelled to use OpenAI, and Apple's implementation clearly delineates when OpenAI is available, and specifically requests the user's consent every single time, while also showing what information would be sent to OpenAI.

Additionally the announced plan is to make this part of the system easily switched to other AI providers, much like one switches their search engine. So if your personal leanings are supportive of a different AI provider, then that is your option to change.

I think you would really benefit from reading about how Apple plans to introduce these features, and the specific privacy protections made not just for OpenAI, but also for processing on Apple's own AI-nodes. They have taken threat actors very seriously in their approach, and you'll see that this approach is genuinely different from what is currently being offered:

https://security.apple.com/blog/private-cloud-compute/


Yep. The unfolding OpenAI heel turn is clear to anyone watching, as is Apple's declining ability to meet the expectations set by its own marketing. If you move your finger along those trend lines and find where they intersect, you'll get a glimpse of the future of this product. I guess the counterargument is "but it'll be different this time"?


Apple deploying the feature that chucks balloons all over your webcam in every application was the harbinger for features that are added without thought, randomly inconveniencing users, simply because they make fun demos. AI being deployed everywhere has the same vibe.

https://medium.com/macoclock/why-i-think-facetime-gesture-ef...


The reactions: weird for sure. On-by-default is intrusive.

That said, they're easy to disable. And the silly reactions aren't the feature anyway, just the sugar.

Having a system-wide virtual video output and camera control is/was a great addition to the OS.

(Article link is missing a token to be read by non-subs, btw).


“Like it or not”

I’m pretty sure this is a feature that can be turned off, right?

You don’t even need an Apple ID to use a Mac. You don’t need to enable Siri at all. The list goes on.

I think a lot of people will use these features because the benefits are attractive.


> You don’t even need an Apple ID to use a Mac. You don’t need to enable Siri at all. The list goes on.

They’re nowhere near as aggressive as Microsoft but Apple does make it a pain to opt out. The first release of macOS with Siri would leave the icon in the menu bar even if you turned it off, and reactivate if you accidentally clicked it. Skipping signing in with Apple ID on iOS involves something like pressing “Forgot Password” then “Don’t Sign In” then dismissing a pop-up that nudges you away from what you were trying to do.

I would not be surprised if Apple doesn’t give users much of a choice in using their AI features, at least on-device. Even if you don’t engage with those features, it may mean your OS is now storing more data about your behaviors, as it does for earlier AI features like frequent locations and intelligent recharging.


I have relatives that have been going without the Apple ID for years, perhaps decades, on their Macs.


Can you use the App Store without an Apple ID? Genuine question. I was under the impression you cannot.


No, but you don't need it at all on a Mac. It is indeed a problem on the iPhone and iPad.

Although, you still really don't need to use iCloud or Siri.


It’s hard to trust the privacy guarantees of a device that tries to inject AI all over the place. Who knows when my data is being lifted into the cloud and used by random third parties in unexpected ways. I wish someone would make an AI-free smartphone to avoid all this.


> Who knows when my data is being lifted into the cloud

FTFA: “Apple A.I. is designed to run on the device itself, rather than via the cloud.” (You’re given the option of offloading heavier compute to their cloud.)


Yes but every little thing is now AI infused. How can a user know (like guarantee themselves) that, for example, the autocomplete isn’t sending their keystrokes somewhere?


What does that concern have to do with AI though?

If your risk model means you can’t trust your device manufacturer, AI makes no difference to that.


That’s a good point. I care about this now more than before because of AI, however. All companies have an incentive to get as much of our data as they can for use in training AI. Desperate or low growth companies have this incentive by being data resellers to those that need data for their AI training.


Imho they already had the incentive for ad targeting.


I am hoping GDPR can save the rest of us and force Apple to provide some way to opt out.


The number of dumb hot-takes in response to this article is staggering. The system as described so far is opt-in for anything sending your data outside of either the device or a carefully vetted and privacy-first cloud system that was designed by people far smarter than you to make sure that Apple could not get to your data. Apple knows that it is better for them to be unable to see this data and to be able to prove it to anyone holding a warrant.

Will there be a general on/off slider just like there is for Siri? Probably. Will more than 1% of the userbase turn it off? No.

Will I be surprised if, days before Apple Intelligence is released to the general public, it is declared that apps distributed via non-Apple app stores are not eligible for linking to these AI services (payment of the cloud services and OpenAI being funded by App Store revenue, naturally, and TANSTAAFL)? No, I will not.


> The system as described so far is opt-in for anything sending your data outside of either the device or a carefully vetted and privacy-first cloud system that was designed by people far smarter than you to make sure that Apple could not get to your data.

> Will there be a general on/off slider just like there is for Siri? Probably. Will more than 1% of the userbase turn it off? No.

These two statements are at odds


> These two statements are at odds

No they are not.

As described so far, the system is on device or out to Apple's private computing cloud. If the system thinks it can use OpenAI then it will ask. It will ask _each time_. So far Apple has not said that there will be a complete opt-out but I expect there will be just as there is for Siri.


"Out to Apple's private computing cloud" is exfiltrating data off the device. It very much should not be lumped into the same category as "on device", and should in fact be placed in close to the same bucket as "sent to OpenAI". I don't know the details of how it works, but from your description it sounds like you expect sending your data to Apple would not be opt-in?

Homomorphic encryption is not practical. If you're sending data to a server for processing, that server must be able to see your data. If someone tells you otherwise, they are lying.


> or a carefully vetted and privacy-first cloud system that was designed by people far smarter than you to make sure that Apple could not get to your data.

I've noticed over the years that smart people make great things, but often leave them in the hands of idiots. What does "carefully vetted" mean?


> What does 'carefully vetted' mean

“We’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software” [1].

Based on this thread, Apple over-allocated to engineering over marketing.

[1] https://security.apple.com/blog/private-cloud-compute/


GDPR also applies to the training data. So far as I know no AI training dataset is made entirely from users who consented to their data being used for that purpose - how could it, when the whole thing is so new?

> Will I be surprised if, days before Apple Intelligence is released to the general public, it is declared that apps distributed via non-Apple app stores are not eligible for linking to these AI services

Yeah, this seems like the move, along with banning competing AI apps from the store.


I'm looking forward to it. Generative AI is amazing and fun!


It's a sorry day when Apple has to buy its technology from others. What have they been doing?


> sorry day when Apple has to buy its technology from others

For HN this is some sorry stuff. Cupertino is getting ChatGPT for free. (It’s unclear who’s paying for compute.)

Get a few other folks in the hopper and one of them will be paying Apple to be the default outsourced AI to the tune of Google’s tens of billions for default search slot status.


Not the point I was making.


What is the point? iOS will run a local model for requests that involve personal context and pass requests off to a private cloud instance for requests that are too complex for local inference. For non-personal requests OpenAI or some other service is invoked. Apple isn’t paying OpenAI. Where is the ‘Apple buying their tech’ part in this?


Apple's been buying technology from others since Applesoft BASIC replaced Woz's Integer BASIC on the original Apple II, not sure why you just noticed now


What rock have you lived under? Are you not aware that Apple has bought a lot of its technology? Siri and iTunes just off the top of my head.


Well, they've been using Google for internet searches for a long time now. So I guess it's been a sorry day ever since the iPhone came out.


Wait until you see where Apple News, Weather, and Stocks get their external data. Believe it or not, Safari even supports opening web pages not made by Apple and not even served from Apple servers!



