Apple Closes in on Deal with OpenAI to Put ChatGPT on iPhone (bloomberg.com)
62 points by tosh 4 months ago | 49 comments



They were also in talks with Gemini lately: https://9to5google.com/2024/05/10/google-apple-gemini-discus...

I find it a bit weird that Apple, with its massive market share and R&D budget, apparently wasn't able to build a team and get traction (or at least make any significant acquisitions or acquihires) on AI, with the writing having been on the wall for quite some time now. Makes me a bit worried that their privacy angle might disappear.


Their privacy angle is 100% the reason they haven't managed to make a decent LLM.

To make a decent LLM you need to ignore data protection and throw all available text into a huge model. If your morals don't let you do that, then you can't make your own LLM.


But you can partner with a 3rd party to submit all that text to, and pretend you're still privacy-conscious? Pretty sure people would rather Apple build a first-party, server-based option than partner with anyone else. When it's Apple violating privacy, people are more chill about it.


Or you get the model and host it on your own hardware. The sort of thing an Apple might be able to do.


> If your morals don't let you do that, then you can't make your own LLM.

You're reaching, a lot. Apple is more privacy-friendly than most companies at the moment - no argument there, but they still sell your privacy on the web to Google in exchange for a 36% cut of advertising revenue [1] which amounts to ~$20 billion [2], or a rather petty amount of ~$10/device [3].

These same "morals" also allow them to hand over data on all Chinese citizens to the CCP, among countless other privacy-destroying compromises they have to make in order to profit from the Chinese market [4]

[1] https://arstechnica.com/tech-policy/2023/11/google-witness-a...

[2] https://www.bloomberg.com/news/articles/2024-05-01/google-s-... (https://archive.is/oPg1C)

[3] https://www.theverge.com/2023/2/2/23583501/apple-iphone-ipad...

[4] https://www.nytimes.com/2021/05/17/technology/apple-china-ce... (https://archive.is/MQmY7)


Apple is probably big enough that they would be trusted to take the weights and run them on Apple hardware under license.


That will 100% lead to the weights getting leaked; I doubt OpenAI would risk that.


> its massive market share and R&D budget apparently wasn't able to build a team and get traction

Frankly I don't want Apple to try to be the best at everything, because they generally aren't. I've been bitten by iMovie getting worse, Time Machine doing a terrible job, Apple Maps being less than useful for a long time, iCloud being generally lackluster, and more. I'd much rather they outsource the service to a company that is solely focused on the task. There's no reason they couldn't ensure those services remain private or secure.


What I want in 2 years on the iPhone 17 is “Listen up, Siri”

At which point it just starts recording a voice memo until we tell it we’re done. Then it summarizes what it’s heard into todo items, grocery lists, list of potential appointments or stops to make. It manages to deal with the pauses, the umms, the stammering, the “let’s get some orange soda. No, wait, make that grape.”

Basically, listen in on my wife and me talking about our day and summarize it into action items.



Doesn't Samsung already offer this, with a note-taking app for meetings?


Yes and it can translate on the fly as well.

https://www.samsung.com/us/support/answer/ANS10000942/



I would have never expected this. I thought no one had a moat in AI, so why can’t Apple just use/extend one of the existing open solutions? Also, how does Apple deal with providing Siri data to a third party?


The licences for open-source models are generally designed to restrict this kind of usage by big tech. See the "Additional commercial terms" of the Llama 2 licence, for example:

> If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

There is probably a need for an alternative term to describe these kinds of models, because they're not exactly open-source in the traditional sense


If anyone can strong-arm OpenAI into letting them self-host their models, it's Apple.


I think OpenAI would be happy to cut a deal. Microsoft already self-hosts its GPT-4 models for MS Copilot.


How can Apple strong-arm OpenAI?


The same way they strong-arm TSMC into giving them exclusive first dibs on new process nodes - by having more money than God and making an offer they can't refuse.


You and I have very different definitions of the word "strong-arm". By that definition, I just strong-armed Apple into giving me an iPhone by giving them $1200.


Not the same. If Apple pays OpenAI, it won’t give them 0.001% of their profits, but something like 20%.

If your iPad is worth 20% of Apple’s profits, you bet Tim will be hand-holding you through the obnoxious setup process.


Me buying something is different from Apple buying something because the two things have different prices?

To strong-arm someone means to impel them by physical force, especially against resistance, not to give them so much money they'll be happy to cooperate. That's basically the exact opposite of strong-arming.


You are reading things too literally.


My guess is it's mainly marketing, since ChatGPT is a well-known brand now.


Even then, I think the last company/brand that Apple integrated into their software was Google, and that is long gone (I mean actual full apps/services dedicated to them, not just Mail.app integration).


A lot of people here are assuming this will involve API calls to OpenAI’s servers.

If a deal is made it will almost certainly be to self host OpenAI’s models, the same way Microsoft does. Apple will use RAG to ground requests in the user’s own iCloud data, and no user data will be sent to OpenAI or be used to train any OpenAI models.
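
For anyone fuzzy on the term, RAG here just means retrieving a few relevant user documents and stuffing them into the prompt before the model sees it. A minimal Python sketch of the idea, with a toy bag-of-words retriever standing in for a real embedding model, and made-up documents and prompt format (nothing Apple-specific):

  from collections import Counter
  import math

  # Toy stand-in for the user's iCloud data.
  documents = [
      "Dentist appointment moved to Friday 3pm.",
      "Flight to Lisbon departs June 12 at 09:40.",
      "Grocery list: oat milk, grape soda, coffee beans.",
  ]

  def embed(text):
      # Stand-in for a real embedding model: bag-of-words counts.
      return Counter(text.lower().split())

  def cosine(a, b):
      dot = sum(a[t] * b[t] for t in a)
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  def retrieve(query, k=2):
      q = embed(query)
      return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

  def build_prompt(query):
      # Grounding: only the retrieved snippets and the question go to the model.
      context = "\n".join(retrieve(query))
      return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

  print(build_prompt("When does my flight leave?"))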


Seeing how few things can be done/automated on a locked iPhone even when using Shortcuts, I don't expect this to go beyond a better autosuggest for text. Which is fine; killing the AI hype might just work out for everyone.


Maybe I’m dim today but I can’t tell if they mean via API or running locally in some manner. I guess if it’s local, it wouldn’t actually be the GPT we know.

I've been very excited about the idea of Apple taking a stab at running it locally, because it would help improve what I consider to be by far the largest UX issue: the awkward latency.


For most of my use cases, the privacy impact of sending LLM queries to someone else is a larger UX issue


For Apple, they know the average person doesn’t really care about this. They only lean into privacy when it’s a useful competitive weapon. There are many people willing to pay for products now that record their screen 24/7, or buy “AI” wearables that are just open mics. The Overton window shifted for privacy concerns somewhere between Google Glass (where people were barred from locations) and now.

Still, it wouldn't surprise me if they work with OpenAI to release less powerful local models just for simple Siri use cases, so they can keep using privacy for marketing.


> For Apple, they know the average person doesn’t really care about this.

This is one sad thing about the free market: everybody is _not_ average in one way or another, but all we get are products made for average people, so in the end everybody is a little bit unhappy. Some people are far from average, so they must be very unhappy. Now if only the products we bought weren't so locked down, we could modify them and use them however we wanted.


It would be nice if Apple products were less locked down. Sadly, Android is the only option in the phone space if you want that.


Latency per unit of response quality is going to be far higher if you run the model on the phone.



I'm interested to see how they handle the financial side of whatever extra LLM features they start to offer.

Free, like Siri? Free to a point, then payment needed? Limited abilities are free? Subscription only?


My guess is that they host a version of the model locally on the iPhone.


Even if they don’t (or if it’s partially networked as some recent rumors suggest), it’ll be rolled into one or both of two predictable costs (to the consumer):

1. The device sale itself, either raising the ASP or eating into some other cost saving (on Apple's side)

2. Recurring payments for iCloud (or any rebranding it might undergo along with the feature)

Apple’s pricing model, if not totally predictable, is exceedingly formulaic. If they deviate from these into some sort of nickel and diming on “AI” features alone, that would almost certainly be a clear sign that they’re betting against it as a long term selling point.


This indeed seems to have been a heavy focus of their research team in the past year, e.g. "Efficient Large Language Model Inference with Limited Memory" [1] and OpenELM [2].

[1] https://arxiv.org/pdf/2312.11514

[2] https://arxiv.org/pdf/2404.14619 (with 1.1B parameters, this appears to be their attempt at building a lightweight LLM)
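
For a sense of scale, a 1.1B-parameter model is small enough to poke at on a laptop. A rough sketch of loading it with Hugging Face transformers; the model id, the trust_remote_code flag, and the borrowed Llama 2 tokenizer follow the published model card at the time of writing, so treat the exact loading details as assumptions:

  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Custom architecture, so the checkpoint ships its own modeling code.
  model = AutoModelForCausalLM.from_pretrained(
      "apple/OpenELM-1_1B", trust_remote_code=True
  )
  # OpenELM has no tokenizer of its own; the model card pairs it with
  # Llama 2's (a gated repo, so this needs Hugging Face access approval).
  tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

  inputs = tokenizer("Summarize my day into a to-do list:", return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=50)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))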


Maybe a very cut-down version - the more recent and capable OpenAI models are surely far too large to fit on an iPhone, and far too demanding to run there (in terms of both available memory and processing power).

This would maybe align with the 'limited abilities are free' approach.


The countdown to a controversial privacy violation begins.

Makes you wonder if this will be baked in or if they'll give users the ability to opt-out of data sharing on a self-proclaimed "private" OS.



Curious to see the specifics and terms of any such deal before speculating if it's a good or a bad thing, for users, and for Apple.


So what are they releasing at this WWDC exactly? I heard it was all about on-device AI.


When possible. Apple already has a hybrid model — for example, Series 9 and Ultra 2 can do many things with Siri completely on-device, but (as with pre-S9-chip devices) still leverage a more powerful device when needed. I predict they're generalizing this as a framework that automatically passes and returns requests according to some heuristic of quality + privacy + latency. Even local devices like Macs and HomePods may participate in this "AI mesh".
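
To make that concrete, here is a hedged sketch in Python of what such a quality/privacy/latency router could look like. Every name, weight, and threshold below is invented for illustration; none of this reflects anything Apple has actually described:

  from dataclasses import dataclass

  @dataclass
  class Request:
      prompt: str
      touches_personal_data: bool
      needs_high_quality: bool

  def route(req: Request) -> str:
      # Hypothetical heuristic: quality pushes toward the cloud,
      # privacy and latency push toward staying on-device.
      score = 0.0
      score += 0.6 if req.needs_high_quality else -0.3
      score -= 0.5 if req.touches_personal_data else 0.0
      score -= 0.2  # flat latency bias toward local execution
      return "cloud" if score > 0 else "on-device"

  print(route(Request("Set a timer for 10 minutes", False, False)))        # on-device
  print(route(Request("Explain transformer attention", False, True)))      # cloud
  print(route(Request("Draft a reply about my medical bill", True, True))) # on-device

Privacy wins over quality in the last example, which is presumably the kind of trade-off Apple would want to be able to point to.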


I thought OpenAI had an exclusive with Microsoft? Or is that expiring?


Frankly I find this a bit disappointing.


Anyone else worried about the apparent non-orthogonality here? Why do we need Apple's approval for using third party services on the hardware we own?


This is about Apple tying 3rd-party capabilities directly into the OS. You can use ChatGPT (or whatever) via several existing extensibility mechanisms right now — Siri, Shortcuts, the Action Button, Share Sheets, etc. — with apps which have done the integration work.

https://help.openai.com/en/articles/7993358-chatgpt-ios-app-...


Does seem like another browser/search integration debacle. Makes you wonder which side is getting paid between Apple and OpenAI.



