I think OpenAI may eventually have to go upmarket, as basic "good enough" AI becomes increasingly viable and cheap/free on consumer-level devices, supplied by FOSS models and apps.
Apple may be leading the way here, with Apple Silicon prioritizing on-device AI processing and shipping in all their devices. These capabilities are free (or at least don't require an extra sub), and are just used to sell more hardware.
OpenAI is clearly going to compete in that market with its upcoming smartphone or device [1]. But what revenue model can OpenAI use to compete with Apple's and not get undercut by it? I suppose hardware + free GPT-3.5, and an optional subscription to GPT-4 (or whatever their highest-end version is). Maybe that will be competitive.
I also wonder what mobile OS OpenAI will choose. Probably not Android, otherwise they would have partnered with Google. A revamped and updated Microsoft mobile OS maybe, given their MS partnership? Or something new and bespoke? I could imagine Jony Ive demanding something new, purpose-built, and designed from scratch for a new AI-oriented UI/UX paradigm.
A market for increasingly sophisticated AI that can only be done in huge GPU datacenters will exist, and that's probably where the margins will be for a long time. I think that's what OpenAI, Microsoft, Google, and the others will be increasingly competing for.
> OpenAI is clearly going to compete in that market with its upcoming phone.
Excuse me, I'm not a native English speaker — do you mean like a smartphone? Or do you mean some sort of other new business direction? Where did you get the info that they're planning to launch a phone?
I believe there have been rumors that OpenAI was working with Jony Ive to create a wearable device, but it was unclear whether it would be a phone or something else.
OpenAI will make its money on enterprise deals for fine-tuning their latest and greatest on corporate data. They are already making these big enterprise deals, and I think that's where the money is.
They will keep pricing the off-the-shelf AI at-cost to keep competitors at bay.
As for competitors, Anthropic is the most similar to OpenAI in both capabilities and business model. I am not sure what Google is up to, since historically their focus has been on using AI to enhance their products rather than making it a product. The "dark horses" here are Stability and Mistral, which are both OSS and European and will try to make that their edge: they give the models away for _free_, but to institutional clients that are more sensitive to which models are being used and where the data is handled.
Amazon and Apple are probably catching up. Apple likely thinks that all of this just makes their own hardware more attractive. It's not clear to me what Meta's end goal is.
I actually expect open source models will be small _but larger than they are today_, because phones and laptops will get dedicated chips and software for running, e.g., the best open-source (open-weights?) model.
So eventually you could be running decent-sized models locally (iOS could even provide an API with fine-tuning, etc.)
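A back-of-envelope sketch of why this seems plausible: quantization shrinks the memory needed just to hold a model's weights. The parameter count and bit widths below are illustrative assumptions, not claims about any particular phone or model.

```python
# Rough memory footprint of model weights at different quantization
# levels. Illustrative only: real runtimes also need memory for the
# KV cache and activations on top of the weights.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: {weight_memory_gb(7, bits):.1f} GB")
# A 7B model drops from 14 GB at 16-bit to 3.5 GB at 4-bit,
# i.e. into the RAM range of a flagship phone.
```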
In my experience, Apple's ML on iPhones is seamless. Tap and hold on your dog in a picture and it'll cut out the background; your photos are all sorted automatically, including by person (and I think by pet).
OCR is seamless - you just select text in images as if it was real text.
I totally understand these aren't comparable to LLMs. Rumor has it Apple is working on an LLM; if their execution is anything like their current ML execution, it'll be glorious.
(Siri objectively sucks, although I'm not sure it's fair to compare Siri to an LLM; AFAIK Siri does not do text prediction but is instead a traditional "manually crafted workflow" type of thing that just uses speech-to-text to navigate.)
Does Android even have native OCR? Last I checked, everything required an OCR app of varying quality (and the same goes for Windows/Linux).
On iOS/macOS you can literally just click on a picture and select the text in it as if it weren't a picture. On iOS you don't even open an app to do it; in any picture, you can just select the text.
Last I checked, the open-source OCR tools were decent, but behind the closed-source stuff as well.
Not sure about other Android OEMs, but OCR has been built into Samsung Gallery (the equivalent of the Photos app on iPhones) for a while. It works the same way: long-press on text in an image to select it as text. Haven't had any issues with it.
I'm not saying they will on the high-end, but maybe on the low end. Apple's strategy is to embed local AI in all their devices. Local AI will never be as capable as AI running in massive GPU datacenters, but if it can get to a point that it's "good enough" for most average users, that may be enough for Apple to undercut the low end of the market.
> Local AI will never be as capable as AI running in massive GPU datacenters
I'm not sure this is true, even in the short term. For some things yes, that's definitely true. But for other things that are real-time or near real-time where network latency would be unacceptable, we're already there. For example, Google's Pixel 8 launch includes real-time audio processing/enhancing which is made possible by their new Tensor chip.
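A quick latency-budget sketch makes the real-time case concrete. All the numbers here are assumptions for illustration (frame size, RTT, inference times), not measured figures for any device:

```python
# Why real-time audio enhancement has to run locally: even with fast
# datacenter inference, the network round trip alone can blow a
# per-frame latency budget. All numbers are illustrative assumptions.

frame_ms = 10            # assumed audio frame length for real-time processing
local_inference_ms = 3   # assumed on-device inference time per frame
network_rtt_ms = 40      # assumed good-case mobile round trip to a datacenter
server_inference_ms = 1  # assumed (fast) datacenter inference per frame

local_total = local_inference_ms
cloud_total = network_rtt_ms + server_inference_ms

print(f"local: {local_total} ms/frame, cloud: {cloud_total} ms/frame")
print("fits the 10 ms budget locally:", local_total <= frame_ms)
print("fits the 10 ms budget via cloud:", cloud_total <= frame_ms)
```

Under these assumptions the cloud path misses the budget before the server even starts computing, which is the structural point: no amount of datacenter horsepower buys back the round trip.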
I'm no fan of Apple, but I think they're on the right path with local AI. It may even be possible that the tendency of other device makers to put AI in the cloud might give Apple a much better user experience, unless Google can start thinking local-first which kind of goes against their grain.
> But for other things that are real-time or near real-time where network latency would be unacceptable, we're already there.
Agreed. Something else I wonder is if local AI in mobile devices might be better able to learn from its real-time interactions with the physical world than datacenter-based AI.
It's walking around in the world with a human with all its various sensors recording in real-time (unless disabled) - mic, camera, GPS/location, LiDAR, barometer, gyro, accelerometer, proximity, ambient light, etc. Then the human uses it to interact with the world too in various ways.
All that data can of course be quickly sent to a datacenter too, and integrated into the core system there, so maybe not. But I'm curious about this difference and wonder what advantages local AI might eventually confer.
This is a fascinating thought! It could send all the data to the cloud, but all those sensors running constantly would produce a lot of data to send, and would use a lot of mobile data, which would be unacceptable to many people (and probably to the mobile networks). If it's running locally, though, the data could be quickly analyzed and probably deleted, avoiding long-term storage issues. There's got to be a lot of interesting things you could do with that kind of data.
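Some rough arithmetic backs this up. The per-stream rates below are illustrative assumptions (not specs for any phone), but even conservative numbers add up fast:

```python
# Rough daily data volume for a few always-on sensor streams, using
# assumed (illustrative) rates, to show why streaming everything raw
# to a datacenter over mobile data is impractical.

SECONDS_PER_DAY = 24 * 60 * 60

# Assumed raw rates in bytes/second — illustrative, not measured:
streams = {
    "mic (16 kHz, 16-bit mono)": 16_000 * 2,      # 32 KB/s
    "camera (1080p @ 5 Mbps)":   5_000_000 // 8,  # ~625 KB/s
    "motion/location sensors":   1_000,           # ~1 KB/s
}

total_bytes = sum(rate * SECONDS_PER_DAY for rate in streams.values())
total_gb = total_bytes / 1e9
print(f"~{total_gb:.0f} GB/day if streamed raw")
# Tens of gigabytes per day — far beyond typical mobile data plans,
# which is exactly why local analyze-then-discard is attractive.
```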
> I think OpenAI may eventually have to go upmarket
Let me introduce you to the VC business model. Get comical amounts of money. Charge peanuts for an initial product. Build a moat once you trap enough businesses inside it. Jack up prices.
If you have the new iPhone with the action button, you can set a shortcut to ask questions of ChatGPT. It’s not as fluid as Siri, and can’t control anything, but still much more useful.
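For anyone curious what a shortcut like that does under the hood, it boils down to an HTTP POST to OpenAI's chat completions endpoint. A minimal standard-library sketch of building that request (the endpoint and model name are the commonly documented ones; the API key here is a placeholder):

```python
# Sketch of the request a "ask ChatGPT" shortcut ultimately makes.
# Builds the request object only; sending it requires a real API key.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(question: str, api_key: str,
                  model: str = "gpt-3.5-turbo") -> urllib.request.Request:
    """Build an HTTP request for a single-turn question (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Actually sending it would be: urllib.request.urlopen(build_request(...))
req = build_request("What's the tallest mountain?", "sk-placeholder")
print(req.get_method(), req.full_url)
```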
Nobody is switching away from Apple over this, so ultimately Tim is doing his job. Under his watch Apple has become the de facto choice for entire generations. Between vendor lock-in/walled gardens and societal/cultural pressures (don't want to be a green bubble!), they have one of the stickiest user bases there are.
True, but that doesn’t mean we shouldn’t complain.
My hope is that the upcoming EU rulings allow competition here, i.e., force Apple to get out of the way of making their hardware better with better software.
I think it's shitty and has no excuse, but the parent is right. Apple has no incentive to respond to their users since all roads lead to first-party Rome. It's why stuff like the Digital Market Act is more needed than some people claim.
You know what would get Apple to fix this? Forced competition. You know what Apple spends their trillions preventing?
Agreed. I'm not trying to "excuse" shitty work, merely observing the incentives/pressures on them. We can complain about it all we want, but we won't understand it until we understand the incentives.
I think that's a bit glib. Right now it's true that nobody's going to leave the Apple camp because of Siri, but it's also true that nobody's going to leave the Android camp because of it. That state of affairs could change.
It's not a time for complacency, if only because driver assistance is becoming more important every day. There are good, sound business reasons to put competent people on the Siri team.
[1]:https://www.reuters.com/technology/openai-jony-ive-talks-rai...