They're already going multi-modal? Holy crap, if Google can't deliver in the accessibility space for this (image descriptions better than "the logo for the company"), then I'll definitely go back to Apple. I mean, I do hope Apple cleans out bugs and makes VoiceOver feel like it won't fall over if I breathe on it hard, but their image descriptions, even without an LLM, are already clean and clear. More like "A green logo on a black background," where Google is, like I said, more like "The logo for the company." I guess it's kinda what we get when AI is crowdsourced rather than given good, high-quality data to work with.
Honestly, if they are coming out with a paper now, Apple has probably been working on it for a year or two at minimum. Next year's releases of macOS/iOS are rumored to have LLMs as a feature.
I don’t mean to discount this work, but this particular model is the product of a few months of work, tops. It’s effectively LLaVA with different training data, targeted at a specific use case. While I’m sure there is a significant effort on multimodal LLMs within Apple, this is just a tiny corner of it.
In this context “literally” is being used to draw attention to the perceived obviousness of the information. Essentially saying “how could you not see this?”
In this context, "what's the literally here?" means that the author think the post would be better without it: it's too agressive, sarcastic, demeaning, etc. for example. Hence it's a false question.
It's actually funny because you answered a false question (potentially on purpose). Like:
"What the hell are you doing?!?"
"Well Im stealing your car, obviously."
In this context, the verbose explanation to a rhetorical question seems to be the point of the comment, to reemphasize said aggression or sarcasm. It's actually funny because we both are doing the same thing, also potentially on purpose.
Apple has been looking sleepy on LLMs, but they've been consistently evolving their hardware+software AI stack, without much glitzy advertising. I think they could blow away Microsoft/OpenAI and Google, if suddenly a new iOS release makes the OpenAI/Bard chatbox look laughably antiquated. They're also a threat to Nvidia, if a significant swath of AI usage switches over to Apple hardware. Arm and TSMC would stand to win.
I doubt Apple’s going to make some big ChatGPT-style chatbot. They’re “just” going to use the same tech to drive iterative (good!) improvements to their products, like Siri and keyboard auto-complete.
But does it even work sensibly, yet? Almost every time my partner asks Siri something, it works so badly that we end up asking Android/Google's Assistant, which responds well to most things.
I don't recall exactly when it started happening, but I've been having lots of issues with recent iOS versions rewriting not the last word entered, but the word before that. For example, if I start entering "I went to", it'll sometimes correct to "I want to", but it'll do that after I've typed the "to". I've found lots of similar examples. The retrospective nature of the edits means I miss a lot of them and makes me appear a lot less literate than I am.
Are you tapping the keys or swiping over those that make up the word you want to type? In my experience, tapping has always been and remains poor, but swiping is getting better and better with every iOS version.
Swiping through keys doesn't have anything to do with autocomplete. Autocomplete has to do with predicting which word you're going to type next, not guessing which word best corresponds to the swipe you just made.
Given Apple's track record on anything AI-related and the terrible state they keep CoreML in, that not only seems extraordinarily unlikely; it would also take a lot of time to win developer trust, and that I just don't see happening.
Apple doesn’t have to win developer trust or build an AI platform. They just have to build a compelling consumer product that can only function with AI, and they are better equipped to do that than Google or Microsoft. It remains to be seen if OpenAI will go that route instead of a business built on training and providing access to foundational models.
Yes, this is the most important point and I think somehow least present in even discussions here: the technical question of who produces the best/cheapest LLM/future architecture is considerably less important than who, if anyone, creates a fundamentally new and dominant consumer experience built on AI. Most of the existing players (Google, Meta) would of course prefer that nobody produces such a newly dominant paradigm of computation for end-users since it would greatly reduce their relevance and subsequently revenues. Right now, ChatGPT is the only real contender in this space. However, I think you’re correct that it’s actually Apple who is most likely to be the next who attempts such a paradigm shift. Far too early to bet, but let’s say I wouldn’t be surprised if in five years we end up in a world in which Apple has the consumer monopoly and Microsoft the business monopoly, with Google and Meta falling into irrelevance.
I think Microsoft is going to eat OpenAI; I mean, the company is practically half in and half out of Microsoft's mouth. Bing will likely add more and more features that are native to ChatGPT. Google, I think, will eventually get in the game. Facebook is actually doing better than Google, especially for open-source models, which is buoying the smaller researchers and developers.
In the end, one company will build AGI or super-AGI that can do the function of any existing software, even games, with any interface, even VR, or no interface at all: just return deliverables like a tax return. The evolution might be "give me an easier but similar QuickBooks UI for accounting, to just do my taxes." The company that gets there first could essentially put all other software companies out of business, especially SaaS businesses.
The first company to get there will basically be a corporate singularity and no other company will be able to catch up to them.
>They just have to build a compelling consumer product that can only function with AI
Yeah, and I'm not talking exclusively about developer trust. Given Apple's current consumer lineup (see Siri, Apple Photos, predictive text, etc.)... we only have evidence that they suck at ML. What makes you think they are going to suddenly transform overnight?
> see Siri, Apple Photos, predictive text, etc.)... we only have evidence that they suck at ML
…or that they only deal with mature tech and not the shiny new thing. Makes sense to me. I don’t doubt everyone will have a personal LLM-based assistant in their phones soon, but with the current rate of improvements to LLMs and AI in general, I’d wait for at least a year more while doing R&D in-house if I were Apple.
You could use that apologist language for any company. If they suck at something, just say they are "biding their time." No. Apple is just demonstrably behind.
Having terrible predictive text, voice-to-text, image classification, etc. isn’t just a quirk of the way they do business. Those are problems with years of established work put into them, and they just flat out aren’t keeping up.
“Apologists” … this is the domain of strategic analysis, business and products. Apple, Google, et al are not feudal lords or entities owed personal allegiance, nor sporting teams for fans to rally around, nor are we talking about morality and ethics where Apple did something wrong and apologists are justifying it.
As far as whether they are keeping up or not, I disagree, but neither of our opinions really matter unless we’re betting — that is, taking actions based on calculated risks we perceive.
… because Siri, predictive text, etc. suck because they aren’t using an LLM. Alexa and the Google Assistants from the same era all suck as well. I don’t see how evidence that Apple sucked with pre-LLM ML is an indicator that they will suck at integrating an LLM into their products.
No one said anything about transforming overnight.
There are so many modern ML components that have terrible or no support in CoreML. Try to do any convolution other than conv2d, or advanced or custom activation functions, etc., and you are out of luck. Exporting from PyTorch leads to all sorts of headaches, with subtle behavior changes between implementations. It is definitely a pain point for developers of widely used software.
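A minimal sketch of the kind of round-trip where these problems show up (the toy model and shapes are made up; ct.convert is the standard coremltools entry point):

    import torch
    import coremltools as ct  # Apple's PyTorch -> Core ML converter

    class TinyNet(torch.nn.Module):
        """Toy model using a 1-D convolution and GELU, the sort of ops that
        historically mapped poorly onto CoreML's conv2d-centric op set."""
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv1d(8, 16, kernel_size=3)

        def forward(self, x):
            return torch.nn.functional.gelu(self.conv(x))

    model = TinyNet().eval()
    example = torch.rand(1, 8, 32)
    traced = torch.jit.trace(model, example)

    # Conversion may fail outright on an unsupported op, or succeed with subtly
    # different numerics; diffing outputs against PyTorch is the only safe check.
    mlmodel = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=example.shape)])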
- iOS keyboard autocomplete using LLMs. I am bilingual and noticed in the latest iOS it now does multi-language autocomplete and you no longer have to manually switch languages.
- Event detection for Calendar
- Depth Fusion in the iOS camera app, using ML to take crisper photos
- Probably others...
The crazy thing is most/all of these run on the device.
I just wish you could turn the multilingual keyboard off. I find that I usually only type in one language at a time, and having autocomplete recommend the wrong language is quite frustrating.
That's true, I have found that mildly annoying sometimes. But most of the time it's a win. It was really annoying manually switching modes over and over when typing in mixed-language, which I do fairly often. It'd be great if there was a setting though.
I had the opposite problem: the languages I usually typed in (Romanian + English) didn't have a multi-language mode on iOS. So it was a constant pain to switch between them when I needed to insert some English terms in Romanian sentences. iOS didn't support multi-language for this language pair. On Android it always worked like a charm.
Hey I'm Romanian, too. The latest iOS does what you want -- it has multi-language support and typing mixed English + Romanian is seamless now. Yeah it was a total pain to keep switching languages before iOS 17.
To be honest I distrust Microsoft with SwiftKey, but it recognizes the change in language smoothly. I could switch languages within one sentence and it would understand what I am writing just fine, with no silly suggestions.
Apple recommending wrong words when you write in mixed language was the case in iOS 15, so much so that I always needed to manually change my keyboard language. But that’s no longer the case in iOS 17. As an example, I just typed this entire comment on a Turkish keyboard with English autocorrect and suggestions working.
Maybe the (most likely) AI-based thing requires some training though. I got my new iPhone a month or so ago.
SwiftKey is great and if someone distrusts Microsoft for it...then fine, but various different companies "control" various different parts of my phone.
With an iPhone, it's only one company, that controls every little thing, and we have no insight into Apple at all. They can basically do whatever the hell they want.
Yeah, Face ID is pretty good. No Android phones seem to use the IR dot camera, which makes me think Apple has a "patent" on it.
...lame, considering the dot projector is ripped straight out of the Kinect.
Google does the memory photo thing too, but only if you use their app which I don't.
Android has had multi-language keyboard support for the longest time. In addition to being able to install whatever keyboard I like (I use SwiftKey, it's brilliant), I can also install an LLM-based one as I please.
Android/my keyboard already does event detection/suggestion in text, and has been doing so for as long as I can remember.
None of these are reasons to buy an iPhone...just reasons to buy a phone, lmao.
Personal ML systems running on hardware you own is the killer app. If these are "good enough" they'll be significantly preferable to using large subscription-based models, where those companies could pull a Lucy any day.
But Apple will just make a magical chip that's different to regular hardware cause they're the best company and invent all the things even if they've been seen before Apple still invented it first, just wait until their Super Unicorn Ultra™ chip comes out with Hyperdrive Retinated LLM™ support, they don't name normal hardware different just for marketing...it's really unique, new and inventive hardware that we're happy to pay a huge premium for because it's so advanced and inventive.
1. Multimodal large language model
2. Large multimodal language model
3. Large language multimodal model
4. Large language model (multimodal)
I prefer 1, because this is a multimodal type of an existing technique already referred to as LLM. If I was king, I’d do Omnimodal Linguistic Minds, but no one asks me such things, thank god
Rumors suggest they’re gearing up to make iOS 18 an AI focused release. It’ll be interesting to see if they offer different capabilities for online/offline scenarios, or if their offerings are strictly offline.
> Rumors suggest they’re gearing up to make iOS 18 an AI focused release.
Don't underestimate Apple's ability to disappoint enthusiasts like you and me. We've been hearing many awesome stories about the next thing Apple will do, only to realize their marketing team chose to keep it for future iOS/MBP/iPhone generations to keep the profits high.
Running an LLM on-device alongside other apps (i.e. without it taking up all phone resources), and having it be reasonably fast, may well require more powerful hardware than they have ever sold.
A voice assistant that takes 3 seconds to reply and then takes half a second per word is a nice demo, but not a product Apple wants to sell.
And yes, some people will say they rather have that than nothing on their hardware, but “the Internet” would say iOS 18 is slow, eats battery life, etc, damaging Apple’s brand.
I think your expectations are wrong. They sit there in silence with a few leaks here and there and some GitHub projects; people speculate and get all excited about extrapolating those things. Then Apple delivers what works, which may or may not be related to it.
What they don't do is sell you a lie a year before release then deliver shit (like every other fucking vendor).
They're plenty capable of delivering garbage. Certain year models of the MacBook Pro were inherently faulty. I've had the displeasure of having two of them bought for me at work.
All of Apple's windows software (iTunes, Safari, etc) has been, at best, a barely working port.
I'm assuming they are putting a lot more thought and care into it than the touchbar, crappy keyboards and the rest, but I'm also not holding out much hope either.
Apple can screw up, no question. But they don’t do the two-year hype cycle thing that just about everyone else does in technology (or video games).
It’s incredibly rare for Apple to publicly talk about things that won’t be selling extremely soon.
The iPhone had to be announced because it was going to show up on the FCC website, and obviously Apple wanted to control the message. I suspect the Vision Pro may be similar, but they also wanted developers to start getting ready so there would be software on day one.
The only thing I can think of that Apple pre-announced and failed at was the AirPower mat. They said it would be coming out soon after, and had to push that back a couple of times before finally canceling it.
Other than that small exception, if modern (post-Jobs-return) Apple announces something is coming, it will come out and be quite close to what they say.
They don’t pull a Humane AI, Segway, Cyberpunk 2077, or No Man’s Sky.
With software, their Come to Jesus moment is still in the future.
(To me, Swift is sort of the Jony correlate on the software side. Doesn't fit perfectly, of course, but very similar "we are perfect who cares about evidence la la la I can't hear you" vibes and results)
> What they don't do is sell you a lie a year before release then deliver shit (like every other fucking vendor).
If you're referring to Google, then you're right. But OpenAI has consistently delivered what they announced pretty quickly. Same with Microsoft. To think that Apple somehow has a secret sauce that helps them surprise everyone is an illusion. They've had 3 years now to show their interest in LLMs, but they're just too conservative to "think different" anymore.
I think LLMs are too inconsistent for Apple's taste, and I mean that in both senses. Their perfectionism won't risk bad output, ever, which is impossible with LLMs.
My expectation of Apple is that they lurk in the shadows, looking at what others do while perfecting their own thing. Then on the day of release, they'll be a decade ahead of competition.
Maybe they aren’t “decades ahead” but they’re examples of Apple being slow to market, launching very competitive products years after the market was established.
E.g. assuming Apple Vision launches soon, they’ll be “many years behind” Quest from the date of first launch, but most likely miles ahead as far as usability.
Apple has never promised any car. From what is known, Project Titan was cancelled years ago.
The only exception of not delivering I can recall was AirPower. It's a product they announced and then embarrassingly weren't able to finish up to their standards (or up to what was promised), so they cancelled it altogether.
> We've been hearing many awesome stories about the next thing Apple will do, only to realize their marketing team chose to keep it for future iOS/MBP/iPhone generations to keep the profits high.
I believe it's more likely we've heard awesome stories about things Apple will do in the future, only to realize that the average HN commenter is incapable of understanding that such stories are contextless leaks, and that it is far more likely you are operating with incomplete information than Apple's "marketing team" is holding things back for future "iOS/MBP/iPhone generations" to keep their profits high.
I know it's more fun to vomit dumb conspiracies onto the internet, but consider changing "realize" to something which conveys equivocation, because your theory about Apple's marketing team holding back mature technology in order to benefit future devices – in addition to being predicated on leaks and rumors, and risibly inane when you consider that such action would create a significant attack vector for competitors – is as equivocal as the belief that Trump is on a secret mission to destroy a global cabal of pedophiles.
I hope they get rid of it completely. People on r/LocalLLaMA and elsewhere have made much better assistants using GPT which use iOS APIs to control the phone. It's ridiculous that Apple still hasn't done anything useful regarding Siri.
I'd be (pleasantly) surprised; look at how long it takes them to allow system apps to get replaced with downloadable 3rd party.
And even then, the activation keyword is likely to be whatever Apple says. Similar logic as 3rd party keyboards, don't want user input to get stuck on even merely potentially untrustworthy or buggy code.
Correct me if I’m wrong but isn’t the activation word “special” in that it has to work in very low power states? I always assumed that is why the wake words are fixed, because said words need to “fit” within a very small power budget.
It's been rewritten several times, people largely don't notice because they don't try using it in different languages. And of course because they want to seem savvy so they repeat impressions from other people's posts, not realizing those posts are years old.
For me, fixed phrases would often do it. I use Siri mostly when driving, but it keeps saying, for almost any command/query: "Sorry, I can't do that while you are driving." (The other main use case is setting tea/cooking timers with hands full/greasy. This alone makes it pretty useful.)
I guess this won't change, because it is probably there for legal reasons: to avoid being sued by some "I just dried my cat in the microwave" style genius after causing a car accident (unrelated to Siri, but trying to shift the blame).
Adding support for smaller languages would be nice, actually. When it's reading Hungarian messages out loud it sounds incredibly garbled. I always have a great time listening to those and trying to guess the message. :) It would be nice if I could send a message to my significant other about being stuck in traffic in Hungarian. (The iPhone keyboard already does pretty decent voice recognition in the language.)
If they do it right, it might make me switch from Android. I've never used iOS before and the only thing I'm able to use Google assistant for is setting alarms, and it can't even delete the alarm I created just now.
GPT-4 voice is so, so good. Really what you would want a voice tool to be like. I can talk to it like a normal human being, unlike issuing specific commands loudly as with Siri.
But no matter the Siri shittiness (which I agree with), an LLM can only interact with the outside world, i.e. run commands, through interfaces that exist and have a reasonable API surface, no?
Apple has had automation for ages with Automator, Shortcuts etc but nothing that actually integrates well with day to day flow. So.. setting a timer when my hands are wet already works ok, and that’s about what I need.
I honestly wonder what type of voice interactions people want with their phones. I can see transcribing/crafting chat messages, I guess? But even so, it feels like it would mess up and use iMessage instead of WhatsApp. Will it narrate my memes, open links and read "subscribe for only 4.99 to read this article," cookie consents, etc.? If everything sucks, how is narrating it gonna help?
Maybe I’m old but I still don’t see the major value-add of voice interfaces, despite massively improved tech and potential.
I would be happy if it does my morning routine for me. Give me a brief summary of my emails over the night, tell me the headlines from the news outlets I follow, summarize tweets from my favorited accounts, and give me an overview of the market based on my investments and watchlists.
It’s a placeholder for iOS 18’s expansion. It’s 0.1 of the LLM in iOS. And the prior implementation was so ducking bad that I’m not sure how you would observe such a regression.
With iOS 17 they added a tiny little LLM to the predictive typing. I have the newest and greatest iPhone, but I feel that I very rarely see it in action. I must assume that it’s just too slow to keep up with my typing at the moment. Or it’s just not large enough to give very many useful suggestions.
Really? Typing on my iPhone 12 Pro has become a nightmare. I suspect it is because of the predictive typing ML shit. It happens all the freaking time now. The symptom is that my whole device just freezes for a few seconds while the next word is squeezed out. How do I turn it off?
Compared to Android, the iOS keyboard has always been a nightmare. However, I feel it has been causing me fewer issues within the past month-ish. Has it been updated recently?
It's like how the "New Forest" is really old now: even small LLMs are (from what I've seen which isn't exhaustive) large compared to Markov language models.
Autocomplete got worse because the new system in iOS 17 didn’t retain training data from prior versions. It reset everyone back to untrained. I’ve been manually correcting specialized language I use daily (e.g. “iRacing”) on my 12 (non-Pro) since the iOS 17 release, and now it gets it correct 99.5% of the time.
So, rather than turning it off, manually correct the incorrect completions and use the suggested-words bar frequently, and it will learn how you type. It’s just having to start over, after tossing out several OSes’ worth of training, that makes it feel worse.
Yes, their hardware is positioned phenomenally; the little RAM (even by phone standards) is exactly what you'd be hacking around for inference on mobile architectures.
The paper was definitely cool, but it doesn't allow you to run particularly large LLMs on iPhones. It allows you to run a certain kind of LLM (sparse ReLU-based LLMs) whose weights are somewhere less than 2x RAM. So 7B Falcon works, but the competitive-with-GPT-3.5-Turbo LLMs are still out of reach (and aren't ReLU-based, although maybe that could change in the future). And nothing is competitive with GPT-4 right now.
Of course, in the long run I think it will happen — smaller and more efficient models are getting better regularly, and Apple can also just ship its new iPhones with more RAM. But I'd be very surprised if there was GPT-4-level intelligence running locally on an iPhone within the next couple of years — a model that size is so big right now, even with significant memory optimizations, and I think distilling it down to iPhone size would be very hard even if you had access to the weights (and Apple doesn't). More likely there will be small models that run locally but fall back to large models running on servers somewhere for complex tasks, at least for the next couple of years.
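The napkin math behind those size limits (the RAM figure assumes a current iPhone Pro; the quantization levels are my assumption):

    # Back-of-envelope weight-memory math for on-device inference.
    def weight_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    iphone_ram_gb = 8                    # iPhone 15 Pro
    print(weight_gb(7, 4))               # 7B model at 4-bit: ~3.5 GB -> fits in RAM
    print(weight_gb(70, 4))              # 70B at 4-bit: ~35 GB -> far beyond 2x RAM
    # The sparse-ReLU flash trick relaxes the ceiling to roughly < 2x RAM,
    # which still rules out GPT-4-class models on today's phones.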
> And nothing is competitive with GPT-4 right now.
You mean nothing available? Or you mean nothing that public knows exists? The answers to those two questions are different. There are definitely products that aren't available but the public knows exist and are upcoming that are in GPT-4's ballpark.
I mean nothing that is able to be benchmarked and validated by third parties is GPT-4 quality. I know there are upcoming releases that are hyped as being equal to GPT-4, e.g. Gemini Ultra, which I am very excited to get my hands on — but regardless, Ultra is not small enough to run on phones, even using the sparse ReLU flash memory optimization. And we'll see how it benchmarks once it's released; according to some benchmarks Gemini Pro has somewhat underperformed GPT-3.5-Turbo [1], despite Google's initial claims. (Although there are criticisms of that benchmarking, and it does beat the current 1106 version of GPT-3.5-Turbo on the Chatbot Arena leaderboard [2], although it slightly underperforms the previous 0613 version.)
I heard rumours of these claims a few weeks ago, I assume they are talking about the same thing. Nothing concrete but from a reputable person and honestly with how well mixtral performs on the chatbot arena elo board I wouldn't be surprised if it's true.
Siri is really quite dumb. I am confident that a 7B model would be able to provide better responses in over 90% of user queries. I can't even get Siri to reliably set a timer.
Eh, no. 7B Mistral / DeepSeek would almost certainly already be able to function as a super-Siri, but probably something closer to Phi-2 + the new MLX Apple optimisations. Have you tried those? https://twitter.com/awnihannun/status/1735355067673526360
If trained on an iPhone API + documentation and given a little web access it would blow absolutely everything out of the water.
If they can already create *basic* Python/Swift/JS/Rust apps that set timers, save things, and create lists, how is that too dumb to be a Siri replacement? They just have to give it access to an iPhone/web API, like ChatGPT's code analysis tool.
So if you ask it "hey Siri, do this, this and this," it will create a script and then run it on the internal API, or fetch an article and then work on that, etc.
I know it's still logically "dumb," but I'm not trying to play game-theoretical scenarios with my phone or do logic tests or advanced math (yet).
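Something like this toy loop, say (every name here is hypothetical; a real integration would need Apple-sanctioned, sandboxed APIs):

    import json

    # Hypothetical, heavily restricted phone API surface exposed to the model.
    TOOLS = {
        "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
        "add_to_list": lambda list_name, item: f"Added {item!r} to {list_name!r}",
    }

    def run_assistant(llm, utterance: str) -> str:
        # Ask the model for a JSON tool call instead of free-form text.
        plan = llm(
            f"Pick one tool from {list(TOOLS)} for: {utterance!r}. "
            'Reply as JSON: {"tool": ..., "args": {...}}'
        )
        call = json.loads(plan)
        if call["tool"] not in TOOLS:   # never execute an unlisted operation
            return "Sorry, I can't do that."
        return TOOLS[call["tool"]](**call["args"])

    # Canned stand-in for a real model, just to make the sketch runnable:
    fake_llm = lambda prompt: '{"tool": "set_timer", "args": {"minutes": 5}}'
    print(run_assistant(fake_llm, "hey, set a timer for five minutes"))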
True, but you could make the API restricted: certain routes completely locked, some requiring double checks, some requiring on-screen approval or Face ID, throttling of outside fetches, only being able to run GET requests and not writes, no financial app control, etc.
But yeah, "hey Siri, transfer all of my funds to Eric" or "hey Siri, group all of my photos where I'm nude and send them to Jack" are new, almost sci-fi attack vectors.
Depends on if they implement some form of function calling, really. If something like a 7B Mistral fine-tune had access to search and various applications, I imagine it would perform fine and better than Siri.
This is part of the mismatch in comparing Alexa/Siri/Cortana to a chat-based LLM right now. If you just want chat and info retrieval, today’s LLMs are way better than the conversational dialogue, search, and Q&A capabilities any of those assistants have. But if you want relatively predictable task completion for things like smart home control, timers and alarms, real-time weather updates, etc. (basic but frequently used interactions), or any integration with your phone or computer directly, there’s a lot to do that isn’t solved yet in the LLM space. It's more of a system integration and action-choice problem that the existing assistants have been hammering away at for years.
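A sketch of the hybrid shape this usually takes (illustrative only, nothing Alexa- or Siri-specific): keep the frequent, safety-critical intents on deterministic handlers and let an LLM take only the open-ended remainder.

    import re

    def start_timer(minutes: int) -> str:       # deterministic, testable path
        return f"Timer set for {minutes} minutes"

    def llm_chat(utterance: str) -> str:        # placeholder for a model call
        return f"[LLM answers: {utterance!r}]"

    def route(utterance: str) -> str:
        m = re.match(r"set (?:a )?timer for (\d+) minutes?", utterance, re.I)
        if m:
            return start_timer(int(m.group(1)))
        # Anything without a predictable contract falls through to the LLM.
        return llm_chat(utterance)

    print(route("set a timer for 10 minutes"))
    print(route("what should I cook tonight?"))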
I would argue that “info retrieval” is also something the LLM space has yet to solve to a human level of reliability, but I think your comment is right on. I see this all as part of the greater symbolic vs. stochastic (/neat vs. scruffy) dynamics.
I would hope that it would at last get you out of the hell that is “what the heck did I name that stupid light?”. Device naming is, in my opinion, the worst part of any of the voice based home assistant things.
Is it “porch led”, “porch light” or “outdoor light”? Or is “outdoor light” actually the one in the front yard? What is the one by my kid’s nightstand? And what routine name do I use to set the mood for watching a movie?
I would hope a properly trained LLM with an awareness of my devices and their locations would allow for more verbal ambiguity when controlling things.
Maybe, but that seems like overkill: solving a relatively straightforward problem, which is about resolution to real entities, with a generative model that has no entity grounding (and may introduce other problems). It's really just not solved because Alexa/Siri leadership doesn't actually care enough about that use case to solve it. Device entity resolution and disambiguation does not require an LLM for a limited number of devices; it just requires people to prioritize that over other things, and smart-home device owners are the kind of early adopter that would not be the current market focus for growth (my guess, haven't worked on Alexa in many years).
I know a half dozen different ways to improve that today without LLMs.
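For example, plain fuzzy string matching already goes a long way for a handful of devices; a minimal sketch using nothing but the standard library:

    import difflib

    devices = ["porch led", "front yard light", "kids nightstand lamp", "living room lamp"]

    def resolve(utterance: str) -> str | None:
        # Closest-name match; no LLM needed at this scale.
        matches = difflib.get_close_matches(utterance, devices, n=1, cutoff=0.4)
        return matches[0] if matches else None

    print(resolve("porch light"))   # -> "porch led"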
Yes, I've been trying it out regularly ever since it was released. The last time, in October, I talked to it for like 30 minutes while driving (just to test it again). It simply doesn't work for me.
I really, really doubt it for one reason: I’m convinced Apple is still terrified of that “Amazon Alexa tells child to stick a penny in a socket” story, and will hamstring themselves in an attempt to have their agential cake and eat it too
Good for you. It's the only thing phones are required to be able to do in the US even if they don't have a working SIM or you haven't paid the phone bill.
Well, I guess they're also not allowed to cause RF interference or randomly catch fire.
Apple is all about a controlled pleasant experience, it doesn't matter if it doesn't give you shiny new things; most Apple customers don't even know those shiny new things exist, so they keep spreading the word that "Apple is so easy and simple."
The idea of having an unpredictable LLM in the ecosystem is Apple's worst nightmare. I bet they will overly restrict it to the point that it stops being a general purpose LLM and becomes a neutered obedient LLM that always acts according to Apple's rules.
Also, it doesn't help that ALL the authors of this Apple paper are Chinese. It raises questions about how Apple will handle political debates with its LLM.
> Also, it doesn't help that ALL the authors of this Apple paper are Chinese. It raises questions about how Apple will handle political debates with its LLM.
The CCP thinks it owns all Chinese people on Earth, but that doesn't mean you have to agree with them!
Well, Apple was a prime Intel client for years until they released the M1, and ARM in the cloud isn't really a thing for now... Ultimately it's all about what makes the most sense for making the most money, and in a datacenter that means x86 with Linux/Unix.
That's a false statement; you have to be specific about which one is not available on macOS. There are plenty you can use already. Even FreeCAD runs on macOS.
They are implying that even though Apple is a wealthy consumer hardware company that has major spats with Nvidia and Samsung, it doesn't always make economic sense to make the tools they might need in-house when they can simply buy them from a rival.
So rather than invest engineering resources to re-imagine the fridge, they can simply buy them from established manufacturers that make household appliances like Samsung, Sony etc.
The point of this thread is that even though Apple makes silly charts saying how good their hardware is at ML, they use products that their silly charts say aren't as good.
There is no such hypocrisy if they use Samsung refrigerators.
Training the model requires inference for forward propagation, so even then, for your comment to be relevant, you'd need to find a plot that Apple uses to compare inference on quantized models versus Nvidia, which doesn't exist.
...and doing either of those things with CUDA is impossible on a Mac. Why? Because Apple burned their bridge with Nvidia and threw a temper tantrum, that's why. Now Nvidia can't support macOS even if they wanted to.
That's kinda the point of my original comment. Apple claims to know what's best, but contradicts themselves through their own actions. We wouldn't be in awkward situations like this if Apple didn't staunchly box out competitors and force customers to follow them or abandon the ecosystem. It's almost vindicating for people like me, who left macOS because of these pointless decisions.
I feel like Apple is only testing the waters with AI right now, but perhaps if they get involved enough they'll spend money on their own compute infrastructure? Nvidia is kind of the king at GPU compute right now, and developing comparable hardware is no small or cheap task, but I think Apple is in a very good position to be able to make it work---if they decide to invest in it. But honestly, as far as corporate feud goes, I feel like companies will happily suck it up if it makes some process cheaper and/or easier
> But honestly, as far as corporate feud goes, I feel like companies will happily suck it up if it makes some process cheaper and/or easier
That’s what I think is going on. Apple hated being on the hook for Nvidia’s terrible drivers and chipset/heat problems that ended up causing a ton of warranty repairs.
In this case they’re not a partner, they’re just a normal customer like everyone else. And if Intel comes out with a better AI training card tomorrow Apple can switch over without any worry.
They’re not at the mercy of Nvidia like they were with graphics chips. They’re just choosing (what I assume to be) the best off the shelf hardware for what they need.
Apple silicon is good, but it’s designed for portables. Even the Studio and Mac Pro are just laptop chips stitched together. They gotta use heavy-duty gear to do heavy-duty shit. I know they have a soured relationship with Nvidia, though, so I would like to see them bolster the AMD/ROCm ecosystem. Chances are they’re working on their own stuff here too. They are sitting on billions of dollars of liquid cash, so I’d imagine they’re using that for some serious R&D.
Dependent is a strong word. At the end of the day all these DL models run on any hardware, and you can easily swap out one type of hardware for another perhaps with some small performance impact. They're commodities, basically.
I’ve made an example app for a Flutter plugin I created that can do this.
Open-source, runs natively on all major platforms. I shared videos showing it on my iPad Mini, Pixel 7, iPhone 12, Surface Pro (Win 10 & Ubuntu Jellyfish) and Macs (Intel & M archs).
By all means, it’s not a finished app. I simply wanted to use on-device AI stuff in Flutter so I started with porting over llama.cpp, and later on I’ll tinker with porting over whatever is the state of the art (whisper.cpp, bark.cpp etc).
The app is compatible with any GGUF files, but they must be in the ChatML prompt format, otherwise the chat UI/bubbles probably get funky. I haven’t made it customizable yet; after all, it’s just an example app for the plugin. But I am actively working on it to nail my vision.
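For reference, ChatML just wraps each turn in <|im_start|>/<|im_end|> markers, which is what the chat bubbles key on; a minimal formatter (the helper name is mine):

    def chatml(system: str, user: str) -> str:
        # Render one exchange in the ChatML turn format.
        return (
            f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user}<|im_end|>\n"
            f"<|im_start|>assistant\n"   # generation continues from here
        )

    print(chatml("You are a helpful assistant.", "Hello!"))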
You can use it commercially but there are some restrictions, including some of a competitive nature, like using the output to train new LLMs. This is the restriction that ByteDance (TikTok) was recently banned for violating.
> Usage and License Notices: The data, and code is intended and licensed for research use only. They are also restricted to uses that follow the license agreement of LLaMA, Vicuna and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
What I thought when reading the title: A new base model trained from the ground up on multimodal input, on hundreds to thousands of GPUS
The reality: a fine-tune of Vicuna, trained on 8xA100s, where Vicuna is itself a fine-tune of Llama 13B. It then further re-uses some parts of LLaVA, an existing multimodal project already built upon Vicuna. It's not really as exciting as one might think from the title, in my opinion.
This seems like a good but small research project by a research team at Apple, far away from what product teams are working on for the next generation of Apple products.
The innovation is the modification of the neural network architecture to incorporate the spatial-aware visual sampler. The data and existing models are not the interesting part.
Maybe the abstract of the paper is a better introduction to what this is:
> We introduce Ferret, a new Multimodal Large Language Model (MLLM) capable of understanding spatial referring of any shape or granularity within an image and accurately grounding open-vocabulary descriptions. To unify referring and grounding in the LLM paradigm, Ferret employs a novel and powerful hybrid region representation that integrates discrete coordinates and continuous features jointly to represent a region in the image. To extract the continuous features of versatile regions, we propose a spatial-aware visual sampler, adept at handling varying sparsity across different shapes. Consequently, Ferret can accept diverse region inputs, such as points, bounding boxes, and free-form shapes. To bolster the desired capability of Ferret, we curate GRIT, a comprehensive refer-and-ground instruction tuning dataset including 1.1M samples that contain rich hierarchical spatial knowledge, with 95K hard negative data to promote model robustness. The resulting model not only achieves superior performance in classical referring and grounding tasks, but also greatly outperforms existing MLLMs in region-based and localization-demanded multimodal chatting. Our evaluations also reveal a significantly improved capability of describing image details and a remarkable alleviation in object hallucination.
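As a toy illustration of that hybrid "discrete coordinates + continuous features" region representation (my sketch, not the paper's code; Ferret uses a learned spatial-aware sampler rather than mean pooling):

    import torch

    def hybrid_region(feature_map: torch.Tensor, box: tuple) -> torch.Tensor:
        # feature_map: (1, C, H, W) image features; box: (x1, y1, x2, y2) in feature-map pixels.
        x1, y1, x2, y2 = box
        region = feature_map[:, :, y1:y2, x1:x2]      # continuous features for the region
        pooled = region.mean(dim=(2, 3))              # stand-in for the learned sampler
        h, w = feature_map.shape[2:]
        coords = torch.tensor([[x1 / w, y1 / h, x2 / w, y2 / h]])  # normalized discrete coords
        return torch.cat([coords, pooled], dim=1)     # joint region token, shape (1, 4 + C)

    feats = torch.randn(1, 256, 32, 32)
    print(hybrid_region(feats, (4, 4, 12, 12)).shape)  # torch.Size([1, 260])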
This is going to be great for accessibility! Imagine being blind and loading up a video game and using this to figure out what's around, having everything described locally. I mean, um, well that's what I'd use it for anyway. But knowing Apple, we won't be able to prompt the LLM directly so that probably won't happen until 5 years from now.
Yes, I wondered whether "referring" had some special meaning, since the way they seem to use it suggests the word reference would normally be more appropriate there (unless it's a special meaning that warrants the different word).
I'm just inferring myself, but I believe it refers to discussing things in the foreground/background, or at a specific location in the provided image (such as top right, behind the tree, etc.), in user queries.
It sounds like the "region inputs" are raster or vector inputs. So I'm imagining highlighting a region of the photo with my finger and having it tell me "that's the Duomo in Florence."
This will make drone-based AI image context for behavior extremely powerful, especially when the MLLM's handling of the spatial sitrep is precise enough for autonomous movement, and then ultimately for decision-making WRT interacting with humans (positive interactions and negative interactions).
Is it just me, or doesn't this MLLM seem particularly useful for flying objects with vision?
Many big “open source” releases in the AI community recently are not licensed for commercial use. That's not really OSS at that point (e.g. the Fuyu model from Adept).
But what do we expect from these giants? They're not going to create fertile ground for new competition. The only businesses they foster are those living under their thumb and paying tax.
I guess I at least hoped for "commoditize the complements" here. Make Google and OpenAI broadly less special.
I seriously doubt that. I’m sure Apple got the rights to whatever they need, it’s not like they’re short on money.
But the fact that they licensed it doesn’t mean that license can be transferred to other people. So it may be that they can only release it for research under the terms of the licenses they got.
One big plus if this takes off as a base model is the abundance of weasel family animals to use in naming the derivatives. Ermine, marten, fisher, ... I'd like to call Wolverine. Llama didn't have much room for some interesting variety beyond alpaca and vicuna.
I believe that's the point the parent commenter was trying to make, although as the leaked Google document noted, "[Google has] no moat and neither does OpenAI".
This is more evidence that Apple is investing in building an MLLM as good as anything OpenAI and Google can build, albeit in a more Apple-y way (privacy-first, licensed content, etc.).
Yes, it looks like Apple is going after everyone and anyone that has a web-based LLM (ChatGPT, Poe, Claude, etc.) via developer kits, an "LLMKit" that can work offline.
This will only work if their models (even their tiny or even medium / base models) equal (or are better than) GPT-3.5 / 4.
From there, Google will follow Apple in doing this offline / local LLM play with Gemini.
OpenAI's ChatGPT moat will certainly shrink a bit unless they release another powerful multimodal model.
Apple's moat has been and continues to be their insanely large installed base of high-margin hardware devices. Meanwhile, LLMs are rapidly becoming so commoditized that consumers are already expecting them to be built-in to every product. Eventually LLMs will be like spell check—completely standard and undifferentiated.
If OpenAI wants to survive, they will need to expand way beyond their current business model of charging for access to an LLM. The logical place for them to go would be custom chipsets or ARM/RISCV IP blocks for inference.
Yes, OpenAI can copy this, but they’ll still have less of a moat. That’s the problem with moats, once they’re gone even if you copy what others do, you don’t have a moat anymore.
Think of it in a physical sense. OpenAI is a high walled castle surrounded by a physical moat. This protects them and their business model. Apple comes along and builds a super tall tower right next to the moat. They can now see into OpenAI’s castle, fire arrows, catapult in a giant wooden badger, etc. Even if Open AI copies the design of Apple’s really tall tower and built it behind the moat and castle walls, it wouldn’t do much because Apple still would be able to get stuff over the moat and walls. The moat doesn’t matter anymore for the most part. The castle (OpenAI) can be compromised and needs bigger walls, relocating to someplace with a bigger, or a way of attacking the tower (Apple). Copying doesn’t really accomplish any of those three.
If you do bet on the phone, make sure you only bet on the first-party OEM. API deprecation, exclusive entitlements, user restrictions, payment processing, or flat-out Sherlocking are all realistic and credible threats to your business.
It does not matter much unless they make a competitor to the iPhone and get consumers to choose it, because consumers keep buying iPhones and will then have Apple hardware.
And they cannot bring a software ecosystem to their hardware without Google, at least not easily.
It doesn't matter because Apple would not offer OpenAI actual integration terms, period. Their only option is to create an alternative hardware platform, because Apple fights tooth-and-nail to pick and choose what software iPhone users are allowed to run.
My situation is a bit unique, so the term "manufacturer" might not be the most accurate way to describe who created me. Here's a breakdown of what you need to know:
Developed by Google AI: I was created by a team of researchers and engineers at Google AI, specializing in language models and artificial intelligence.
Trained on a massive dataset: My knowledge and abilities come from being trained on a massive dataset of text and code, containing books, articles, code, and other forms of information.
Continuously learning and evolving: I'm still under development, constantly learning and improving as I interact with users and process new information.
So, while I don't have a single manufacturer in the traditional sense, I'm the result of collaboration and advancement in AI research and development at Google.
I hope this helps clarify things! Let me know if you have any other questions.
Usually I would almost automatically vote down a comment where someone just stuck something into an LLM and pasted the output. It almost never adds to the discussion.
However, in the case that we’re talking about the kind of output generated by an LLM in some circumstance, it can be instructive. Like you noted, this is a perfect example.
I guess I didn't make it clear enough that the response was an answer to the example question in the comment I responded to, verbatim. I thought it interesting. Guess others didn't.
By "A series" are you referring to the Nvidia A100 or the Apple A-series iPhone/iPad chips? If the latter, I don't think you can draw that conclusion. Training has memory and processor requirements that are very different from inference. You don't need iPhones and iPads to train models—you need them to run models. These are two very different things.
I think you’re completely correct, but if they were caught off guard by the AI train, they shouldn’t be testing the waters now. It should be treated as an existential threat.
I've always wondered why Apple is never in the conversation around modern AI; until now it has almost felt like they've been just watching from the sidelines without taking part in all the commotion.
Maybe they can just afford to observe before making major decisions on the direction of the company... For a company like Apple, I feel like they won't lose their customer base just because they are taking the AI race slowly; in fact, Apple has often been late to introduce very common features.
... Anyways, my point being that Apple will gladly introduce a polished product a couple of years after everyone else has already done it, and their target audience will still applaud their work and give them money. Apple, for some reason, simply _can_ afford to test the water.
Apple doesn’t even sell Nvidia cards in their Mac Pros. Are they training it on Linux?
I think Apple would strive to be great at all computing related tasks. “Oh, Macs are not good for that, you should get a PC” should make them sad and worried.
AI/LLM is the new hot thing. If people are using Windows or Linux, you’re losing momentum, hearts and minds… and sales, obviously.
If a train with a GM diesel engine delivers raw materials to a Ford factory for making F-150s, would you conclude that consumers should start driving trains to work?
Not at all, just that the engineers at Ford would be more proud if the train used their own diesel engine. And that this kind of thing affects public perception. “Ford is not for heavy duty”
Apple doesn't even support Nvidia cards on their Mac Pros. The technical details are above my head, but the way Apple M* chips handle PCIe makes them incompatible with GPUs and other accelerator cards, whether you use macOS or Linux.
AFAIK all their cloud services run on x86 hardware with Linux, including Xcode Cloud (which runs macOS in a QEMU VM, in a way that only Apple can technically and legally do).
But no one is training these kinds of models on their personal device. You need compute clusters for that, and they will probably run Linux. I'd be surprised if Microsoft trains their large models on anything other than Linux clusters.
> But no one is training these kinds of models on their personal device
On-device transfer learning/fine-tuning is definitely a thing, for privacy and data federation reasons. It's part of the reason why model distillation was so hot a few years ago.
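The classic recipe is small enough to sketch (standard Hinton-style distillation, nothing Apple-specific): the student matches the teacher's softened logits plus the ground-truth labels.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft target term: KL between temperature-softened distributions.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard target term: ordinary cross-entropy against the labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    s, t = torch.randn(4, 10), torch.randn(4, 10)
    y = torch.randint(0, 10, (4,))
    print(distillation_loss(s, t, y))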
Apple would want to train models as fast as they can. Nvidia provides an off-the-shelf solution they can just buy and use, for a very reasonable price, and later resell on the second-hand market.
If they wanted to use their own hardware they would either need more of it, which would cost a lot and divert production from sellable devices; or they would need to make special chips with much bigger neural engines, which would cost even more.
Also Apple uses public clouds for service stuff. They may not even own any hardware and just be renting it from AWS/Azure/GCP for training.
> I think Apple would strive to be great at all computing related tasks. “Oh, Macs are not good for that, you should get a PC” should make them sad and worried.
What percent of Apple's customers train models? Does it even crack 1%?
Apple already fails for many types of computing, e.g. any workflow that requires Windows or AAA gaming.
They used to sell the best developers’ machines. Before Docker, at least. Don’t developers need to train models? And if they are doing so on Linux or Windows, wouldn’t that make it easier for them to better, or exclusively, support the end product on the platform they’re more used to?
You want to be the platform where the excitement is happening.
Apple had a falling out with Nvidia a number of years ago. I believe they were using chips from Nvidia in their MacBook Pros, the first to come with both integrated and discrete graphics, but the solder between the chips and the motherboard kept cracking and a rather large number of MacBooks needed to be repaired.
That's not how software or any development works. Even if there is a team working on an M4 SuperUltra++ that competes with the H100, in the meantime the software team will continue to use what is available, which may be Intel/AMD PCs with Nvidia GPUs, or Google Cloud/Azure/AWS.
Presumably because this is Conda, none of this can be run on any Apple hardware, despite people managing to get M-series processors to do a bit of dabbling with AI?