
I wish they'd put a huge FPGA on every device.



You're not going to like the battery lif


I'm assuming your phone died before finishing this comment, thus proving your point.


I hope they put a laser on every shark.


For all Apple's flaws, I trust that they could disrupt the AI industry once they enter it. Their decisions are all highly calculated, and they employ vast swaths of researchers. I would not be surprised if they entered the market with a well-functioning local LLM.


Have you actually used Siri? It's absolutely garbage at anything even remotely resembling basic AI/LLM functionality, even after a decade of work on it.


Obviously, they haven't invested much into it (though apparently they will for iOS 17). When Apple does invest money, however, the results are usually impressive. Take the M1, for instance.


I'm not debating the mechanics of why it's so shit. I'm saying they are many years behind everyone else, and it's not obvious that simply throwing money at the problem will fix that, or that they have much of a track record of shipping good software despite unlimited budgets and many years of opportunity to prove otherwise.


Llama 2 13B runs on an M1 at 13 tokens/s, so they should be able to do something similar on an iPhone.

https://gist.github.com/gengwg/26592c1979a0bca8b65e2f819e31a...
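
For a sense of how simple this already is, here's a minimal sketch using llama.cpp's Python bindings (llama-cpp-python). The model filename is a placeholder for whatever quantized Llama 2 weights you've downloaded, and n_gpu_layers=-1 offloads every layer to Metal on Apple Silicon:

    # Minimal sketch, assuming llama-cpp-python is installed
    # (pip install llama-cpp-python) and a quantized Llama 2 13B
    # file has been downloaded; the path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-2-13b.Q4_K_M.gguf",  # placeholder filename
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on M1)
        n_ctx=2048,       # context window size
    )

    out = llm("Explain FPGAs in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

Quantized to 4 bits, the 13B weights take roughly 7-8 GB, which is why M-series unified memory handles them comfortably.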


It's pretty clear they made a conscious choice to keep Siri simple (as frustrating as that is). But that reveals nothing about their capabilities with LLM-based systems.


It feels like it's too late. Microsoft, Meta, and Google all beat Apple to the punch with highly usable, competitive local inference frameworks. Apple could release a TensorFlow-style library with full-fat CUDA and Linux support tomorrow, but it would just be Another Competing Solution next to ONNX and PyTorch.
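
For what it's worth, Apple's existing route is Core ML via coremltools, which converts PyTorch models rather than replacing PyTorch. A minimal sketch, with a toy model standing in for a real network:

    # Minimal sketch, assuming coremltools and torch are installed;
    # the two-layer model is a stand-in for a real network.
    import torch
    import coremltools as ct

    model = torch.nn.Sequential(
        torch.nn.Linear(10, 10),
        torch.nn.ReLU(),
    ).eval()

    example = torch.rand(1, 10)
    traced = torch.jit.trace(model, example)  # TorchScript trace

    # Convert the traced model to a Core ML "ML program".
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=example.shape)],
        convert_to="mlprogram",
    )
    mlmodel.save("Tiny.mlpackage")  # runs on-device via Core ML

That pipeline works, but it underlines the point: the training and tooling still live in PyTorch.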

The hardware side doesn't look much better. Apple's biggest userbase is on the iPhone, which constrains what kinds of models you can realistically deploy. The Mac has a deeply ingrained audience, but it's unlikely that AI will be a selling point for commercial customers (who have Nvidia) or PC users (who have other options).

Honestly, I believe AI research would be a wasted investment relative to supporting third-party libraries upstream and welcoming Nvidia back onto their platform.



