
Aside from their spectacular ARM laptops, they seem like they're being left behind.

Vision Pro is probably a gimmick, along with the whole VR world right now. That will change eventually, but overall I don't see anything exciting about Apple.

Their pricing is infuriating, and so are their decisions (a laptop with 8GB of RAM in 2024???)

To me it looks like they're stuck in a "this is what worked for us, so let's only do this" mentality, taking no risks.

They stand on the shoulders of giants and, most importantly, on their cultural presence...




Seems to me they've run out the strategic course Jobs set and are now cruising on "play it safe, and more of it." Hence the wide variety of sameness in their product offerings. Cook is a good operator, but not a strategic visionary. As for the hot topic here, Apple was always heavy-handed; only now is the era where that hand gets to be really heavy.


They missed AI along with everyone else except OpenAI and MS. But it's hard to say they're being left behind when they make the defining products of their categories. Obviously there's the iPhone, but also AirPods, iPad, and Apple Watch.

And the ARM changeover in the laptops has been so seamless that people seem to ignore the huge risk of switching architectures. And now everyone is chasing them on power/battery life.

They've had some missteps, but we need a few more years to really know if they have been left behind. Apple was never one to be first to do something.


Without a doubt, they have one hell of an engineering team.

After a life on Windows and some periods on Linux, Apple managed to refine their OS and hardware to the point where I can say it doesn’t get in the way and it “just works”, which, I think, is what most professionals want.


I’m a Mac user at home, and I don’t get their AI story/path now that they’ve dropped AMD/Nvidia GPU support with the Apple Silicon transition.

Maybe they’ll manage to get LLMs running well locally with the new low-bit quantization developments? Not my area. But for training/learning it seems like Apple is DOA. They have the same problem as AMD: no one is doing research with their hardware or software.

Intentionally shipping low RAM/unified memory quantities seems short-sighted too. Maybe with a 16GB baseline they could do something special with local LLMs.
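
For what it’s worth, Apple’s MLX stack already points in that direction. A rough sketch of what local low-bit inference looks like (assumes the mlx-lm package is installed; the model name is just an illustrative pre-quantized community build, not an endorsement):

  # Rough sketch: run a 4-bit quantized LLM locally on Apple Silicon
  # via Apple's MLX. Assumes: pip install mlx-lm
  # The model name is illustrative; any pre-quantized mlx-community
  # model that fits in unified memory should work the same way.
  from mlx_lm import load, generate

  model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
  text = generate(model, tokenizer,
                  prompt="Why does unified memory matter for local LLMs?",
                  max_tokens=128)
  print(text)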


I think you're looking at a very narrow use case and deciding that, because they don't make a system you'd be happy with for your niche use, they're DOA. Someone selling just under 6.5 million units of anything seems like the opposite of dead to me. Are there vendors selling more? Of course, but there are also vendors selling less. Not every Mac user cares about AI or training or fine-tuning a local LLM.


Very true, my needs are niche for sure. But I’m thinking more about the near future. AI/LLMs are going to have some general applications that users are going to want, and will become the norm; I think it’s clear that will shake out soon. Apple is at risk of being left behind because the only people working on that stuff for Apple work at Apple. Hobbyists and researchers are on Linux/Windows for the most part. Software development doesn’t have such a large platform difference; lots of developers use macOS. But ML is different and I think they should care.


> But ML is different and I think they should care.

It’s totally this time I promise, just like, one more ~~lane~~ model.

I’m sure they do care. I wouldn’t be surprised if they land significant support for on-device processing of models: they’ve already got the chip, dropping in local models is a sensible next step, and it’s close to zero effort for them.
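
To put a rough shape on “close to zero effort”: the conversion tooling already exists. A sketch with coremltools, where the toy model is purely illustrative:

  # Sketch: convert a toy PyTorch model to Core ML so the OS can
  # schedule it on the Neural Engine/GPU.
  # Assumes: pip install torch coremltools
  # "Tiny" is a placeholder, not a real network.
  import torch
  import coremltools as ct

  class Tiny(torch.nn.Module):
      def forward(self, x):
          return torch.relu(x)

  traced = torch.jit.trace(Tiny().eval(), torch.zeros(1, 8))
  mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=(1, 8))])
  mlmodel.save("Tiny.mlpackage")  # ready to drop into an Xcode project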

> LLMs are going to have some general applications that users are going to want, and will become the norm

I have yet to see anyone, in my personal or professional circles, use any LLM:

- for more than a week

- for anything more than cutesy trivial things.

I’m sure there are people around stapling models into their toaster, but that is so far from the norm.


Part of Apple's problem is that they're expected to provide vendor support for third-party stuff. Who accelerates PyTorch or ONNX for Apple Silicon, if not Apple?
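
To be concrete, the answer so far has mostly been the MPS backend Apple contributed upstream to PyTorch, which lets stock code target the Apple GPU:

  # Stock PyTorch: the MPS backend targets the GPU on Apple Silicon.
  import torch

  device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
  x = torch.randn(1024, 1024, device=device)
  y = x @ x  # the matmul runs on the Apple GPU when MPS is available
  print(y.device)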

They've done an okay job of that so far, but their flagship library is diverging pretty far from industry demand. At best, CoreML is a slightly funkier TensorFlow; at worst, it's a single-platform model cemetery. No matter what road they take, they have to keep investing in upstream support if they want Nvidia to feel the heat. Otherwise it's CUDA vs. CoreML, which is an unwinnable fight when you're selling to datacenter customers.

I think it's possible for Apple to make everyone happy here by reducing hostilities and dedicating good work where it matters. Generally, though, it feels like they're wasting resources trying to compete with Nvidia and retreading the open-source work of three different companies.


> an unwinnable fight when you're selling to datacenter customers.

Didn't Apple pretty much throw in the towel in this market simply by their choice of form factor? The sheer desperation of users who want a device in this space shows in the "creative" ways they mount Apple's offerings in a rack.

All of the user-friendly things they've done, shrinking the footprint, making the machines silent, etc., are things a data center does not care about. Make it loud, with fans to keep things cool, so they can run at full load 24/7 without fear of melting down.

So from that alone, we can assume Apple doesn't care about competing with CUDA. As long as they can show a chart in an overproduced hype video for a new hardware announcement with "arrows go up" as the theme, that is ALL they care about.


I mostly agree, which is why I question their strategy of even "competing" at all. The existence of CoreML feels strictly obligatory, as if the existence of PyTorch and TensorFlow spurred FOMO in the C-suite. It's not terrible, but it's also pretty pointless when the competing libraries do more things, faster.

Users, developers, and probably Apple too would benefit from just using the prior art. I'd go as far as to argue Apple can't thread the AI needle without embracing community contributions. The field simply moves too fast to ship "AI Siri" and call it a day.

> The sheer desperation of users who want a device in this space shows in the "creative" ways they mount Apple's offerings in a rack.

Well, you and I both know that nobody is doing that to beat AWS on hosting costs. It's a novelty, and the utility beyond that is restricted to the few processes that require macOS in some arbitrary way. If we're being honest with ourselves, any muppet with a power drill and enough 1U rails can rackmount a Mac Mini.


> I mostly agree, which is why I question their strategy of even "competing" at all.

If it makes their camera "smarter", it's a win. If they can make Siri do something more than "start a timer", that's a win. If they can translate text in images more accurately, it's a win. There are a lot of ways on-device AI could help users without all of the power-hungry work of creating or fine-tuning a model. They can do that on the mothership and just push models to the device.

Not everyone needs to do AI the way you're trying to do it.


I think that's a mistaken way of viewing it. Apple's failure in the gaming space is entirely a matter of policy; look over at the Steam Deck, where Valve is running Microsoft IP without paying for their runtime. Some people really do get to have their cake and eat it too.

Any of the aforementioned libraries could make their camera smarter or marginally improve Siri/OCR. What bothers me is that Apple wasted their time reinventing the wheel; they're making a mistake by assuming their internal library will inherently appeal to developers and compete with the SOTA.

The reason I criticize them is that I legitimately believe Apple is one of the few companies capable of shipping hardware that competes with Nvidia. Apple is their only competitor at TSMC; it's entirely a battle of engineering wits between the two of them right now. Apple is going nowhere fast with CoreML and the Accelerate framework, but they could absolutely cut Nvidia off at the pass by investing in the unified infrastructure that Intel and AMD refuse to. It also wouldn't harm the customer experience, it leverages third-party contributions to advance their progress, and it frees up resources to work on more important things. Just sayin'.



