> They can't just slap more memory on the board, they would need to dedicate significantly more silicon area to memory IO and drive up the cost of the part,

In the pedantic sense of just literally slapping more chips onto existing boards? No. They might have one empty spot for an extra BGA VRAM package, but not enough for the gains we're talking about. But this is absolutely possible, trivially so, for someone like Intel/AMD/Nvidia with full control over the architecture and design process. Is it a switch they flip at the factory three days before shipping? No, obviously not. But if they had intended this ~2 years ago, when the product was still on the drawing board? Absolutely. There is zero technical/hardware/manufacturing reason they couldn't do it. And considering the "entry level" competitor product is the M4 Max, which starts at $3,000 at minimum (for a 128GB-equipped one), the pricing margin more than covers a few hundred dollars of extra RAM and the overhead of higher-layer-count, more densely populated PCBs.

The real impediment is what you landed on at the end, combined with the broader ecosystem's lack of support. Intel could drop a card that is, by all rights, far better hardware than a competing Nvidia GPU, but Nvidia's dominance over the past 10+ years in APIs, CUDA, networking, and fabric switches (NVLink, Mellanox, BlueField), plus all of the skilled labor familiar with that stack, would largely render a 128GB Arc GPU a dud on delivery, even if it were priced as a steal. The same thing happened with the Radeon VII: a killer compute card that no one used, because while the card itself was phenomenal, the rest of the ecosystem just wasn't there.

Now, if Intel committed to that card, poured their considerable resources into the ecosystem around it, and continued to iterate on that card/family, then we'd be talking. But you can't just 10x the VRAM on a card that's currently a non-player in the GPGPU market and expect anyone in the industry to really give a damn. Raise an eyebrow or make a note to check back in a year? Sure. But raise the issue to get a green light on the corpo credit line? Fat chance.


A 128GB VRAM Intel Arc card at a low price would be an OSS developer’s dream come true. It would be the WRT54G of inference.

Of course. A cheap card with oodles of VRAM would benefit some people; I'm not denying that. I'm tackling the question of whether it would benefit Intel (as the original question was "why doesn't Intel do this"), and the answer is: profit/loss.

There's a huge number of people in that community who would love to have such a card. How many are actually willing and able to pony up >=$3k per unit? How many units would they buy? Given all of the other considerations that go into making such cards useful and easy to use (as described), the answer is, in Intel's mind, nowhere near enough, especially when the financial side of the company's jimmies are so rustled that they sacked Pat G without a proper replacement and installed some finance bros as interim CEOs. Intel is ALREADY taking a big risk and a big financial burden trying to get into this space in the first place, and they're already struggling, so the prospect of betting the house like that just isn't going to fly with finance bros who can't see past the next two quarters.

To be clear, I personally think there is huge potential value in supporting the OSS community to, in essence, "crowdsource" and speedrun some of that ecosystem: supply (compared to the competition) "cheap" cards that eschew the artificial segmentation everyone else is doing, and invest in that community. But I'm not running Intel, so while that'd be nice, it's not really relevant.


I suspect that Intel could hit a $2,000 price point for a 128GB Arc card, but even at $3,000, it would certainly be cheaper than buying 8 A770 LE Arc cards and connecting them to a machine. There are likely tens of thousands of people buying multiple GPUs to run local inference on Reddit's LocalLLaMA, and that is a subset of the market.

In Q1 2023, Intel sold 250,000 Arc cards. Sales then collapsed the next quarter. I would expect sales of such a card to easily exceed that and be sustained. The demand for high-memory GPUs is far higher than many realize. You have professional inference operations, such as the ones listed at openrouter.ai, that would gobble up 128GB VRAM Arc cards for running smaller high-context models, much like businesses gobble up the Raspberry Pi for low-end tasks, and that's without even considering the local inference community.


As an American, I can't recall "the next best thing" ever meaning "second best"; it's always meant the up-and-coming latest and greatest thing.

Big country though, could be a regional thing.


Comparing a WSE-3 to an H100 without considering the systems they go into, or the power, cooling, networking, etc. that support them, means little when doing cost analysis, be it CapEx or TCO. A better (but still flawed) comparison would be a DGX H200 (a cluster of H100-class GPUs and their essential supporting infra) against a CS-3 (a WSE-3 and its essential supporting infra in a similar form factor/volume to a DGX H200).

Now, is Cerebras eventually going to beat Nvidia, or at least compete healthily with Nvidia and the other tech titans in the general market or in a given lucrative niche of it? No idea. That'd be a cool plot twist, but it's hard to say. It is worth acknowledging, though, that investing in a company and buying its products are two entirely separate decisions. Many of Silicon Valley's success stories are the result of people investing in the potential of what a company could become, not because it was already the best on the market, and if nothing else, Cerebras' approach is certainly novel and promising.


> That is to say, I'm not convinced by the article's hypothesis about locking diffs.

I'm not an offroader, but I did own a vehicle without a locking diff that I later upgraded to one (slapped a G80 on the rear of an '80s GMC Sierra), and that made a huge difference even on pavement in inclement weather. Granted, that was a RWD pickup with very little weight (typically) over the drive wheels. I'd honestly be shocked if the impact were minimal in truly offroad conditions. Of course, RWD is a step below even AWD or 4WD, so it's by no means an apples-to-apples comparison; just my 2 cents.

That said, this isn't a binary thing (locking vs open). There's a wide variety of AWD technology out there, and I could nerd out on the specifics, but at the end of the day, some systems are very limited in their ability to send power to one set of wheels vs the other, and may not have locking or limited-slip diffs at all, just brakes used to prevent wheel spin. I will say, Subarus (especially the higher/sportier trims like the WRX/STI) can often hang with and even shame some 4WD vehicles in certain conditions. There's no shortage of videos of Subarus helping a 4WD out of a jam, or completing a course one could not, but how much of that is a function of their specific AWD tech and limited-slip diffs vs proper tires, lighter weight, and any number of other things is a matter of debate I'm not qualified to weigh in on. Again, I'm a gearhead, but not an offroader.

So I suspect it's not so much the park saying "Subaru/AWD can't cut it," but rather that keeping track of, and verifying, which years, brands, models, trims, and/or optional equipment actually do cut it is a much bigger headache than just saying "4WD yes, everything else no," and I can't really fault them for that.


I am not an off-roader by any stretch of the imagination, but I figure AWD works like single-axle drive with a simple diff on each axle, i.e. if one wheel has no traction then all torque goes to that wheel and none goes to the opposite one. I once got stuck on pretty solid pavement in a RWD car when one rear wheel hung off a curb and lost traction; after that the car could not move, as the only wheel getting torque was the one hanging off the ground. I figure the other axle would still get torque on an AWD vehicle, since they usually have some kind of limited-slip center-diff effect from whatever scheme they use to distribute torque between axles, but if you hung a wheel off each axle, would an AWD vehicle become stuck too?


Depends entirely on the AWD system.

Many will do what you're describing: getting a front and a rear wheel off the ground at the same time will leave them stranded. A limited-slip center differential ensures that if one axle loses traction the power goes to the other, but many vehicles cheap out and have open differentials on each axle, meaning that when one wheel on each axle loses traction you're just spinning wheels.

Some vehicles have limited-slip front/rear/front+rear differentials that avoid this issue. Many newer vehicles simply use traction control and the brakes to avoid it: if a wheel is spinning, the system applies the brakes to provide resistance and redirect some torque back to the other wheel.

Like many others are saying, “AWD” is such a broad term as to be basically meaningless.


Depends on the PCIe/DMA topology of the system, but in short: in an ideal system you can avoid the bottleneck of the CPU interconnect (e.g., AMD's Infinity Fabric) and reduce overall CPU load by (un)loading data directly from your NVMe storage to your PCIe accelerator [0]. You can also combine this with RDMA/RoCE (provided everything in the chain supports it) to build a clustered network with NVMe-oF that serves data from high-speed NVMe flash arrays to clusters of GPUs, potentially reducing cost/space/power by lowering the need for expensive, power-hungry CPUs. Prior to CXL's proliferation (which realistically we haven't reached yet), this is mostly limited to bespoke HPC systems; most consumer systems lack the PCIe lanes/topology to make use of it in a practical way.
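
For a concrete sense of what that looks like in practice, here's a minimal, hypothetical sketch using NVIDIA's cuFile (GPUDirect Storage) API from the post linked below [0]. The file path and buffer size are made up and error handling is stripped; it's just meant to show the shape of registering a file and DMA-ing it straight into a GPU buffer, not production code.

    /* Hypothetical sketch: read a file straight into GPU memory via
       cuFile / GPUDirect Storage. Error handling omitted for brevity. */
    #define _GNU_SOURCE            /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <unistd.h>
    #include <cuda_runtime.h>
    #include <cufile.h>

    int main(void) {
        cuFileDriverOpen();                         /* bring up the GDS driver */

        int fd = open("/data/sample.bin", O_RDONLY | O_DIRECT);
        CUfileDescr_t descr = {0};
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t fh;
        cuFileHandleRegister(&fh, &descr);          /* register the file with cuFile */

        void *dev_buf;
        size_t len = 1 << 20;                       /* 1 MiB, purely illustrative */
        cudaMalloc(&dev_buf, len);
        cuFileBufRegister(dev_buf, len, 0);         /* pin the GPU buffer for DMA */

        /* NVMe -> VRAM, no CPU bounce buffer in between (topology permitting) */
        cuFileRead(fh, dev_buf, len, 0, 0);

        cuFileBufDeregister(dev_buf);
        cuFileHandleDeregister(fh);
        cudaFree(dev_buf);
        close(fd);
        cuFileDriverClose();
        return 0;
    }

As I understand it, whether the read actually bypasses the CPU still comes down to that same topology question: on platforms that can't do the peer-to-peer DMA, cuFile falls back to bouncing through system memory.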

On the consumer side, you're right: using system RAM is probably a better approach, as most consumer motherboards route the NVMe storage up through the CPU interconnect and then back "down" to the GPU (or worse, through the "southbridge" chipset(s), like on X570), so you take that hit anyway.

However, if you have a PCIe switch on board that lets data flow directly from storage to GPU without a round trip through the CPU, then NVMe/CXL/SCM modules could theoretically beat system RAM. It depends on the switch, retimers, muxing, topology, etc.

Regardless of what you're using for direct storage and how ideal your topology is, the transfer rates (MT/s and GB/s) you get over PCIe are far lower than on-board VRAM (be it GDDR or especially HBM), and you're bandwidth-limited to boot. That doesn't make it useless by any means, but it's important to point out that this doesn't turn a 20GB VRAM card into a 2.02TB VRAM card just because you DirectStorage'd a 2TB drive to it, no matter how ideal the setup. However, as PCIe bandwidth increases and storage-class-memory devices and storage tech in general continue to improve, it's rapidly becoming more viable. On PCIe Gen 3 you're probably shooting yourself in the foot; on PCIe Gen 6 you can realistically see a very real benefit. But again, there's a lot of "it depends" here, and for now you're probably better off buying a bigger GPU (or more of them) if you're not on the cutting edge with the corporate credit line.
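
To put rough numbers on that (assumed peak figures off the top of my head, not benchmarks; real-world throughput will be lower), here's a quick back-of-the-envelope comparison of how long a hypothetical 20GB working set takes to move over the link versus out of local VRAM:

    /* Back-of-the-envelope only: bandwidth figures are rough, assumed peaks. */
    #include <stdio.h>

    int main(void) {
        const double working_set_gb = 20.0;   /* hypothetical model/working set */
        const struct { const char *path; double gbps; } links[] = {
            { "PCIe 3.0 x16",             16.0 },   /* ~GB/s per direction */
            { "PCIe 5.0 x16",             63.0 },
            { "PCIe 6.0 x16",            121.0 },
            { "GDDR6X (~1 TB/s card)",  1000.0 },
            { "HBM3  (~3.3 TB/s card)", 3350.0 },
        };
        for (int i = 0; i < 5; i++)
            printf("%-26s %7.1f GB/s -> %6.3f s to move %.0f GB\n",
                   links[i].path, links[i].gbps,
                   working_set_gb / links[i].gbps, working_set_gb);
        return 0;
    }

Even a Gen 6 x16 link is roughly an order of magnitude behind HBM, so it's great for paging and streaming data in, but not a substitute for having the working set resident in VRAM.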

0: https://developer.nvidia.com/blog/gpudirect-storage/


The article is paywalled, but I assume that's the reason they specified "creator earners" and not just "content creators" or "influencers".


> Hard to follow someone

IMO: good. The harder it is to shout "look at ME" for clout and profit, the more productive and on-topic the discourse tends to be, and the easier it is for moderation to weed out trolls and off-topic/hateful/spammy discussion. The fact that this place isn't about who any given poster is, but about what they have to say, is part of what makes it such a vibrant, valuable, and informative place to have discussions.

> hard to form groups discussing various topics

Why is this necessary when various topics already tend to get substantial discussion? Sure, some more than others, but that activity forms a rather organic filter without facilitating the echo chambers and mob mentality that tend to emerge when you start erecting walled gardens. That still happens here to an extent, but much less than on, say, Reddit or Twitter.

I fail to see how making this more like the platforms succumbing to the enshittification of the internet is a path to improvement.


I mean, it's a matter of pedantry and subjectivity.

Technically, social media is a superset of all of those things. They're all media platforms that primarily operate around user socialization (aka engagement). They are, by definition, social media, and social media has been around since long before Facebook (e.g., Slashdot, BBSes, Usenet, etc.).

I do agree there's a difference/nuance to be recognized here, though (e.g., old-web vs new-web social interactions). I think a user-vs-content focus kinda misses the mark; the truly key differentiator for me is what influences the activity, both in terms of what drives people to post in the first place and in terms of what curates what they can('t) or should(n't) post. In other words, is the platform a community trying to serve its users, or a company trying to serve its stake/shareholders?


Sam's offers this, actually; it's great.


If the recommended course of action to contribute here is to involve the police and inform them there might be human remains on your property, then I strongly doubt you're going to get many people willing to participate at all. If this is a genuine and serious potential source of fact-finding/analysis that is of value to the field, then the field needs to find a less... let's call it polarizing... option.


I think the other comment is more accurate: this isn’t about polarization, it’s a potential threat to your safety.


Involving the police for something like that is not a threat to your safety in a civilised country. It is, indeed, the best course of action in any country with a functioning police force.


Well, a functioning police force wouldn't mind if you reached out to online communities and paleontologists to verify that those remains are human before going to the police to file a report, I'd assume.

And so, if involving law enforcement could have serious consequences, then the tiles would be better left untouched, no paperwork needed. And if it couldn't, then it's okay not to file the paperwork first.


Yeah, if they said "call the archeology department of your local university to see if they want to document it," I'd totally do that. But I'm not going to call the police, explain what I'm calling about, and potentially open a crime scene investigation in my own home.

Though realistically, I don't expect that the police would even come out or do anything at all. They don't bother to come out for car break-ins, so I don't see them coming out for "I saw something in my new countertop that looks sort of like it could be a 500,000-year-old human fossil."

