Hacker News new | past | comments | ask | show | jobs | submit | metadat's comments login

If you zoom in it's pretty cool how each letter is a matrix of squares, but agreed - very challenging and distracting to try and read when the end result looks interlaced.

Yes, 128x128.

Good for a database, maybe.

What else?


Serving remote desktops to several hundred developers. Maybe a video content server for a netflix or youtube type business. Hosting a large search index? Some kind of scientific computing?

Numeric simulation (HPC). Some, not all, simulations need lots of memory. In 2018 the larger servers running such jobs had 1 TiB, so I'm not the least surprised that six years later it's 16 TiB.

Dumb question but why don’t we see more cracked out high memory machines? I mean like 1 petabyte RAM.

Or do these already exist


I'd think the market share for applications which need a huge amount of memory but little CPU processing power and memory bandwidth is rather small.

Lenovo's slides indicate that they foresee this server being used for in-memory databases.

Weren't there also distributed fs where the meta-data server couldn't be scaled out?


We don't see more of these machines because most tasks are better served by a higher number of smaller machines. The only benefit of boxes like this is having all of that RAM in one box. Very few use cases need that.

Would be fun for a graph db

A half dozen GPT-4 instances

LLM inference processors (GPUs) don't use DDR; they use special, costly stacked HBM RAM mounted on the GPU package.

I tested out running Llama on a 512 GB machine; it's rather slow and inefficient. Maybe 1 token/sec.
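That order of magnitude is roughly what a memory-bandwidth-bound estimate predicts: each generated token has to stream all the weights from RAM. A back-of-envelope sketch (the model size and bandwidth figures below are illustrative assumptions, not measurements from that machine):

```python
# CPU LLM inference is usually memory-bandwidth bound: every decoded token
# streams the full set of weights from RAM. Illustrative assumptions only.

model_params = 70e9        # assume a 70B-parameter model
bytes_per_param = 2        # fp16/bf16 weights
model_bytes = model_params * bytes_per_param  # ~140 GB of weights

mem_bandwidth = 200e9      # assumed ~200 GB/s aggregate server RAM bandwidth

# Upper bound on decode speed: one full pass over the weights per token.
tokens_per_sec = mem_bandwidth / model_bytes
print(f"~{tokens_per_sec:.1f} tokens/sec upper bound")  # ~1.4 tokens/sec
```

So ~1 token/sec is about what the hardware allows, not a software inefficiency.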


Large Language Models.

Qubes OS.

More satellites == more space debris pollution, not really something I'm interested in supporting. Eventually we won't be able to safely get off this rock if there's too much space trash orbiting.

https://en.wikipedia.org/wiki/Space_debris


I was addressing the comment about the economics of lower launch costs, not space debris. Similar to past pollution issues, I think it will be a problem but not a show stopper. There are already global standards for satellite end of life procedures. Most governments require that satellites be able to passivate themselves so that pressure vessels or batteries don't explode and create more debris. Geosynchronous satellites are required to have extra propellant so they can move to a graveyard orbit. Many satellites are put into low orbits so that atmospheric drag will cause them to deorbit within a known time frame. And lower launch costs will make it easier to launch spacecraft that can clean up debris.

Also, reusable spacecraft such as Starship actually reduce the amount of debris created per launch, as most space debris comes from spent upper stages. Of the 25 recent debris producing events listed on Wikipedia[1], 16 were caused by debris that would not be created by a reusable spacecraft (either an upper stage, a payload adapter, or a fairing).

1. https://en.wikipedia.org/wiki/List_of_space_debris_producing...



Thanks! This is the link I was searching for but didn't find.

By that same logic, you'd think humans should not use ships, right? I mean, more ships means more ocean debris?

Unless you specifically send satellites to hunt for debris and bring it back. We have NORAD database of flying objects and Starship possibilities... hmm, I wonder if more satellites == less space debris pollution with such an approach...

What's a database going to do for you when your craft runs into debris? Nothing.

These databases (which include collision risks) are public. Satellite owners use them to make maneuvers so they can avoid getting too close to debris or other satellites. Since these collision risks can be predicted days in advance, it takes very little thrust to prevent them. Even cubesats without propulsion systems can change their orbits, as their orientation affects how much drag they experience.

Agreed, this will generally work up until the Kessler Threshold is reached.

But how will AMD or anyone else push in? CUDA is actually a whole virtualization layer on top of the hardware and isn't easily replicable, Nvidia has been at it for 17 years.

You are right, eventually something's gotta give. The path for this next leg isn't yet apparent to me.

P.s. how much is an exaflop or petaflop, and how significant is it? The numbers thrown around in this article don't mean anything to me. Is this new cluster way more powerful than the last top?


The API part isn't thaaat hard. Indeed HIP already works pretty well at getting existing CUDA code to work unmodified on AMD HW. The bigger challenge is that the AMD and Nvidia architectures are so different that the optimization choices for what the kernels would look like are more different between Nvidia and AMD than they would be between Intel and AMD in CPU land even including SIMD.

Only if the only thing one cares about is CUDA C++, and not CUDA C, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, GPU graphical debugging.

CUDA C works fine with HIP; not sure what you're referring to. As for the other pieces, GPU graphical debugging isn't relevant for CUDA, and I don't know what IDE integration is special / relevant for CUDA, but AMD does have a ROCm debugger which I would imagine would be sufficient for simultaneous debugging of CPU & GPU. You won't get developer tools like Nsight Systems, but I'm pretty sure AMD has equivalent tooling.

As for Fortran, that doesn't come up much in modern AI stuff. I haven't observed PTX / GCN assembly within AI codebases but maybe you have extra insight there.


> P.s. how much is an exaflop or petaflop, and how significant is it? The numbers thrown around in this article don't mean anything to me. Is this new cluster way more powerful than the last top?

Nominally, a measurement in "flops" is how many FLoating-point Operations Per Second the hardware is capable of performing (typically 64-bit operations for supercomputer rankings), so it's an approximate measure of total available computing power.

A high-end consumer-grade CPU can achieve on the order of a few hundred gigaflops (let's say 250, just for a nice round number). https://boinc.bakerlab.org/rosetta/cpu_list.php

A petaflop is therefore about four thousand of those; multiply by another thousand to get an exaflop.

For another point of comparison, a high-end GPU might be on the order of 40-80 teraflops. https://www.tomshardware.com/reviews/gpu-hierarchy,4388-2.ht...
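Those round numbers are easy to sanity-check; a quick sketch using the 250 gigaflop CPU and 80 teraflop GPU figures above (round numbers, not measurements):

```python
# Metric prefixes for flops are plain powers of ten.
GIGA, TERA, PETA, EXA = 1e9, 1e12, 1e15, 1e18

cpu_flops = 250 * GIGA   # high-end consumer CPU (round number from above)
gpu_flops = 80 * TERA    # high-end consumer GPU (round number from above)

print(PETA / cpu_flops)  # 4000.0   -> a petaflop is ~4,000 such CPUs
print(EXA / cpu_flops)   # 4000000.0 -> an exaflop is ~4 million of them
print(EXA / gpu_flops)   # 12500.0  -> or ~12,500 such GPUs
```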


How many teraflops in an exaflop? The tera is screwing me up.. Google not helping today, so many cards.


Anybody spending tens of billions annually on Nvidia hardware is going to be willing to spend millions to port their software away from CUDA.

First they need to support everything that CUDA is capable of in their programming language portfolio, tooling, and libraries.

A typical LLM might use about 0.1% of CUDA. That's all that would have to be ported to get that LLM to work.

Which is missing the point why CUDA has won.

Then again, maybe the goal is getting 0.1% of CUDA market share. /s


Nvidia has won because their compute drivers don't crash people's systems when they run e.g. Vulkan Compute.

You are mostly listing irrelevant nice to have things that aren't deal breakers. AMD's consumer GPUs have a long history of being abandoned a year or two after release.


CUDA C++, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, GPU graphical debugging, aren't only nice to have things.

In the words of Gilfoyle-- I'll bite. Why has CUDA won?

CUDA C++, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, GPU graphical debugging.

Coupled with Khronos, Intel, and AMD never delivering anything comparable with OpenCL, Apple losing interest after Khronos didn't take OpenCL in the direction they wanted, and Google never adopting it, favouring their RenderScript dialect.


For the average non-FAANG company, there's nothing to port to yet. We don't all have the luxury of custom TPUs.

To slower hardware? What are they supposed to port to, ASICs?

If the hardware is 30% slower and 2x cheaper, that's a pretty great deal.

Power density tends to be the limiting factor for this stuff, not money. If it's 30 percent slower per watt, it's useless.

The ratio between power usage and GPU cost is very, very different than with CPUs, though. If you could save e.g. 20-30% of the purchase price that might make it worth it.

E.g. you could run an H100 at 100% utilization 24/7 for one year at $0.40 per kWh (so assuming significant overhead for infrastructure etc.) and that would only cost ~10% of the purchase price of the GPU itself.
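A quick check of that arithmetic (700 W is the H100 SXM TDP; the $0.40/kWh rate, inflated to cover infrastructure overhead, is from the comment, and the ~$25k purchase price is an assumption):

```python
# Electricity cost of a year of 100% utilization vs. GPU purchase price.
# 700 W TDP (H100 SXM); $0.40/kWh and $25k price are assumptions from the thread.

power_kw = 0.7
hours_per_year = 24 * 365          # 8760
price_per_kwh = 0.40
gpu_price = 25_000                 # assumed purchase price

annual_energy_cost = power_kw * hours_per_year * price_per_kwh
print(f"${annual_energy_cost:,.0f}/year")                    # ~$2,453/year
print(f"{annual_energy_cost / gpu_price:.0%} of GPU price")  # ~10%
```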


The cost of power usage isn't the electricity bill so much as the capacity and cooling.

Yes, I know that; hence I quadrupled the price of electricity. Or are you saying that the cost of capacity and cooling doesn't scale directly with power usage?

We can increase that another 2x and the cost would still be relatively low compared to the price/deprecation of the GPU itself.


CUDA is the assembly to Torch's high-level language; for most, it's a very good intermediary, but an intermediary nonetheless, as it is between the actual code they are interested in, and the hardware that runs it.

Most customers care about cost-effectiveness more than best-in-class raw-performance, a fact that AMD has ruthlessly exploited over the past 8 years. It helps that AMD products are occasionally both.


CUDA is much more than that, and missing that out is exactly why NVidia keeps winning.

Again, I have AMD hardware and can't use it.

AMD is to blame for where they stand.

Software will bridge the gap. There are simply too many competing platforms out there that are not Nvidia based. Most decent AI libraries and frameworks already need to support more than just Nvidia. There's a reason macs are popular with AI researchers: many of these platforms support Apple's chips already and they perform pretty well. Anything that doesn't support those chips, is a problem waiting to be fixed with plenty of people working on fixing that. If it can be fixed for Apple's chips, it can also be fixed for other people's chips.

And of course there is some serious amount of money sloshing around in this space. Things being hard doesn't mean it's impossible. And there's no shortage of extremely well funded companies working on this stuff. All your favorite trillion $ companies basically. And most of them have their own AI chips too. And probably some reservations about perpetually handing a lot of their cash to Nvidia.

If you want an example of a company that used to have a gigantic moat that is now dealing with a lot of competition, look at Intel. X86 used to be that moat. And that's looking pretty weak lately. One reason that AMD is in the news a lot lately is that they are growing at Intel's expense. Nvidia might be their next target.


A high-grade consumer GPU (a 4090) is about 80 teraflops. So rounding up to 100, an exaflop is about 10,000 consumer-grade cards' worth of compute, and a petaflop is about 10.

Which doesn’t help with understanding how much more impressive these are than the last clusters, but does to me at least put the amount of compute these clusters have into focus.


You're off by three orders of magnitude.

My point of reference is that back in undergrad (~10-15 years ago), I recall a class assignment where we had to optimize matrix multiplication on a CPU; typical good parallel implementations achieved about 100-130 gigaflops (on a... Nehalem or Westmere Xeon, I think?).


You are 100% correct, I lost a full prefix of performance there. Edited my message.

Which does make the clusters a fair bit less impressive, but also a lot more sensibly sized.


4090 tensor performance (FP8): 660 teraflops, 1320 "with sparsity" (i.e. max theoretical with zeroes in the right places).

https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvid...

But at these levels of compute, the memory/interconnect bandwidth becomes the bottleneck.


According to Wikipedia the previous #1 was from 2022 with a peak petaflops of 2,055. This system is rated at 2,746. So about 33% faster than the old #1.

Also, of the top 10, AMD has 5 systems.

https://en.wikipedia.org/wiki/TOP500
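The "about 33% faster" figure follows directly from the two Rpeak numbers quoted above:

```python
# Peak petaflops (Rpeak) figures quoted from the TOP500 list.
previous_top = 2055   # previous #1, from 2022
new_top = 2746        # current #1

speedup = new_top / previous_top - 1
print(f"{speedup:.0%} faster")  # 34% faster (roughly a third)
```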


> P.s. how much is an exaflop or petaflop

1 petaflop = 10^15 flops = 1,000,000,000,000,000 flops.

1 exaflop = 10^18 flops = 1,000,000,000,000,000,000 flops.

Note that these are simply powers of 10, not powers of 2, which are used for storage for example.
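As a worked example, this also answers the teraflops-in-an-exaflop question upthread:

```python
# Flops prefixes are decimal powers of ten (unlike binary storage prefixes).
FLOPS_PREFIXES = {"giga": 10**9, "tera": 10**12, "peta": 10**15, "exa": 10**18}

# How many teraflops in an exaflop?
print(FLOPS_PREFIXES["exa"] // FLOPS_PREFIXES["tera"])  # 1000000
```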


People have been chipping away at this for a while. HIP allows source-level translation, and libraries like Jax provide a HIP version.

There is ZLUDA to break the lock-in for those who are stuck with it. The rest will use something else.

Isn't porting software to the next generation supercomputer pretty standard for HPC?

It's possible. Just look at Apple's GPU: it's mostly supported by torch; what's left are mostly edge-cases. Apple should make a datacenter GPU :D that would be insanely funny. It's actually somewhat well positioned as, due to the MacBooks, the support is already there. I assume here that most things translate to Linux, as I don't think you can sell macOS in the cloud :D

I know a lot of people developing on Apple silicon and just pushing to clusters for bigger runs. So why not run it on an Apple GPU there?


> Apple should make a datacenter GPU

Aren't their GPUs pretty slow, though? Not even remotely close to Nvidia's consumer GPUs, with the only (significant) upside being the much higher memory capacity.


> what's left are mostly edge-cases.

For everything that isn't machine learning, I frankly feel like it's the other way around. Apple's "solution" to these edge cases is telling people to write compute shaders that you could write in Vulkan or DirectX instead. What sets CUDA apart is an integration with a complex acceleration pipeline that Apple gave up trying to replicate years ago.

When cryptocurrency mining was king-for-a-day, everyone rushed out to buy Nvidia hardware because it supported accelerated crypto well from the start. The same thing happened with the AI and machine learning boom. Apple and AMD were both late to the party and wrongly assumed that NPU hardware would provide a comparable solution. Without a CUDA competitor, Apple would struggle more than AMD to find market fit.


Well, but machine learning is the major reason we use GPUs in the datacenter (not talking about consumer GPUs here). The others are edge-cases for datacenter applications! Apple is uniquely positioned exactly because the problem is already largely solved, thanks to a significant share of ML engineers using MacBooks to develop locally.

The code to run these things on Apple's GPUs exists and is used every day! I don't know anyone using AMD GPUs, but pretty often it's Nvidia on the cluster and Apple on the laptop. So if Nvidia is making these juicy profits, I think Apple could seriously think about moving to the cluster if it wants to.


Software developers using MacBooks doesn't mean Apple solved the ML problem. The past 10 years of macOS removing features have somewhat proved that software developers will keep using Macs even when the feature set regresses. Like how Apple used to support OpenCL as a CUDA alternative, but gave up on it altogether to focus on simpler, mobile-friendly GPU designs.

The Pytorch MPS patches are a fun appeasement for developers, but they didn't unthrone Nvidia's demand. They didn't beat Nvidia on performance per watt, they didn't match their price, their scale or CUDA's featureset, and they don't even provide basic server drivers. It's got nothing to do with what brand you prefer and everything to do with what makes actual sense in a datacenter. Apple can't take on Nvidia clusters without copying Nvidia's current architecture - Apple Silicon's current architecture is too inefficient to be a serious replacement to Nvidia clusters.

If Apple wanted to have a shot at entering the cluster game, that window of opportunity closed when Apple Silicon converged on simplified GPU designs. The 2w NPUs and compute shaders aren't going to make Nvidia scared, let alone compete with AMD's market share.


> But how will AMD or anyone else push in? CUDA is actually a whole virtualization layer on top of the hardware and isn't easily replicable, Nvidia has been at it for 17 years.

Nvidia currently has 80-90% gross margins on their LLM GPUs; that's all the incentive another company needs to invest money into a CUDA alternative.


Maybe the DOJ will come in and call it anti-trust shenanigans.

Not that I would want this...


This is incredible. It's all I ever wanted.

Try using it, it has extra steps and decisions that add friction and are annoying / challenging for end-users.

That's neat, though why do the two metal rails stick out over the windshield?

Also, dcsm is compelling.. are you located in The Bay by chance?

https://igor.moomers.org/posts/secrets-in-docker-compose


I checked a few pages of https://news.ycombinator.com/front?day=2024-11-14

And didn't see anything stand out.

It would be a nice feature. If you feel strongly, email dang: hn@ycombinator.com

p.s. thanks @mtmail for sharing the submission for this case.


This type of solution could be pretty great if you don't drive your car everyday.

Generous of the designer to release it all for free! Directly downloadable from their site:

https://www.dartsolar.com/beta2


Is higher sugar content in fruits and veggies worse for insulin resistance / diabetes?

IOW, should folks who are at risk or otherwise sensitive to sugar avoid these sweeter fruits and vegetables?


I was thinking along very much the same lines. If we are missing the flavors & textures from heirloom fruits and vegetables, why not just go back to heirlooms? If I understand it correctly, we bred out the features we are now going to genetically put back in. A weird, circuitous way to get back to the starting line.
