Hacker News

https://tinygrad.org is trying something around this; currently working on getting AMD GPUs to get on MLPerf. Info on what they're up to / why is mostly here - https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t... - though there are some older interesting bits too.



AMD / Intel should be throwing them tens or hundreds of millions (or just straight-up human hours of work) to make that work. If/when it does, it would mean tens or hundreds of billions of (additional) market cap for them.


So why doesn't AMD invest more in quality software? Everybody knows it's holding them back. Right now a tiny corporation has to jump through hoops to do work AMD could have done at any point in the last decade. Why don't they pull a great team together and just do the work, if the payoff is that big?


I honestly do not know.

Perhaps they think using GPUs for computation is a passing fad? They hate money? Their product is actually terrible and they don't want to get found out (that one might be true for Intel)?


In general it's pretty rare for hardware-first companies to put out good software. To me it looks like there are structural reasons for this: hardware requires waterfall development, which then gets imposed on software, for instance.


They are. In side-by-side tests of FSR and DLSS, most people can't tell the difference, or even pick FSR. Then you tell them which is which and they turn around and say DLSS is better. People are just biased toward Nvidia.

They haven’t had gfx card driver issues in years now and people still say “oh I don’t want AMD cos their drivers don’t work”.


FSR is vastly inferior to DLSS, not sure what you're talking about; even XeSS from Intel is better.

As for driver: https://www.tomshardware.com/news/adrenalin-23-7-2-marks-ret...


I’m not sure what you’re talking about, because in side-by-side testing people can’t tell the difference, with the exception of racing games (though that’s screwed on DLSS 3 too anyway) and of screen grabs you study for differences. So fact is, all the compute in Nvidia cards is a gimmick. If you disagree then you’re wrong. The competitive edge of DLSS is gone.

One bad driver update is not indicative of anything. Nvidia has had bad driver updates too, but you’re not shitting all over them. And running Nvidia’s own drivers on Linux is still a pain point.

(And don’t try to claim I’m an AMD fanboy when I don’t even have any AMD stuff at the moment. It’s all Intel/Nvidia.)


FSR is pretty bad; it's not even close to DLSS, and no one likes FSR. Saying that there is no difference is wrong: just play a game with FSR 2.1 and DLSS 2 or 3, please.


So you drank the Nvidia Kool-Aid. That’s fine.


I have an AMD card, a 6800 XT. Really good card, but FSR is not there yet.


I have a 4070. FSR and DLSS on quality look the same. It’s only noticeable in Forza Horizon. If you notice it in a non-racing game, then you’re looking for the differences.


This thread isn't about gaming, it's about compute.


I replied to a comment about software.


Can I run pytorch on it?


Yes, but you need ROCm which mostly only runs on AMD's professional cards and requires using the proprietary driver rather than the wonderfully stable open source one.
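For what it's worth, the ROCm builds of PyTorch expose AMD GPUs through the familiar `torch.cuda` API, and `torch.version.hip` is set on those builds. A minimal sketch for checking which build you actually have (the `try/except` is just so the snippet degrades gracefully when torch isn't installed):

```python
# Sketch: detect whether the installed PyTorch is a ROCm (HIP) build.
# On ROCm builds, torch.version.hip holds the HIP version string and AMD
# GPUs show up via torch.cuda; on CUDA or CPU-only builds it is None/absent.
try:
    import torch
except ImportError:
    torch = None

def pytorch_backend() -> str:
    if torch is None:
        return "pytorch not installed"
    if getattr(torch.version, "hip", None):
        return f"ROCm/HIP build {torch.version.hip}"
    if getattr(torch.version, "cuda", None):
        return f"CUDA build {torch.version.cuda}"
    return "CPU-only build"

print(pytorch_backend())
```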


ROCm only officially supports a handful of server or workstation cards, but it works on quite a few others.

I've enabled nearly all GFX9 and GFX10 GPUs in the libraries I've packaged for Debian. I haven't tested every library with every GPU, but in my experience they pretty much all work. I suspect that will also be true of GFX11 once we move rocm-hipamd to LLVM 16.
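To expand on "officially unsupported but works": a common trick for consumer cards is telling the ROCm runtime to treat the GPU as a nearby officially supported ISA via the `HSA_OVERRIDE_GFX_VERSION` environment variable. A hedged sketch; the variable is real, but the right value depends on your GPU family (10.3.0 is the value commonly used for RDNA2/GFX10.3 cards, so treat it as an assumption to verify for your own hardware):

```shell
# Make the ROCm runtime report this GPU as gfx1030 (RDNA2 / GFX10.3).
# Pick the value matching your card's architecture family.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# rocminfo (from the ROCm tools) should now list the overridden gfx target.
rocminfo | grep -m1 gfx
```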


Have you tried Star Citizen? It’s why I didn’t buy AMD


Based on Star Citizen telemetry, I don’t know what you have against AMD. It seems to rank the 6800/6900 quite high.


Intel is very much putting its money where its mouth is with SYCL/oneAPI. They are spending a lot of money and advancing a lot faster than AMD, and in many ways it's a better approach (a CUDA-style DSL that is portable across hardware) rather than just another ecosystem.

(To their credit, AMD is also getting serious lately: they put out a listing for something like 30 ROCm developers a few weeks after geohot's meltdown, and at the time they were also in the process of doing a Windows release of ROCm (previously Linux-only) with support for consumer gaming GPUs. The message seems to have finally been received; it's a perennial topic here and elsewhere, and with the obvious shower of money happening, maybe management was finally receptive to the idea that they needed to step it up.)


Didn't they abandon development of AMD related software?


Where did you get that? They just received a shitload of GPUs and it appears AMD is actively cooperating: https://twitter.com/realGeorgeHotz/status/168616581138659737...


I don't remember the exact tweet, but here's one discussion [1]. I guess something changed in the meantime.

[1] https://www.reddit.com/r/Amd/comments/140uct5/geohot_giving_...


[1] links to https://github.com/RadeonOpenCompute/ROCm/issues/2198 which has all the context (driver bugs, vowing to stop using AMD, Lisa Su's response that they're committed to fixing this stuff, a comment that it's fixed)


He had abandoned them for about a week but then talked to the CEO and that got things back on track IIRC


George Hotz: because eventually Elon Musk's false promises and over the top shenanigans just weren't doing it for you anymore...



