AMD's Stock Price Jumps on News of Earnings Spurred by Ryzen (hothardware.com)
203 points by artsandsci on April 26, 2018 | 75 comments



Congrats AMD, I hope you can sustain it for a few years (or decades)! We need real competition in the CPU space! ;-)


And GPU space, too.


They have some work to do there ;-) I can't use them for Deep Learning at all, just for mining.


They’re getting there with MIOpen etc. It’ll take some more time, but I’m sure they can pull it off. In terms of raw perf their GPUs are comparable.


This benchmark is worth looking at as well: blog.gpueater.com/en/2018/04/23/00011_tech_cifar10_bench_on_tf13/


I went to buy a Ryzen 7/Vega 10 laptop from Dell last Thursday. By Friday, it was gone, and only Ryzen 5/Vega 8 laptops are available, even for the 17" Inspiron. Not "Out of stock" gone, but "This product doesn't exist" gone.

Lenovo makes one with only 12GB of RAM; Asustek's effort is 8GB, a single channel. AMD makes the best integrated CPU/GPU for laptops but you cannot buy it anywhere.

Commentary elsewhere is that Intel is leaning hard on builders not to use the Ryzen 7/Vega 10, or if they use it, to put it in an otherwise shitty spec box that cripples it.


That last sentence sounds a lot like anti-competitive behavior by Intel. On the bright side, it bodes well for AMD that their chips are provoking this type of response from Intel, but it sucks that this may be driving the lack of AMD laptops available to consumers.


There are many in the pipeline. HP is going to launch a bunch in June.

AMD said they expect 25 Ryzen laptops to launch in Q2 and 60 by year's end.


I just hope there's one or two high-end models.

Also, AMD chipsets seem to use more power than Intel's. I'd love to see more work in that direction.


AMD's share price is still 75% below the peak it achieved after the release of AMD64/AMD X2.


Give them time to breathe; they almost went under during the ATi/APU era.


Is it even at a 30-day high? It was around $13 at the beginning of 2018.


So far today it's at $11.08, which puts it back where it was on March 22. So yes, it's a 30-day high, but only just barely.

I love AMD and I'm rooting for them, but the stock price isn't the news - it's the unexpected sales strength of the Zen architecture. Don't expect Intel to take their success lying down.


This is what I want to see. Intel finally waking up and working hard again and maybe trying to compete on price. Without proper competition the market just goes stagnant.

The most interesting thing is that Intel used to be able to crush their opponents by being one step ahead on node size improvements, but that's hit a wall and it has given everyone else a chance to catch up.


Also, Meltdown is going to be a problem for them for years.

Intel is vulnerable if they don't get their act together and the competition does.

AMD has been on a roll since Ryzen was released, and as long as time keeps passing without a Meltdown-level problem cropping up, they might have a shot at eating some of the data center business.

We upgraded our Xeon servers the week before Meltdown hit; if we'd known, we'd have held on another year and gone EPYC.


Meltdown would likely increase Intel sales, not decrease them.

The problem with AMD is that it stopped making server CPUs for 5 years. Epyc was their first release since December 2012, and that is what essentially prevents any serious ramp-up of Epyc currently.

Not having the fastest CPU out there isn't a problem if you can price it correctly, but when you don't give your customers any options to upgrade or grow, you essentially give them only one option, and that is to switch to the competition completely.

If people can't trust that AMD won't abandon them again for half a decade to sort their shit out they will never take the risk of using them again at scale.

That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be built from the ground up again.


> The problem with AMD is that it stopped making server CPUs for 5 years. Epyc was their first release since December 2012, and that is what essentially prevents any serious ramp-up of Epyc currently.

I don't see why that should be a concern. It's not as though the effect is any different than switching to a new socket. Existing systems can't be upgraded to the newer processors, which is mostly irrelevant anyway because by the time the processor is stale so is the rest of the system.

It's not as if they're different instruction sets. It's perfectly reasonable to buy Opterons in 2009, replace them with Xeon systems in 2014 and then replace those with Epyc systems in 2019.

> That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be built from the ground up again.

The Zen microarchitecture isn't based on Bulldozer. Even if they had kept iterating on Bulldozer in the interim, none of that ecosystem work would have been applicable to Zen.


Your thinking is too narrow.

Let’s take simple examples

Virtual Machines. Intel and AMD aren’t “compatible”: you can’t cluster heterogeneous servers together for thin provisioning, since you can’t live-migrate between them.

You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux, where you might use specific kernels for each CPU vendor.

Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions. Does your management stack support DASH? Are your IT peeps familiar with it? Does it have sufficient traction and market adoption? Likely not, and that is again because AMD slept for half a decade.

Say you are an architect and you need to buy 100 servers with expected yearly growth of 10%. Can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?

Heck even if you aren’t going to grow what about dealing with disasters? Do you really want to compound an already huge risk with another one?

And about what you said about Zen: Zen isn't that different from Bulldozer in many respects; I suggest you read the intrinsics guides for both.

And even if it were 100% different, it wouldn't matter: a 5-year gap kills the entire infrastructure of partners who provide tools and education. If I need to optimize software for an Intel CPU today, I have a plethora of resources; AMD can't even release their instruction latency tables for family 17h.


> Virtual Machines. Intel and AMD aren’t “compatible”: you can’t cluster heterogeneous servers together for thin provisioning, since you can’t live-migrate between them.

> You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux, where you might use specific kernels for each CPU vendor.

The premise is that you're migrating from one vendor to the other, so once you move something to the other pool it shouldn't have to move back. Having to reboot each guest once is inconvenient, but aren't you already doing this every month or two for security updates?

> Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions.

This absolutely is AMD's fault, but the real issue is that their remote management solution (like Intel's) is a closed-source black box. If they opened it up, it might be adopted by ARM vendors and so on, and no one would have to worry about being abandoned, because the community could continue to support it for as long as enough people want to keep using it. And it would put pressure on Intel to do the same thing, at which point they could be consolidated.

> Say you are an architect and you need to buy 100 servers with expected yearly growth of 10%. Can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?

That would be the case if we were talking about some low volume product at risk of becoming unavailable. You can still source Opteron systems even today if you really want them. But nobody has wanted them for five years because the migration cost isn't that high.


Migrating VMs between vendors isn't just "reboot and you're done."

And as far as sourcing Opterons, are you serious? Sure, you can source them on eBay, but take a look at when the likes of HP stopped supplying them.


Yes, their current dollar range is slightly more exposed to volatility given its price range. We’d all wondered whether they would get a bump after the rounds of flak Intel’s received.

From the little research I’ve done though I’m unsure whether their new CPUs really compete with Intel. Any anecdotes here?


Well, it looks like Bulldozer was still 60% of their client revenue. Ryzen is selling well and the deal with Intel was also quite good, but their enterprise figures are still abysmal: double-digit growth for Epyc, sure, but considering that the majority of those $500 million was still from semi-custom, it looks like Epyc is still struggling to get traction.


Quite expected, too. Server markets tend to buy rarely, conservatively, and in large volumes at a time.

It will take a few new data centres to nudge it.


The first part of your sentence is true, the second isn't.

Intel couldn't make Skylake Xeons fast enough; it became their fastest ramp-up of any Xeon release to date.

https://www.nextplatform.com/2017/12/01/booming-server-marke...

The problem is that AMD has lost the trust of the enterprise market because they gave up and stopped making Opterons without even a good reason to do so.

That screwed up their entire customer base because they had no option to expand or upgrade other than just replace all of their servers with Intel.

Opteron was the king, and yeah, they were losing the pure performance crown, but in 2012 it wasn't that bad: server software could be optimized and an acceptable performance-per-dollar ratio could've been maintained.

Before Epyc, the last enterprise Opteron AMD released was in December 2012: a 16-core CPU on a 32nm SOI process (technically 8 cores, since it's Bulldozer, but who cares). That's 5 years without an upgrade option.

If I were to bet, the number one question AMD gets today about Epyc is "what happens when you stop making CPUs again?", to which they probably reply "we won't, cuz we can't afford to," followed by the cheeky retort "you couldn't afford it last time either."

The Epyc ramp-up was expected to be slow, but I don't think this slow; the amount of resistance it's experiencing is, I think, beyond even AMD's internal pessimistic predictions.


I got in at ~$12 two years ago and was down 26% for most of that time (I'm long, and don't care much about intra-day volatility). I hope it will crawl back up, but I'm increasingly bearish on this one.

Without investors pouring money into it, R&D will stall, and that's not something they can sustain. One of the few reasons they haven't been bought yet is that an acquisition would invalidate their license sharing with Intel. So I guess the other strategy is to simply bleed them dry. I mean, just think about it. They build GPUs, just like NVidia. They build CPUs, just like Intel. They have their own fabrication process, logistics, research, the whole thing. Their chips don't perform much worse, and are even better in the mid-range tier. Yet AMD is valued at around $9 billion, instead of 5x or even 10x as much. The people with the money are holding a grudge or something, and I can't see them keeping it up for much longer.


> Without investors pouring money into it, R&D will stall, and that's not something they can sustain.

- AMD has a superior approach to manufacturing many-core chips. They are able to glue multiple dies together to form their high-core-count parts, while others try to stuff as many cores as possible onto a single die. The problem is that the failure rate during fabrication grows roughly exponentially with die area, and hence with the number of cores on a die. Through this innovation, AMD has much lower costs when manufacturing many-core chips (rough yield math sketched below).

- EPYC has some serious advantages over its main competitor -- 128 PCI-E lanes, and comparable performance on other dimensions, at a lower cost.

- AMD APUs are not something that Nvidia can compete with. The fact that AMD can manufacture both CPUs and GPUs gives them a huge advantage over Nvidia in this space.

In other words, AMD already has a couple of nice moats. It's not like if they don't come up with a breakthrough within a year, they're screwed.
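The yield argument in rough numbers: this is just a toy Poisson-style model with made-up defect density and die sizes (not AMD's or GlobalFoundries' actual figures), but it shows why stitching small dies together wins.

    // Toy model: P(die is good) ~ exp(-defect_density * die_area).
    // All numbers below are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double defects_per_cm2 = 0.2;              // assumed defect density
        const double chiplet_cm2     = 2.0;              // assumed 8-core die area
        const double mono_cm2        = 4 * chiplet_cm2;  // monolithic 32-core die

        double y_chiplet = std::exp(-defects_per_cm2 * chiplet_cm2);  // ~0.67
        double y_mono    = std::exp(-defects_per_cm2 * mono_cm2);     // ~0.20

        // Wafer area you have to fab per good 32-core product:
        double area_chiplets = 4 * chiplet_cm2 / y_chiplet;  // a defect only scraps one small die
        double area_mono     = mono_cm2 / y_mono;            // a defect scraps the whole big die

        std::printf("per good part: chiplets %.1f cm^2 vs monolithic %.1f cm^2\n",
                    area_chiplets, area_mono);               // ~11.9 vs ~39.6
        return 0;
    }

Real cost models also count the salvage value of partially defective dies sold as lower-core-count SKUs, which tilts things even further toward the multi-die approach.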


I'm not sure how you can value it at 5 or 10 times as much when earnings have been in the red since at least 2014. Also, Intel made $10 billion in profit in each of those years.


AMD does not have its own fabrication process. They spun off their fabrication business as GlobalFoundries in 2009.


But hasn’t it been that way for many, many years now?


Yes, but I was being naive and thought the Vega/Epyc/Threadripper stuff, together with Intel's Spectre/Meltdown problems and the crypto craze, would help them more. I know my next machine will be an AMD one, if only for wishful thinking.


Invalidation of the license sharing with Intel would mean no more 64-bit x86 Intel processors. Not going to happen.


Thanks to Jim Keller.


That's a common misperception. Michael Clark played a larger role in Ryzen: https://venturebeat.com/2016/08/24/how-amd-designed-what-cou...


IIRC Jim Keller's title was VP and chief of the microprocessor division, so he was probably more of a technical manager. I believe Suzanne Plummer and Mike Clark were project directors, so they'd have been more hands-on anyway.


Sure but also thanks to a few patents having expired, especially that one about hyper threading[1].

edit:

https://patents.google.com/patent/US3728692A/en

https://patents.google.com/patent/US3771138

[1] Should have said simultaneous multithreading (SMT) instead of hyper threading which is specific to Intel.


Weren't those patents part of the AMD-Intel cross-licensing deal, or are there some x86-CPU-related patents that are kept separate?


Those are 1971 IBM patents.

It would be interesting to know anything more about the AMD/Intel agreements than the vague "x86 cross licensing," though, for sure.


https://www.zdnet.com/article/intel-and-amd-settle-agree-cro...

https://www.sec.gov/Archives/edgar/data/2488/000119312509236...

The first will be a quicker read than the second ;). The other commenter mentioned SMT was an IBM patent rather than an Intel one, so that's probably the answer to my question. It seems weird that they wouldn't have just licensed that from IBM as well.


The IBM patents have been expired for decades though. There's nothing to licence.

But that SEC link is awesome; I thought the agreement was completely confidential and we'd only ever see it in a historical context. Thanks!


Before Ryzen, AMD's CPUs relied on Cluster-based Multithreading (CMT). You can read more about CMT vs. SMT here:

https://scalibq.wordpress.com/2012/02/14/the-myth-of-cmt-clu...


And he has decided to join Intel.

https://news.ycombinator.com/item?id=16933931


He did just quit Tesla after a short time there and joined Intel.


I have been watching the dynamics between AMD and Intel recently, and some interesting things are happening. Jim Keller and Raja headed to Intel. Intel announced plans to get into the discrete GPU space (a place riddled with patent landmines). The Kaby Lake-G chip was released with an Intel CPU + AMD GPU. I wonder if the companies are aligning to work closely to counter Nvidia, or if something else entirely is at play.


> Wonder if the companies are aligning to work closely to counter NVidia

I mean, yeah, that's actively happening, as can also be seen by the Intel/Vega combo.


He was the architect of Ryzen.


I'm aware.


And Lisa Su.


Agreed. Su had the smarts to get a performant chip made and hire the people to make it happen. You don't get market share with an also-ran offering.


Didn't Rory Read hire them and start the projects?


I thought Read was a placeholder CEO who was installed to conduct cost cutting. But you're right: a quick Google search confirmed that Keller was hired during Read's tenure.


FWIW my next “office” PC is based on an AMD Ryzen APU. Price/perf of that can’t currently be beat by anything.


Why hasn't EPYC picked up steam? It's been a year since release, and some say they were already selling as many as they could. GF capacity problems again?


It wasn't until recently that Dell and HP were offering fully tested EPYC servers in their server lines. Many IT professionals were complaining that EPYC was simply unavailable from their suppliers last year.

The typical server farm waits for OEMs to make well-tested complete machines, so it won't be until this year that EPYC even has a chance to make it to your typical server room setup.


I tried to get single socket EPYC-based servers for a virtualization host refresh that we were doing last year, and the processors simply weren't available anywhere. _Everyone_ was out of stock indefinitely.

As far as I can tell, unless you were a hyperscaler, EPYC's release was essentially a paper launch up until Q1 2018.


AMD should put a good effort into GPUs too, and replace GCN with a brand-new, efficient architecture.


Most of Nvidia's architectural changes have moved it closer to GCN (e.g., adding async compute in Pascal). In other areas, like multiple compute engines (a reason GCN generally has better VR latency numbers), Nvidia still lags behind (not sure if Volta changes that). AMD generally does quite well on compute tasks compared to Nvidia and is still generally much faster for anything using Vulkan.

http://ext3h.makegames.de/DX12_Compute.html

The big issue with AMD is software and money. Even with the most talented engineers, it takes loads of man-hours to build up an ecosystem. AMD is currently valued at $8.3B; Nvidia currently has $7.1B cash on hand. With that kind of money, Nvidia will do things like lend AAA game makers some devs to rewrite large parts of a game to optimize it for Nvidia graphics cards, or build up the CUDA ecosystem.


It seems like AMD has hardware bottlenecks (or missing optimizations relative to Nvidia) in the fixed-function geometry processing parts of GCN. The compute benchmarks are always better than the graphics benchmarks. If it was only optimizations in games, there would be outliers in less popular games (Doom is a bit of an outlier but popular). And in any case, Xbox and PS4 consoles use AMD GPUs so game devs have reason to optimize for AMD.

Also, AMD has been improving its Windows drivers in the last 2-3 years according to reports. I use AMD GPUs on Linux where the driver codebase is different - the improvements there are huge. So much so that it may make sense for AMD to switch to that stack on Windows in the future.


ATi hasn't really competed with Nvidia at the top end of the market for a very long time.

They've basically thrown in the towel as far as the datacenter goes, where CUDA won.


So they should make an effort to compete more.


They're behind in a number of areas.

Their ROCm environment is finally catching up with NVidia's software / CUDA environment. OpenCL was simply a disaster and hampered AMD's role as a serious software solution for years.

I think it's more important to get ROCm fully functional with Tensorflow and other technologies (their current path). AMD can compete on price/performance, selling Vega HBM2 chips at ~$1000 to $2000 (compared to NVidia's HBM2-based V100 at $8000).

With AMD's drivers properly in the Linux kernel and a nicer licensing model, AMD can achieve a position in the server world. But only if developers start coding in ROCm instead of CUDA.

Honestly, I bet that software is the bigger issue in the near term. No one wants to code in OpenCL's separate-source C environment. ROCm achieves CUDA parity to some degree with "single source" C++ programming and is beginning to become compatible with Tensorflow.

Sure, they'll be slower than NVidia. But at least your Python machine learning code will actually run at all. And paying 1/8th the price for ROCm acceleration will be fine as long as AMD is 1/4th the performance or better. That's how you actually build a value argument.


OpenCL is not a disaster at all. It is just that NVidia were (and still are) too scared to have people move away from their proprietary solutions, so they tried to hide OpenCL as much as they could and only pushed CUDA.

Even today OpenCL is a viable solution for GPUs. It works fine on both AMD and NVidia GPUs. It is also pushed a lot by Intel for FPGAs, which probably scares NVidia even more.

OpenCL kernels are compiled at runtime, which is brilliant since you can change the kernel code at run time, use constants in the code at the last moment, unroll, etc., which can give better performance (sketched below). (Nvidia only introduced runtime compilation as a preview in CUDA 7!)

The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL putting the code in strings.


> It is also pushed a lot by Intel for FPGAs, which probably scares NVidia even more.

These tools aren't available to the vast majority of developers, and are still exceptionally difficult to use and maintain without hardware engineers. I'm going to assume you haven't used FPGAs at all? The ones that can compete with GPUs at the same tasks are not as easily available in terms of price, volume, or even over-the-counter availability (be prepared to ask for a lot of quotes), and the tools have only become more accessible very recently -- such as Intel slashing the FPGA OpenCL licensing costs, and Dell EMC shipping them in pre-configured rack units.

> Nvidia only introduced runtime compilation as a preview in CUDA 7

In the meantime, Nvidia completely dominated the market by actually producing many working middleware libraries and integrations, a solid and working programming model, and by continuously refining and delivering on core technology and GPU upgrades. Maybe those things matter more than runtime compilation and speculative claims about peak performance...

> The "single source" argument is completely overrated.

Even new Khronos standards like SYCL (built on OpenCL, which does look promising, and I'm hoping AMD delivers a toolchain after they get MIOpen more fleshed out) are moving to the single-source model. It's not even that much better, really, but development friction and cost of entry matter more than anything, and Nvidia understood this from day one. They understood it with GameWorks, as well. They plant specialist engineers "in the field" to accelerate the development and adoption of their tech, and they're very good at it.

This is because their core focus is hardware and selling hardware; it's thus in their interest to release "free" tools that require low effort to buy into, do as much dirty integration work as possible, and basically give people free engineering power -- because it drives their hardware sales. They basically subsidize their software stack in order to drive GPU sales.

> Furthermore, you can have single source in OpenCL by putting the code in strings.

This is a joke argument, right?


> OpenCL is not a disaster at all.

I'll probably need to be more specific. OpenCL 1.0 through 1.2 is fine, but fell hopelessly behind NVidia's CUDA efforts. NVidia CUDA has more features that lead to proven performance enhancements.

OpenCL 2.0 was the "counterpunch" to bring OpenCL up to CUDA-level features. However, OpenCL 2.0 is virtually stillborn. Only Intel and AMD platforms support OpenCL 2.0. Intel's Xeon Phi is relatively niche (and its primary advantage seems to be x86 code compatibility anyway, so I doubt you'd be running OpenCL on it).

AMD's OpenCL 2.0 support exists but is rather poor. The OpenCL 2.0 debugger is simply non-functional, so you're forced to use printfs (lol).

That leaves OpenCL 1.2. It's okay, but it is years behind what modern hardware can do. Its atomic + barrier model is strange compared to proper C++11 atomics, and it's missing important features like device-side queuing, shared virtual memory, and a unified address space (no more copy/paste code just to go from "local" to "private" memory), among other very useful things.

> Even today OpenCL is a viable solution for GPUs

OpenCL 1.2 is a viable solution. An old, crusty, and quirky solution, but viable nonetheless. OpenCL 2.0+ is basically dead, and I think only Intel's Xeon Phi supports the latest OpenCL 2.2.

I bet there are more Vulkan compute shaders out there than there are OpenCL 2.0 kernels. Indeed, there are rumors that Khronos is going to focus on Vulkan compute shaders in the future.

> The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL putting the code in strings.

I like my compile-time errors to happen at compile time, not at run time on my client's system. Compiler bugs in AMD drivers are fixed through device driver updates (!!!), which makes practical deployment of plain-text OpenCL source code far more of a hassle.

Consider this horror story: a compiler bug in some AMD device driver versions that causes a segfault on some hardware. This is not theoretical: https://community.amd.com/thread/160362.

In practice, deploying OpenCL 1.2 code requires you to test all of the device drivers your client base is reasonably expected to run.

-----

But that's not the only issue.

"Single Source" means that you can define a singular structure in a singular .h file and actually have it guaranteed to work between CPU-code and GPU-code. Data-sharing code is grossly simplified and is perfectly matched.

The C++ AMP model (which has been adopted into AMD's ROCm platform) is grossly superior. You specify a few templates and bam, your source code automatically turns into CPU code OR GPU code. Extremely useful when sharing routines between the CPU and GPU (like data packing/unpacking for the buffers); see the sketch at the end of this comment.

With that said, AMD clearly cares about OpenCL, and the ROCm platform looks like it strongly supports OpenCL through the near term, especially OpenCL 1.2, which seems to have a big codebase.

However, if I were starting a project today, I'd do it in ROCm's HCC single-source C++ system or CUDA. OpenCL 1.2 is useful for broad compatibility but has major issues as an environment.
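To make the "single source" point concrete, here's a minimal sketch in the HIP dialect (the CUDA-style single-source layer that ships with ROCm alongside HCC); the struct, kernel, and sizes are made up for illustration:

    // One struct and one helper, shared verbatim by host and device code.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    struct Sample { float x, y; };

    __host__ __device__ float weight(const Sample& s) { return 0.5f * s.x + s.y; }

    __global__ void score(const Sample* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = weight(in[i]);   // same helper the CPU can call
    }

    int main() {
        const int n = 256;
        Sample h_in[n]; float h_out[n];
        for (int i = 0; i < n; ++i) h_in[i] = { float(i), 1.0f };

        Sample* d_in; float* d_out;
        hipMalloc((void**)&d_in, n * sizeof(Sample));
        hipMalloc((void**)&d_out, n * sizeof(float));
        hipMemcpy(d_in, h_in, n * sizeof(Sample), hipMemcpyHostToDevice);

        hipLaunchKernelGGL(score, dim3(1), dim3(n), 0, 0, d_in, d_out, n);

        hipMemcpy(h_out, d_out, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("host %.1f vs device %.1f\n", weight(h_in[3]), h_out[3]);
        hipFree(d_in); hipFree(d_out);
        return 0;
    }

Both the type layout and the weight() routine get checked by one compiler at compile time, which is exactly the data-packing/unpacking sharing described above; the OpenCL 1.2 equivalent means duplicating the struct inside a kernel source string and hoping the two definitions stay in sync.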


The point I really wanted to make here is that OpenCL is only a disaster because NVidia was scared of the competition it would bring from AMD.


I'm sure NVidia deserves some blame.

But AMD drivers that cause OpenCL compiler segfaults and/or infinite loops are a problem that rests squarely on AMD's shoulders.


I have used OpenCL extensively on both AMD and NVidia for a few years and never had such problems. If anything, I found a few more bugs with NVidia.


Interesting. I'll take your anecdote for what it's worth.

My personal use case with OpenCL didn't seem to be going very well. I was testing on my personal R9 290X. While I didn't have the crashing / infinite-loop bugs (see LuxRender's "Dave" for details: http://www.luxrender.net/forum/viewtopic.php?f=34&t=11009) that other people had, my #1 issue was the quality of AMD's OpenCL compiler.

In particular, the -O2 flag would literally break my code. I was doing some bit-level operations, and those operations were just wrong under -O2, while the -O0 flag was so unoptimized that my code was regularly spilling registers into / out of global memory, at which point the CPU was faster and there was no point in using OpenCL / GPU compute.

It seems like AMD's OpenCL implementation assumes that kernels will be very small and compact, and it seems to be better designed for floating-point ops. Other programmers online have also complained about AMD's bit-level operations returning erroneous results under -O2. My opinion of the compiler was... rather poor... based on my limited exposure, and further research seems to indicate that I wasn't the only one having these issues.


I only did (and do) floating point for image processing. In fact, looking into my logs, I registered 5 bugs with NVidia in the last 2 years, and none with AMD.


That's an incredibly ignorant comment.

Just because they can trade blows with Intel in the CPU space doesn't automatically mean they have the engineering talent to take on Nvidia in GPUs.


I didn't say they have the talent (which they might have anyway). I said they should compete, which obviously means they should find the needed talent if they don't have it.


I'm old enough to remember the excitement over AMD's K6 processors, which were quickly overtaken in speed by Intel for a decade. I just hope that AMD continues to be competitive.


I'm not sure what you mean.

From the time AMD released the K7 'Athlon' CPU until the time Intel released Core 2, AMD was king of performance.

That's a long span of time.

To understand why Intel kept selling CPUs like pancakes despite this, check this out: https://www.youtube.com/watch?v=osSMJRyxG0k


I was a kid so I might be wrong, though I seem to remember there was some crazy overclockable Celeron that was killing K6, right?


Yep. And you could even run a pair of them in SMP with delicate modifications.

I used to have a 440BX board with a pair of 300As clocked to 450MHz and 128MB of SDRAM. An excellent poor man's workstation.


The Celeron 300A could hit 450MHz and barely break a sweat. Of course, I discovered that after splashing big cash on a P2 450. 50% OCs are unheard of today.



