I went to buy a Ryzen 7/Vega 10 laptop from Dell last Thursday. By Friday it was gone, and only Ryzen 5/Vega 8 laptops were available, even for the 17" Inspiron. Not "Out of stock" gone, but "This product doesn't exist" gone.
Lenovo makes one with only 12 GB of RAM; Asustek's effort has 8 GB on a single channel. AMD makes the best integrated CPU/GPU for laptops, but you cannot buy it anywhere.
Commentary elsewhere is that Intel is leaning hard on builders not to use the Ryzen 7/Vega 10, or if they use it, to put it in an otherwise shitty spec box that cripples it.
That last sentence sounds a lot like anti-competitive behavior by Intel. On the bright side, it bodes well for AMD that their chips are provoking this kind of response from Intel, but it sucks that this may be driving the lack of AMD laptops available to consumers.
So far today it's at $11.08, which puts it back where it was on March 22. So yes, it's a 30-day high, but only just barely.
I love AMD and I'm rooting for them, but the stock price isn't the news - it's the unexpected sales strength of the Zen architecture. Don't expect Intel to take their success lying down.
This is what I want to see. Intel finally waking up and working hard again and maybe trying to compete on price. Without proper competition the market just goes stagnant.
The most interesting thing is that Intel used to be able to crush their opponents by being one step ahead on process-node improvements, but that advantage has hit a wall, which has given everyone else a chance to catch up.
Also, Meltdown is going to be a problem for them for years.
Intel is vulnerable if they don't get their act together and the competition do.
AMD has been on a roll since Ryzen was released, and as long as time keeps passing without a Meltdown-level problem cropping up, they might have a shot at eating into some of the data center market.
We upgraded our Xeon servers the week before Meltdown hit; if we'd known, we'd have held on another year and gone EPYC.
Meltdown would likely increase Intel sales, not decrease them.
The problem with AMD is that it stopped making server CPUs for 5 years. Epyc was their first release since December 2012, and that is what essentially prevents any serious ramp-up of Epyc currently.
Not having the fastest CPU out there isn't a problem if you can price it correctly, but when you don't give your customers any options to upgrade or grow, you essentially give them only one option, and that is to switch to the competition completely.
If people can't trust that AMD won't abandon them again for half a decade to sort their shit out, they will never take the risk of using them again at scale.
That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be rebuilt from the ground up.
> The problem with AMD is that it stopped making server CPUs for 5 years. Epyc was their first release since December 2012, and that is what essentially prevents any serious ramp-up of Epyc currently.
I don't see why that should be a concern. It's not as though the effect is any different than switching to a new socket. Existing systems can't be upgraded to the newer processors, which is mostly irrelevant anyway because by the time the processor is stale so is the rest of the system.
It's not as if they're different instruction sets. It's perfectly reasonable to buy Opterons in 2009, replace them with Xeon systems in 2014 and then replace those with Epyc systems in 2019.
> That 5-year gap also essentially killed the AMD-optimized software ecosystem and toolset, which now needs to be rebuilt from the ground up.
The Zen microarchitecture isn't based on Bulldozer. Even if they had kept iterating on Bulldozer in the interim, none of that ecosystem work would have been applicable to Zen anyway.
Virtual Machines.
Intel and AMD aren't "compatible": you can't cluster non-homogeneous servers together for thin provisioning, since you can't live migrate between them.
You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux, where you might build specific kernels for each CPU vendor.
Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions.
Does your management stack support DASH? Are your IT peeps familiar with it? Does it even have sufficient traction and market adoption?
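To make the vendor split concrete: a guest can always ask the CPU who made it, and feature detection cascades from there, which is part of why a VM can't be live-migrated across vendors transparently. A minimal sketch, assuming a GCC/Clang toolchain on x86 with the compiler-provided <cpuid.h> (this little program is just an illustration, not part of any migration tooling):

```cpp
#include <cpuid.h>   // GCC/Clang wrapper around the CPUID instruction
#include <cstdio>
#include <cstring>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    // Leaf 0 returns the vendor string in EBX, EDX, ECX (in that order).
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        std::fprintf(stderr, "CPUID not supported\n");
        return 1;
    }

    char vendor[13] = {0};
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);

    // Prints "GenuineIntel" or "AuthenticAMD". Guests, drivers and JITs key
    // their feature detection off this and later CPUID leaves, so a guest
    // moved live between vendors would suddenly see different answers.
    std::printf("CPU vendor: %s\n", vendor);
    return 0;
}
```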
Likely not and that is again because AMD slept for half a decade.
Say you are an architect and you now need to buy 100 servers with an expected yearly growth of 10%: can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?
Heck, even if you aren't going to grow, what about dealing with disasters? Do you really want to compound an already huge risk with another one?
And as for what you said about Zen:
Zen isn't that different from Bulldozer in many aspects; I suggest you read the intrinsics guides for both.
And even if it were 100% different it wouldn't matter: a 5-year gap kills the entire ecosystem of partners, tooling, and education.
If I need to optimize software today for an Intel CPU I have a plethora of resources; AMD can't even release their instruction latency tables for family 17h.
> Virtual Machines. Intel and AMD aren't "compatible": you can't cluster non-homogeneous servers together for thin provisioning, since you can't live migrate between them.
> You essentially need to convert them, and depending on the OS it might be much more than a simple conversion, especially on Linux, where you might build specific kernels for each CPU vendor.
The premise is that you're migrating from one vendor to the other, so once you move something to the other pool it shouldn't have to move back. Having to reboot each guest once is inconvenient, but aren't you already doing this every month or two for security updates?
> Then we have monitoring and remote management: Intel and AMD provide completely different remote management solutions.
This absolutely is AMD's fault, but the real issue is that their remote management solution (like Intel's) is a closed-source black box. If they opened it up, it might be adopted by ARM vendors and others, and no one would have to worry about being abandoned, because the community could keep supporting it for as long as enough people want to use it. It would also put pressure on Intel to do the same thing, at which point the two could be consolidated.
> Say you are an architect and you now need to buy 100 servers with an expected yearly growth of 10%: can you see the risk of dealing with a vendor who previously just threw in the towel and stopped making CPUs?
That would be the case if we were talking about some low volume product at risk of becoming unavailable. You can still source Opteron systems even today if you really want them. But nobody has wanted them for five years because the migration cost isn't that high.
Yes, their stock is slightly more exposed to volatility given its price range. We'd all wondered whether they would get a bump after the rounds of flak Intel's received.
From the little research I've done, though, I'm unsure whether their new CPUs really compete with Intel's. Any anecdotes here?
Well, it looks like Bulldozer was still 60% of their client revenue. Ryzen is selling well, and the deal with Intel was also quite good, but their enterprise figures are still abysmal. Double-digit growth for Epyc, sure, but considering that the majority of that 500 million was still from semi-custom, it looks like Epyc is still struggling to get traction.
The problem is that AMD has lost the trust of the enterprise market because they gave up and stopped making Opterons without even a good reason to do so.
That screwed over their entire customer base, because they had no option to expand or upgrade other than to just replace all of their servers with Intel.
Opteron was the king, and yeah, they were losing the pure performance crown, but in 2012 it wasn't that bad: server software could be optimized and an acceptable performance-per-dollar ratio could've been maintained.
Before Epyc, the last enterprise Opteron AMD released was in December 2012: a 16-core CPU on the 32nm SOI process (technically 8 cores, since it's Bulldozer, but who cares). That's 5 years without an upgrade option.
If I had to bet, the number 1 question AMD receives today about Epyc is "what happens when you stop making CPUs again?", to which they would probably reply "we won't, cuz we can't afford to", followed by the cheeky retort "you couldn't afford it last time either".
Epyc's ramp-up was expected to be slow, but I don't think this slow; the amount of resistance it's experiencing is, I think, above even AMD's internal pessimistic predictions.
I got in at ~$12 two years ago and was down 26% for most of that time (I'm long, and don't care much about intra-day volatility). I hope it will crawl back up, but I'm increasingly bearish on this one.
Without investors pouring money into it, the R&D will stall, and that's not something they can sustain. One of the few reasons they haven't been bought yet is that an acquisition would invalidate their license sharing with Intel. So I guess the other strategy is to simply bleed them dry. I mean, just think about it. They build GPUs, just like NVidia. They build CPUs, just like Intel. They have their own fabrication process, logistics, research, the whole thing. Their chips don't perform much worse, and are even better in the mid-range tier. Yet AMD is valued at ~$9B, instead of 5x or even 10x as much. The people with the money are holding a grudge or something, and I can't see them keeping it up for much longer.
> Without investors pouring money into it, the R&D will stall, and that's not something they can sustain.
- AMD has a superior manufacturing approach for many-core parts compared to its competition. They are able to glue multiple dies together to form their high-core-count chips, while others try to stuff as many cores as possible onto a single die. The problem is that the failure rate during fabrication increases roughly exponentially with die area, and thus with the number of cores on a die (see the toy yield model after this list). Through this approach, AMD has much lower costs when manufacturing many-core chips.
- EPYC has some serious advantages over its main competitor -- 128 PCI-E lanes, and comparable performance on other dimensions, at a lower cost.
- AMD APUs are not something that Nvidia can compete with. The fact that AMD can manufacture both CPUs and GPUs gives them a huge advantage over Nvidia in this space.
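As a rough illustration of that yield argument, here is a toy Poisson defect model. The defect density and die areas below are made-up numbers for illustration, not real foundry data:

```cpp
#include <cmath>
#include <cstdio>

// Toy Poisson yield model: the chance a die has zero fatal defects is
// exp(-area * defect_density), so yield falls off exponentially with area.
int main() {
    const double defect_density = 0.2;   // defects per cm^2 (assumed)
    const double monolithic_cm2 = 4.0;   // one big 400 mm^2 die (assumed)
    const double chiplet_cm2    = 1.0;   // one of four 100 mm^2 chiplets (assumed)

    double monolithic_yield = std::exp(-monolithic_cm2 * defect_density);
    double chiplet_yield    = std::exp(-chiplet_cm2 * defect_density);

    std::printf("monolithic die yield: %.1f%%\n", monolithic_yield * 100);
    std::printf("per-chiplet yield:    %.1f%%\n", chiplet_yield * 100);
    // Small dies can be binned independently before packaging, so the cost
    // per good many-core package ends up far lower than scrapping every big
    // die that catches a single fatal defect.
    return 0;
}
```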
In other words, AMD already has a couple of nice moats. It's not like if they don't come up with a breakthrough within a year, they're screwed.
I'm not sure how you can value it at 5 or 10 times as much when earnings have been in the red since at least 2014. Also, Intel made $10 billion in profit in each of those years.
Yes, but I was being naive and thought the Vega/Epyc/Threadripper stuff, together with Intel's Spectre/Meltdown woes and the crypto craze, would help them more. I know my next machine will be an AMD one, if only just for wishful thinking.
IIRC Jim Keller's title was VP and chief of the microprocessor division, so he was probably more of a technical manager anyway. I believe Suzanne Plummer and Mike Clark were project directors, so they'd have been more hands-on.
The first will be a quicker read than the second ;). The other commenter mentioned SMT was an IBM patent rather than an Intel one, so that's probably the answer to my question. It seems weird that they wouldn't have just licensed that from IBM as well.
I have been watching the dynamics between AMD and Intel recently, and some interesting things are happening. Jim Keller and Raja headed to Intel. Intel announced plans to get into the discrete GPU space (a place riddled with patent landmines). The Kaby Lake-G chip was released with an Intel CPU + AMD GPU. I wonder if the companies are aligning to work closely to counter NVidia, or if something else entirely is at play.
I thought Read was a placeholder CEO who was installed to conduct cost cutting. But you're right - a quick google search confirmed that Keller was hired during Read's tenure.
It wasn't until recently that Dell and HP started offering fully tested EPYC servers in their server lines. Many IT professionals were complaining that EPYC was simply unavailable from their suppliers last year.
The typical server farm waits for OEMs to make well-tested complete machines, so it won't be until this year that EPYC even has a chance to make it into your typical server-room setup.
I tried to get single socket EPYC-based servers for a virtualization host refresh that we were doing last year, and the processors simply weren't available anywhere. _Everyone_ was out of stock indefinitely.
As far as I can tell, unless you were a hyperscaler, EPYC's release was essentially a paper launch up until Q1 2018.
Most of Nvidia's architectural changes have moved it closer to GCN (e.g., adding async compute in Pascal). In other areas, like multiple compute engines (a reason GCN generally has better VR latency numbers), Nvidia still lags behind (not sure if Volta changes that). AMD generally does quite well on compute tasks compared to Nvidia and is still generally much faster for anything using Vulkan.
The big issue with AMD is software and money. Even with the most talented engineers, it takes loads of man-hours to build up an ecosystem. AMD is currently valued at $8.3B; Nvidia currently has $7.1B in cash on hand. With that kind of money, Nvidia will do things like lend AAA game makers some devs to rewrite large parts of a game to optimize it for Nvidia graphics cards, or build up the CUDA ecosystem.
It seems like AMD has hardware bottlenecks (or missing optimizations relative to Nvidia) in the fixed-function geometry processing parts of GCN. The compute benchmarks are always better than the graphics benchmarks. If it were only about per-game optimizations, there would be outliers among less popular games (Doom is a bit of an outlier, but popular). And in any case, the Xbox and PS4 consoles use AMD GPUs, so game devs have reason to optimize for AMD.
Also, AMD has been improving its Windows drivers in the last 2-3 years according to reports. I use AMD GPUs on Linux where the driver codebase is different - the improvements there are huge. So much so that it may make sense for AMD to switch to that stack on Windows in the future.
Their ROCm environment is finally catching up with NVidia's software / CUDA environment. OpenCL was simply a disaster and hampered AMD's role as a serious software solution for years.
I think it's more important to get ROCm fully functional with TensorFlow and other technologies (their current path). AMD can compete on price/performance, selling Vega HBM2 chips at ~$1000 to $2000 (compared to NVidia's $8000 for their HBM2-based V100).
With AMD's drivers properly in the Linux kernel and a nicer licensing model, AMD can achieve a position in the server world. But only if developers start coding in ROCm instead of CUDA.
Honestly, I bet you that software is the bigger issue in the near term. No one wants to code in OpenCL's separate-source C environment. ROCm achieves CUDA parity to some degree with "single source" C++ programming and is beginning to become compatible with TensorFlow.
Sure, they'll be slower than NVidia. But at least your Python machine learning code will actually run. And paying 1/8th the price for ROCm acceleration is fine as long as AMD delivers 1/4th the performance or better. That's how you actually build a value argument.
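Putting rough numbers on that value argument (the prices echo the ones quoted above; the relative-performance figure is purely an assumption for illustration):

```cpp
#include <cstdio>

// Back-of-the-envelope perf-per-dollar comparison with made-up performance.
int main() {
    const double amd_price    = 1000.0;  // Vega HBM2 card (assumed)
    const double nvidia_price = 8000.0;  // V100-class card (assumed)
    const double amd_relative_perf = 0.25;  // AMD at 1/4 the speed (assumed)

    double nvidia_perf_per_dollar = 1.0 / nvidia_price;
    double amd_perf_per_dollar    = amd_relative_perf / amd_price;

    // (0.25 / 1000) / (1.0 / 8000) = 2.0x in AMD's favour: 1/4 the speed at
    // 1/8 the price still wins on throughput per dollar.
    std::printf("perf/$ ratio (AMD / Nvidia): %.1fx\n",
                amd_perf_per_dollar / nvidia_perf_per_dollar);
    return 0;
}
```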
OpenCL is not a disaster at all. It is just that NVidia were (and still are) too scared to have people move away from their proprietary solutions, so they tried to hide OpenCL as much as they could and only pushed CUDA.
Even today OpenCL is a viable solution for GPU compute. It works fine on both AMD and NVidia GPUs. It is also pushed a lot by Intel for FPGAs, which probably scares NVidia even more.
OpenCL kernels are compiled at runtime, which is brilliant since you can change the kernel code at run time, use constants in the code at the last moment, unroll loops, etc., which can give better performance. (Nvidia only introduced runtime compilation as a preview in CUDA 7!)
The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL by putting the code in strings.
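To make that runtime-compilation point concrete, here is a rough sketch using the standard OpenCL host API; the kernel, the SCALE_FACTOR macro, and the numbers are invented for illustration, and error handling is mostly omitted:

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <string>

// The kernel lives in a plain string and is compiled by the driver at run
// time, so values only known at run time can be baked in as constants.
static const char* kSource = R"(
    __kernel void scale(__global float* data) {
        data[get_global_id(0)] *= SCALE_FACTOR;   // injected via -D below
    }
)";

int main() {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);

    // Decided at the last moment -- could come from a config file, a
    // measurement, user input, etc.
    float scale = 1.5f;
    std::string options = "-DSCALE_FACTOR=" + std::to_string(scale) + "f";
    err = clBuildProgram(prog, 1, &device, options.c_str(), nullptr, nullptr);
    std::printf("build %s\n", err == CL_SUCCESS ? "ok" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}
```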
> It is also pushed a lot by Intel for FPGAs, which probably scares NVidia even more.
These tools aren't available to the vast majority of developers, and are still exceptionally difficult to use and maintain without hardware engineers. I'm going to assume you haven't used FPGAs at all? The ones that can compete with GPUs at the same tasks are not as easily available in terms of price, volume, or even over-the-counter availability (be prepared to ask for a lot of quotes), and the tools have only become more accessible very recently -- such as Intel slashing the FPGA OpenCL licensing costs, and Dell EMC shipping them in pre-configured rack units.
> Nvidia only introduced runtime compilation as a preview in CUDA 7
In the meantime, Nvidia also completely dominated the market by actually producing many working middleware libraries and integrations, a solid and working programming model, and continuously refining and delivering on core technology and GPU upgrades. Maybe those things matter more than runtime compilation and speculative claims about peak performance...
> The "single source" argument is completely overrated.
Even new Khronos standards like SYCL (built on OpenCL, which does look promising; I'm hoping AMD delivers a toolchain after they get MIOpen more fleshed out) are moving to the single-source model. It's not even that much better, really, but development friction and cost of entry matter more than anything, and Nvidia understood this from day one. They understood it with GameWorks, as well. They plant specialist engineers "in the field" to accelerate the development and adoption of their tech, and they're very good at it.
This is because their core focus is hardware and selling hardware; it's thus in their interest to release "free" tools that require little effort to buy into, do as much of the dirty integration work as possible, and basically give people free engineering power -- because it drives their hardware sales. They basically subsidize their software stack in order to drive GPU sales.
> Furthermore, you can have single source in OpenCL by putting the code in strings.
I'll probably need to be more specific. OpenCL 1.0 through 1.2 is fine, but fell hopelessly behind NVidia's CUDA efforts. NVidia CUDA has more features that lead to proven performance enhancements.
OpenCL 2.0 was the "counterpunch" to bring OpenCL up to CUDA-level features. However, OpenCL 2.0 is virtually stillborn. Only Intel and AMD platforms support OpenCL 2.0, and Intel's Xeon Phi is relatively niche (its primary advantage seems to be x86 code compatibility anyway, so I doubt you'd be running OpenCL on it).
AMD's OpenCL 2.0 support exists, but is rather poor. The OpenCL 2.0 debugger is simply non-functional, and you're forced to use printfs, lol.
That leaves OpenCL 1.2. It's okay, but it is years behind what modern hardware can do. Its atomic + barrier model is strange compared to proper C++11 atomics, and it's missing important features like device-side queuing, shared virtual memory, and a unified address space (no more copy/pasting code just to go from "local" to "private" memory), among other very useful features.
> Even today OpenCL is a viable solution for GPU compute
OpenCL 1.2 is a viable solution. An old, crusty, and quirky solution, but viable nonetheless. OpenCL 2.0+ is basically dead. And I think only Intel Xeon Phi supports the latest OpenCL 2.2.
I bet you there are more Vulkan compute shaders out there than there is OpenCL 2.0 code. Indeed, there are rumors that Khronos is going to focus on Vulkan compute shaders in the future.
> The "single source" argument is completely overrated. Furthermore, you can have single source in OpenCL by putting the code in strings.
I like my compile-time errors to happen at compile time, not at run time on my client's system. Compiler bugs in AMD drivers are fixed through device driver updates (!!!), which makes practical deployment of plain-text OpenCL source code far more of a hassle in practice.
Consider this horror story: a compiler bug in some AMD device driver versions that causes a segfault on some hardware versions. This is not theoretical: https://community.amd.com/thread/160362.
In practice, deploying OpenCL 1.2 code requires you to test all of the device drivers your client base is reasonably expected to run.
-----
But that's not the only issue.
"Single Source" means that you can define a singular structure in a singular .h file and actually have it guaranteed to work between CPU-code and GPU-code. Data-sharing code is grossly simplified and is perfectly matched.
The C++ AMP model (which has been adopted into AMD's ROCm platform) is grossly superior. You specify a few templates and bam, your source code automatically turns into CPU code OR GPU-code. Extremely useful when sharing routines between the CPU and GPU (like data-packing or unpacking from the buffers)
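For what it's worth, here's a rough single-source sketch using HIP (ROCm's CUDA-style C++ dialect) rather than the C++ AMP front end specifically; the struct, helper, and kernel names are invented, and it just illustrates one helper shared by host and device:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// One struct and one helper, defined once; in a real project they'd live
// in a shared .h file and be used from both CPU and GPU code.
struct Pixel { float r, g, b; };

__host__ __device__ float luminance(const Pixel& p) {
    return 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
}

__global__ void to_luminance(const Pixel* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = luminance(in[i]);   // same helper the host uses
}

int main() {
    const int n = 256;
    Pixel host_in[n];
    for (int i = 0; i < n; ++i) host_in[i] = {1.0f, 0.5f, 0.25f};

    Pixel* dev_in; float* dev_out;
    hipMalloc((void**)&dev_in, n * sizeof(Pixel));
    hipMalloc((void**)&dev_out, n * sizeof(float));
    hipMemcpy(dev_in, host_in, n * sizeof(Pixel), hipMemcpyHostToDevice);

    hipLaunchKernelGGL(to_luminance, dim3(1), dim3(n), 0, 0, dev_in, dev_out, n);

    float host_out[n];
    hipMemcpy(host_out, dev_out, n * sizeof(float), hipMemcpyDeviceToHost);
    // The CPU can call the exact same function for a reference value.
    std::printf("gpu=%f cpu=%f\n", host_out[0], luminance(host_in[0]));

    hipFree(dev_in);
    hipFree(dev_out);
    return 0;
}
```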
With that said, AMD clearly cares about OpenCL, and the ROCm platform looks like it strongly supports OpenCL through the near term, especially OpenCL 1.2, which seems to have a big installed codebase.
However, if I were to start a project these days, I'd do it in ROCm's HCC single-source C++ system or CUDA. OpenCL 1.2 is useful for broad compatibility but has major issues as an environment.
Interesting. I'll take your anecdote for what it's worth.
My personal experience with OpenCL didn't go very well. I was testing on my personal R9 290X. While I didn't have the crashing / infinite-loop bugs (see LuxRender's "Dave" for details: http://www.luxrender.net/forum/viewtopic.php?f=34&t=11009) that other people had, my #1 issue was the quality of AMD's OpenCL compiler.
In particular, the -O2 flag would literally break my code: I was doing some bit-level operations, and those operations were just wrong under -O2. Meanwhile, the -O0 flag was so unoptimized that my code was regularly spilling registers to global memory, at which point the CPU was faster at executing it and there was no point in using OpenCL / GPU compute.
It seems like AMD's OpenCL implementation assumes kernels will be very small and compact, and it seems to be better designed for floating-point ops. Other programmers online have also complained about AMD's bit-level operations returning erroneous results under -O2. My opinion of the compiler was... rather poor... based on my limited exposure, and further research seems to indicate that I wasn't the only one having these issues.
I've only done, and still do, floating point for image processing. In fact, looking into my logs, I've filed 5 bugs with NVidia in the last 2 years and none with AMD.
Just because they can trade blows with Intel in the CPU space doesn't automatically mean they have the engineering talent to take on Nvidia in GPUs.
I didn't say they have the talent (which they might have anyway). I said they should compete, which obviously means they should find the needed talent if they don't have it.
I'm old enough to remember the excitement over AMD's K6 processors, which were quickly overtaken in speed by Intel for a decade. I just hope that AMD continues to be competitive.
The Celeron 300A could hit 450 MHz and barely break a sweat. Of course, I discovered that after splashing big cash on a P2 450. 50% overclocks are unheard of today.