That is the problem: AMD is not taking full advantage of Intel's misstep. In that regard, I think AMD's sales and marketing department is nowhere near Intel's.
I think it has to do with AMD being way too conservative with their estimates and not booking enough 7nm capacity.
Now we know Amazon is using that 7nm capacity for AWS as well, and given Amazon has shown they intend to have all their SaaS running on their ARM CPUs, I think their orders are going to be big.
Edit: One thing I forgot: it could also be a GF problem, where they can't keep up with the I/O die.
They cut prices, but that was not what I was saying.
They don't have to undercut AMD to sell CPUs if they can deliver more volume. They may cut prices by a total of $3 billion, but they start from such high profit levels ($21 billion) that they can outmuscle AMD financially and stay profitable.
You can only play financial trickery with obsolete tech for so long, and Intel's entire product stack is dead in the water except for low-cost consumer-class CPUs with the lowest real-time latencies in the world (and those are mostly used for gaming).
At no price point are they competitive with AMD's offerings, and they have kind of cursed themselves with their legacy of inflated, unchanging prices: cutting now tears apart a decade of profit margins and consumer confidence just to keep their product stack relevant. C-levels at Intel must still be dissatisfied with this outcome.
The retail market is where the developers are. Once you win developer mindshare, it turns into a trickle-up exercise, and before you know it a single developer workstation turns into a datacenter.
There are only two companies capable of fabricating the most advanced processor technologies: TSMC and Intel. The 10nm disaster at Intel has meant that its entire product lineup has had to compete internally for 14nm fab space, greatly constraining supply there. Meanwhile, GlobalFoundries' decision to stop chasing smaller nodes means that practically everybody else has had to compete for space at TSMC, so smartphones, GPUs, CPUs, TPUs, etc. are all vying for the same fabrication capacity. Furthermore, it is extremely expensive ($10 billion or so) to build a new fab, so there are few companies that could even afford to throw money at TSMC to expand production.
Put more succinctly: there is literally not enough semiconductor fabrication supply today to meet demand, and the supply is very inelastic.
I believe that AMD relies on another company to produce its CPUs, which may limit supply. If they can't produce enough and there is still demand, then there will be no choice but to go with Intel. This is some guesswork on my part, since I don't have any economics background.
I don't think it is purely a TSMC supply issue. To put it bluntly: AMD isn't used to selling this many CPUs. Zen 1/+ brought them performance parity with Intel (something AMD had been lacking for over a decade). It's only with Zen 2 / the 3000 series that AMD has had both core count and single-threaded performance. Before, you had to choose between lots of slower cores or fewer, faster cores.
I do wonder if the 3900X/3950X shortage is at least partially manufactured (they're so hard to get, they must be good, right?). Intel halved their prices pre-launch, which is something that has never happened before, but they still can't match AMD on price/performance.
AMD has both good CPUs and good GPUs (something that is very rare!). As an owner of an RX 5700, I'm surprised how low the power usage is. If they put a cut-down RX 5000 on a low-power CPU, they could make some decent laptops that don't need dedicated GPUs (memory speed is a bottleneck, especially on a laptop with a single memory channel). That could pose a problem for Intel. I'm actually a little surprised it hasn't been more of a priority for AMD; all Intel can do to compete right now is literally give their CPUs/APUs away (better to make a loss than lose a customer to AMD). Next year will be a good time to buy a laptop. It looks as though Zen 3 will first launch on laptops, and that may pose a problem for Intel...
Honestly, I feel that Intel could have the better laptop chips if they get 10nm into a better state. And I say this after building a 3600X + RX 5700 XT desktop this year.
Intel's R&D budget is several times larger than AMD's revenue, so they'll always be able to squeeze a little more out of their current chips.
Could just be a timing issue though. Intel might abandon their current 10nm leaving AMD as the default winner by mid/late 2020.
Disclaimer: I'm long on AMD and wishing I bought more 5 years ago.
> Intel's R&D budget is several times larger than AMD's revenue, so they'll always be able to squeeze a little more out of their current chips.
But Intel has to compete with TSMC. Since TSMC is making Apple, AMD, Nvidia, and Qualcomm chips, TSMC's R&D budget seems to have won out this generation.
TSMC amortizes its R&D, capex, and operating costs over Apple, Qualcomm, Nvidia, Broadcom, Xilinx, etc.
Apple and Qualcomm paid for most of the heavy 7nm R&D and capex long before AMD's 7nm production commitments. AMD basically got the 7nm production advantage without taking on fab-investment risk like Intel. It was good that AMD's board had the foresight to divest the fabs to GlobalFoundries a few years back.
Intel has to amortize its fab costs over Altera and its x86 CPUs, and the Altera part is probably trivial. Intel's huge 10nm investment in R&D and capex depends entirely on one x86 product line to recover. When they executed well, as in the 15 years before 2017, it was a big advantage. With any execution misses, the fab R&D and capex costs become a liability.
"Only the Paranoid Survive" - Last few years, Intel was definitively not Paranoid enough.
I do remember hosting on consumer CPUs being a thing a decade ago, but it declined rapidly after n-core server chips took over because they provided a better price per core.
Now I believe things may go the other way around, as consumer chips are gaining a lot in core count.
The DIY retail market for processors is extremely tiny. Very high margin, but insignificant.
AMD's challenge is getting enough processors out in the enterprise market. They need to grow their market share now, while they have the technical advantage. Intel can sell their inferior CPUs at a higher price because they can deliver large volumes and satisfy demand.
Revenue comparison 2018:
Intel $70.8 billion
AMD $6.5 billion
AMD is fabless, so they are in competition with other GF and TSMC customers for manufacturing capacity: Apple, Nvidia, Qualcomm, and many others.
Indeed. I'm trying to find a suitable vendor here in Germany (as a small software business), but all I can find when I look for Ryzen-powered PCs are small local dealers, usually with very mixed reviews and questionable support. I have always bought Dell, and while I would not want to rely on their support for any software issues, at least whenever we had trouble with hardware I could get it replaced easily and quickly. The smaller dealers don't even have decent bundled support options, nor guarantees similar to what I can get from the "big guys". The reviews accordingly show a lot of variation and randomness (basically, all those who have no issues with their purchase give five stars, but those needing support, even for issues clearly caused by the vendor, have a good chance of ending up with a nightmare support story).
So while I still want a Ryzen PC, I also want five years of 24-hour hardware support, and I feel more confident getting that from Dell than from some small local dealer for whom any support issue is a big hit to their own bottom line and whose employees are under more pressure than those working for a big vendor. The big ones are still "Intel only", apart from >$2,000 gaming PCs. I usually buy ca. $1,000 PCs for developers: good components, large RAM and SSD, and a minimal graphics card for video only. Hardware-accelerated decoding is nice to have, but too powerful a card wastes energy for no reason, and even its idle power draw is much higher; a GTX 1050 Ti running off PCIe slot power alone is perfect, as an example. No gaming ever.
PS: Any Germans here who found a solution for needs such as mine?
Depending on how many you need, you should consider building them yourself. I just ordered a new Zen2-based build from alternate.de in parts and I think that's the way to go.
That's far too much detail for me. When I look at the "PC configurator" I feel like I'm drowning in details. I would need a week to research all those components. The CPU is easy (a Ryzen 5 3600 seems appropriate), but next comes the motherboard, and I'm already lost. Never mind the 100 different cases, and the list goes on and on. I'm also not so sure that whatever components I select will really work flawlessly together.
I run a PC recommender that is basically made for you: parts that fit together, with an optimizing algorithm selecting an optimal configuration for your price point. I changed this configuration a bit manually given your statements: https://www.pc-kombo.com/share/mX6w2675
This will fit together and is stronger than with a GTX 1050 Ti. If you really want a GPU powered only by the PCIe slot, step down to a GTX 1050, or look at the (rather overpriced) GTX 1650 (though it depends on the model there). My email is in my profile if you have questions.
Damn... I looked for a PC configurator for several hours two weeks ago and eventually settled on a 3800X. Your site managed to get me a 3900X within my 1,200 EUR budget (with 64 GB of RAM). Oh well, I will buy another PC in the next six months. Definitely bookmarked!
I have put together many PCs myself, and honestly many parts aren't that important.
The power supply, motherboard, SSD, and RAM will "just work", given you buy a recent product from a known brand. The +/- 5% of whatever will not make a difference to most people.
For the mobo, think of the features you must have (built-in wireless?) and that will direct you.
Oh please don't do built-in wifi.
It's ok to have the adapter inside, but you should put the antennas outside. Include them in the design, if you wish.
To clarify, these are vetted parts lists from a leading German computer magazine, complete with optional upgrades, videos to aid with the build process, and even detailed BIOS settings.
Which also means they can jump to whatever the current hot stuff is in the fab world, letting the big fabs (which are few in number, but still >1) compete on technical merits.
Whereas Intel has a corporate need to use whatever fabs they've invested in, and to be anchored by whatever the limitations of their fabs are. They aren't really forced into that (they could just get TSMC to make some stuff for them), but they're Intel, so they'll just make 10+++++++.
Don't you mean 14++++++? They can still barely get 10nm working in laptops, much less desktops or servers. By the time they do, AMD will be far ahead of them process-wise.
They don't jump. They usually make long-term deals for each process and design their microarchitecture for that process. AMD moved all 7nm CPU and GPU production to TSMC, and they are tied to them for the duration of 7nm.
The question is whether they can get all the capacity they need at a price they can afford. It seems like high-margin customers like Apple and Nvidia are always served first because they can pay more.
In effect, AMD spends very little money and die area on the expensive 7nm process, while leveraging their GloFo 14nm contracts to cheaply make the I/O and memory controllers.
----------
The 7nm chiplet is a single design: mass produced for EPYC, Threadripper, and Ryzen. The 14nm I/O die is what differentiates between EPYC, Threadripper, and Ryzen.
The I/O die can have 2x memory controllers (Ryzen), 4x memory controllers (Threadripper), or 8x memory controllers (EPYC)... supporting 2x dies, 4x dies, or 8x dies as appropriate.
It's not really "rumored" so much as it's confirmed. The I/O dies are built at GlobalFoundries: Epyc's larger I/O die is on GF's 14nm, and the consumer Ryzen I/O dies are on GF's 12nm. https://www.anandtech.com/show/14525/amd-zen-2-microarchitec...
The fact that Intel sells an inferior product at a higher price means that AMD has a supply bottleneck.
I'm almost certain that AMD takes every sale from Intel when they can deliver. They have the superior product, but not enough capacity to reap all the benefits of their success.
> The fact that Intel sells an inferior product at a higher price means that AMD has a supply bottleneck.
Only if Intel is also _perceived_ as the inferior product. Remember that the value to the customer goes beyond raw benchmarking specs; it also includes brand reputation and marketing.
At the level of truly huge purchases, "value" also encompasses a real bilateral relationship between the company and the vendor. A cloud provider might be tempted to stay with Intel over AMD now if by doing so they preserve guaranteed access or preferential pricing on the next set of server chips. To crack that hypothetical relationship, AMD would have to both dominate now and be perceived as likely to dominate for the medium-term future.
It's only the retail market that can be so fickle as to consider a CPU a one-off purchase.
Sure they do. That's where the saying "nobody ever got fired for buying IBM" comes from. If anything enterprise is more brand sensitive and less price sensitive than retail customers.
Not to mention there is the entire ecosystem surrounding the CPU parts themselves: the chipsets, the boards, laptop availability. There are also things like vendor lock-in to a particular out-of-band management technology. Getting better benchmark scores and saving a few hundred per CPU is only part of a much larger matrix for making a purchasing decision.
Due to the need to reduce complexity and deal with fewer vendors, most companies use a limited set of suppliers for datacenter equipment as well, like Dell or HPE.
Datacenter customers care about support, vendor relationships, keeping complexity down, politics, reliability (= aversion to new products), lots of things. If anything, it's the custom PC market that is focused solely on specs and prices.
Yes, they absolutely do, and AMD vs Intel is more than a matter of branding. It's fundamentally switching vendors. If something goes wrong, it's on the head of the person who pushed for the change. There are also some things you can't do seamlessly between two servers that don't have processors from the same brand (live VM migration, for instance).
Enterprise, if anything, has more inertia than, say, the custom PC market, which will turn on a dime to chase the best deals.
> The fact that Intel sells an inferior product at a higher price means that AMD has a supply bottleneck
Except they don't? There's a reason Intel's flagship HEDT part, the i9-10980XE, is half the price of the part it's replacing despite almost nothing changing about the chip itself. And at the same time, AMD is raising the prices of its HEDT parts.
And it's not just the HEDT parts: Intel also cut the prices on a bunch of Xeon chips a month or two ago, as well as on their consumer stack.
> The fact that Intel sells an inferior product at a higher price means that AMD has a supply bottleneck.
...or that Intel controls supply to retail vendors by other means. If a vendor is bound to keep putting Intel products in their stores to comply with supply agreements made in the past, for example.
AMD's mobile and APU line generally runs about a year behind desktop, so expect to see Zen 2 cores in mobile in mid-2020, numbered as Ryzen 4xxx.
Zen 2 combined with an integrated GPU much better than Intel's is likely going to make a very nice dent in market share, unless Intel pulls a rabbit out of their hat.
There are no rabbits in the hat until late 2021, when 7nm comes to market. The 10nm process is completely broken. Watch how the majority of the roadmap is still on 14nm into 2021; there will never be a desktop 10nm, for example. The Rocket Lake desktop chips in 2021, still on 14nm, will drop down to eight cores to cram the new GPU architecture into desktop chips at a truly ridiculous 125 W. The Ryzen 9 3900 today does 65 W with 12 cores, and you can easily squeeze a 1650 into the remaining 60 W.
Now, TDP doesn't really reflect output and can't be compared well across brands. But a 2x difference in TDP is a lot more than the different ways of calculating it between AMD and Intel.
> There are no rabbits in the hat until late 2021, when 7nm comes to market. The 10nm process is completely broken. Watch how the majority of the roadmap is still on 14nm into 2021; there will never be a desktop 10nm, for example.
From what I heard from a man close to Intel's process crowd, Intel will simply slap a 7nm marketing designation on their current 10nm process when they finally get it going.
They will then change the design rules to deliver enough of a density change, comparable to a node shrink, without any change in the process.
Design rules don't change because marketing says so.
It is reasonable to begin early processor development on a new process with conservative design rules (to be sure that everything works and yields are acceptable) and then rework some components, pushing the envelope a little more, if tests allow it; but such improvements are going to be small "without any change in the process". Maybe the same specifications on a slightly smaller die to reduce costs, or a slightly higher-clocked or lower-power SKU. Or nothing at all, because the tooling costs for marginally improved revised processors aren't justified.
Well, from that roadmap I see a 6-core Comet Lake U (high-end mobile) set for Q2 2020; we don't know how many cores AMD is planning for Zen 2 mobile. If AMD sticks to 4 and Intel has 6, that's going to swing some buyers. The (leaked) Intel roadmaps tend to have more specifics than the AMD ones, though.
For as long as the ultrabook design remains popular (and it shows no signs of waning), the high-end CPU options will be riding the edge of their thermal limits during sustained use. OEMs aren't going to suddenly start over-building their cooling solutions just to help out marginally with a niche use case. As long as we don't get back to the problems from a decade ago with dying mobile GPUs, there's not really anything wrong with having CPUs that boost up to the thermal limits of the system form factor.
If someone comes out with a laptop CPU that can't boost up to those thermal limits, it means the chip's undersized and that vendor will probably need a different microarchitecture for the desktop or server markets.
> OEMs aren't going to suddenly start over-building their cooling solutions just to help out marginally with a niche use case.
A friend at a laptop engineering company has worked on this exact problem recently. Chinese OEMs are all trying to squeeze 35-45 W chips into small chassis now.
To my big surprise, doing so even in thin-bezel 13-inch models is actually not that big of a deal. Big OEMs simply never bothered to try before.
You have to take into consideration what kind of workloads lead to throttling. Laptops are usually not used for the kind of tasks that keep a CPU fully loaded for several minutes or hours at a time. People who do use laptops in that manner are a tiny fraction of the market, and when they experience throttling that does not have any bearing on whether the cooling system of an ultrabook is adequate for the kinds of more typical workloads it is actually designed for.
There have been some ultrabook-style designs that offered inadequate cooling even for fairly normal use cases, but that's a separate issue. Mainstream laptops will be designed around mainstream workloads, and heavier workloads will push them to their limits. Better cooling doesn't come free, and if it doesn't benefit mainstream workloads it's unreasonable to expect mainstream laptops to put more emphasis on cooling capabilities.
Right. I get the sense that most developers browse with adblock on, but the average user experience is for their computer to be effectively running Prime 95 during regular web browsing.
Sorry, but if I buy a six-core laptop, I'm not going to be in the casual notepad-user category.
In many laptops, thanks to bad thermals, I'd be better off with a 4-core whose cooling can keep up. That's where the 7nm stuff could really bring advantages.
I've been able to load up my six-core desktop plenty using e.g. Docker and a bunch of microservices. It has a fairly decent 360mm AIO water cooler, so it stays pinned at max perf. I had a bad cooler before, though, and it really impacted performance and stability.
The point still stands that most people buying these machines are generally not running them at 100% CPU usage for extended periods of time; their usage is much more bursty, with short periods at full power separated by longer periods of idling or low power. This gives the CPU plenty of time to cool down in between bursts.
If OEMs optimize for that use case, I suspect that a more efficient CPU will simply mean that they cut even more corners on the cooling, not that the thermals will actually be significantly better.
It's sad that this is the norm. Aluminum is cheap; a bigger heatsink in a regular laptop (not an ultrabook) should cost what, a dollar more? Yet laptops are never designed for full load over hours, just "mainstream" use. Even business "workstations" have the same problem. I've had to do hardware mods or undervolting on all my laptops. WTF.
If only the industry stopped for one second pursuing angstrom-thick laptop designs in favor of more thermally efficient ones. Laptops could have their CPU and chipset facing downward, in contact with a bottom cover made entirely of aluminium. A second aluminium upper shell with small, thick fins facing outward would, when closed, work as a sturdy cover to protect the lid carrying the screen; when opened, it could be removed and attached to the lower one to increase thermal exchange with the environment.
People obsessed with the thinnest hardware wouldn't touch it with a 20-meter pole, but those in need of serious performance and mobility would probably find it interesting.
So, have the bottom cover be the heatsink? That actually sounds brilliant.
HP thinned the ZBook series by turning everything upside down and having the bottom be just a dumb panel instead of the main frame.
Sadly they, once again, used a standard, barely capable heatsink. It will run for days loaded, but it will go over 90 degrees and even throttle, which is unacceptable IMO.
Yes, the upper cover should be designed so that, when mounted with the fins facing down, it matches perfectly with the lower one. Cutouts should be arranged so that the bottom cover's rubber feet don't prevent perfect surface contact. It would become a fairly large heatsink whose size and combined thickness would likely be enough to compensate for the small fins and the absence of fans.
Battery/disk/memory covers on the bottom side would be accessible by removing the additional cover.
"You have to take into consideration what kind of workloads lead to throttling."
I'm driving four displays at work with Windows 10 (two 21.5" 1080p monitors, my laptop flipped open, and an iPad Pro 12.9 connected via USB-C running Duet Display), and my idle desktop CPU utilization hovers around 15-20%. Having Outlook and Chrome open gets it into the mid-30s. This is a four-core i7 Dell Latitude 7490 with 16GB of memory and an NVMe drive that I was given in May 2019.
Yes, all the OS/applications I'm using are resource hogs, but I'm not even doing software development; this is all business analyst work. Seeing that applications and OSes will keep trending toward being resource hogs, let's hope that a six-core thermal chassis design for 14" ultrabooks is figured out in the next two or three years.
Throttling is what happens when your CPU stays at 100% for a long time. Perhaps you could describe the part of your workload that exhibits that behavior, rather than describe a workload that is obviously not causing thermal throttling?
We're talking about the current state of the real market here, not abstract hypotheticals. Laptops overheating at 30% CPU usage is not a widespread issue in the real world; to a first approximation, the only way to get an ultrabook's CPU to thermally throttle is to keep at least one of its cores completely busy so that the processor stays in its boost state long enough to pump out serious thermal energy. Bursty workloads give the CPU too many opportunities to cool off.
I think AMD is still (slightly) behind in power consumption, especially at idle. Some tests suggest that this is heavily dependent on the motherboard's chipset, though. But power consumption is probably very important for the industry and for notebook devices.
That said, AMD becoming a worthy competitor on the market again is awesome.
I recently bought a new laptop and actually wanted AMD, but none of the laptops I was interested in were available with one. It appears that high-end laptops are still ruled by Intel. Do AMDs run too hot for mobile or something?
It's not about heat. More probably, vendors have a contract with Intel that makes Intel parts more expensive if they are not bought exclusively (it will be worded differently, because Intel has already been sued for this practice).
I recently bought a Thinkpad E495 with a Ryzen 3700U, and thermals are just fine. The surface barely heats up under sustained load. The E series is more budget, though. The T495 (14") or X395 are of higher build quality (and price) and also have Ryzen 3x00U CPUs.
Typing this on my x395 (with 3700u) and can confirm these are really well made laptops.
Linux support is excellent. OpenBSD is good, but doesn't support this generation of wireless cards yet, which is quite inconvenient; next version, perhaps.
Full integrated Vega support landed in June 2018 (kernel 4.17, Mesa 18 for 3D); as long as you have these or newer, plus firmware for your card on the system, you'll get full acceleration.
I've been running Debian Testing on a 2400G since 04/2018, back when I had to add the firmware and compile the kernel with config changes to enable the support, though it was already in the kernel tree. It has worked out of the box since 07/2018, way before the release happened.
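For anyone repeating that exercise on an old tree: the config side was small. A sketch from memory, so treat the exact option names as approximate (they shifted a bit between kernel versions):

    CONFIG_DRM_AMDGPU=m
    CONFIG_DRM_AMD_DC=y           # the new display core; Raven Ridge APUs need it
    CONFIG_DRM_AMD_DC_DCN1_0=y    # Raven's display engine
    # plus the raven_* firmware blobs under /lib/firmware/amdgpu/

After that, the stock amdgpu module picks up the hardware like any other card.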
At least on Arch, X doesn't even start on 4.19 (the linux-lts package), but works fine on 5.2+ (the linux package); probably also on some kernels in between, but I only bought the laptop recently.
Likely, specific support for the 3700U was introduced at some point between those two kernels.
AIUI 5.4 is meant to become the next LTS, so it won't be an issue going forward.
There are the amdgpu-pro drivers, which nobody uses; they're partially open source, as I understand it.
The point is that AMD releases detailed documentation for every GPU they sell, and thus it's very easy for Mesa to support their hardware; AMD themselves also contribute code to Mesa, of course. This is in contrast to NVIDIA, which even uses encryption and signatures to thwart nouveau's efforts. I avoid NVIDIA entirely for this reason.
I haven't had issues so far. Life has been good, same as with the Vega 64 in my workstation.
GPU-accelerated graphics (2D, 3D, and video codecs) on both Linux (5.2+, probably earlier) and OpenBSD (6.6+), with open-source drivers (kernel DRM, userspace Mesa 3D and xf86-video-amdgpu).
If you get tearing, try:
xrandr --output eDP --set TearFree on
to enable the anti-tearing workaround. This shouldn't be necessary on a modern composited desktop, but I do need it with a simpler i3 setup to keep YouTube videos from tearing. mpv seems not to need it.
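If you'd rather not run xrandr on every login, the same option can be set persistently in the Xorg config. A minimal sketch, assuming the xf86-video-amdgpu driver (the file name is just a convention):

    # /etc/X11/xorg.conf.d/20-amdgpu.conf
    Section "Device"
        Identifier "AMD Graphics"
        Driver     "amdgpu"
        Option     "TearFree" "true"
    EndSection

The driver then renders through a shadow buffer and flips on vblank, at the cost of a little memory and latency.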
My guess is that laptops probably have a much tighter integration with the whole system, and so it takes more effort to change the CPU. It might take a few years of consistent performance from AMD to convince the laptop manufacturers to invest in the engineering to do that integration with a new CPU.
The T495, T495s, and X395 are the AMD laptops. I would guess the P line doesn't have an AMD option because AMD doesn't have a high-end laptop CPU line like Intel does.
USB4 is essentially Thunderbolt 3. Since TB3 also defines how a USB 1-3 connection can be pushed over the wire, that's what most devices can fall back to.
So you're telling me a USB microphone needs to implement HDMI and DisplayPort if it wants to be compliant? Or are these devices expected to stay on lower USB versions? What if a device needs some of the features from USB4 but it makes no sense for it to have Thunderbolt features?
> So you're telling me a USB microphone needs to implement HDMI and DisplayPort if it wants to be compliant? Or are these devices expected to stay on lower USB versions? What if a device needs some of the features from USB4 but it makes no sense for it to have Thunderbolt features?
All USB Type-C connectors have a pair of USB 2.0 wires in them. You can still be a USB 2.0 device and talk on those USB 2.0 pairs under USB3 (and presumably USB4) just fine with a little bit of care (get your resistors correct on the CC lines, for example).
If you want some features of USB3 or USB4, things get a little more complicated. For example, if you want to draw slightly higher power (900mA or 1.5A), then you need some active circuitry on the CC lines (one of these is about 70 cents: https://www.ti.com/product/TUSB320HAI), and you have to respond properly when the system tells you to draw less power, even though you don't need the full-blown USB3 communication pairs.
If, however, you want USB3 or USB4 speeds or very high power, then you need a full-blown controller chip, and you incur all the grief that demands. Of course, if you actually need a couple of Gbps, you're in the realm of doing serious signal-integrity analysis anyway, and you're probably not going to balk at the $3-$5 required for a true controller chip to handle it all.
The only new feature in USB4 is Thunderbolt, so yes, if a device doesn't need Thunderbolt features it should stay on an older version. Of course, older USB versions will end up rebranded as something like "USB 4.0 1x1 High Speed".
I bought a 3900X also. I, too, would like a 3950X, but the 3900X is so fantastic it's hard to be too upset. I also have 4 Threadripper systems, and I just could not get work done without them. My build times for compiling software would be atrocious otherwise.
I'm still on an i5 4670K and 16 gigs of RAM; I built this system in 2013. I secured a day-one 3950X and it arrived this weekend (the rest of the components are getting here throughout next week), and the speed-up of compiles and the ability to run multiple VMs at once for testing distributed apps are what I'm most excited about! It's basically a workstation and a gaming rig in one. Gaming is fun, but man, I can't wait for some faster feedback loops when I'm developing.
I was able to get rid of multiple computers and convert from physical to virtual and save on power bills. Once your build is complete you will be incredibly happy. AMD is killing it with these processors. I've never been happier with a tech product line. It just blows me away how much I'm able to accomplish on a shoestring budget now.
There will be a BIOS option entitled "Fan Smoothing" or similar. On my ASUS board, it gives a range of times over which to average CPU temp rather than use the instantaneous value for fan control. I have mine set to 7 seconds I think, and it completely stopped the fan revving up/down issue.
There's also the 4xxx series on the same socket, due sometime next year. The talk for Zen 3 is a 15% IPC uplift and 10% higher clocks.
Surely the 3900X will carry you until then.
I'm curious as to why these processors make it easy to debug threading issues. Is it just due to the sheer number of cores, or something else? I'm not sure why that would help either.
More cores _actually_ running simultaneously means a higher chance of encountering sporadic race conditions. It also makes it easier to measure (and thus work to improve) high-thread-count scaling.
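As a minimal sketch of why real parallelism surfaces these bugs, here's the classic lost-update race (a hypothetical demo, not anyone's actual test suite). Build with cc -pthread race.c -o race:

    /* race.c - unsynchronized shared counter */
    #include <pthread.h>
    #include <stdio.h>

    enum { THREADS = 16, ITERS = 1000000 };

    /* volatile keeps each increment as an actual load/store, so the race
       stays visible even without an optimizer hiding the accesses */
    static volatile long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++)
            counter++;            /* non-atomic read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t t[THREADS];
        for (int i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expected %ld)\n",
               counter, (long)THREADS * ITERS);
        return 0;
    }

On a 12- or 16-core part, the threads genuinely interleave and the undercount is large on nearly every run; on one or two cores the same bug can hide for a long time.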
I wish my home server performed more tasks, so I could justify an upgrade from my i7. Unfortunately, even as it stands today, it's mostly idle despite me doing quite a bit with it.
Besides, it’s really annoying I need to upgrade from DDR3.
I've been sticking with Ivy Bridge-EN chips for my lab at home because used DDR3 RDIMMs are dirt cheap, not to mention the chips and systems themselves. In a few years, once used Naples gear hits the market, I may consider upgrading, but I'm also quite fond of the low power draw of my 2450L v2s.
I bit the bullet and upgraded from a 4670K + DDR3 to an X570 mobo + R5 3700X. It costs ~$500, assuming you can keep your GPU, PSU, case, fans, etc., and don't want to go all out on RGB everything.
Haha, I think it can be tastefully done. Of course, the PC shouldn't look like a unicorn puking rainbows, but I'm a fan of having a glass panel that shows off components with some subtle single/dual-color lighting.
So yeah, I made some questionable technical decisions purely for aesthetics, but form is kind of important too.
It's a nice bonus if something happens to have it. My AM4 motherboard has an RGB LED header, and the Ryzen 1700's included cooler has an LED ring on it, so it gently illuminates my PC build with my favourite colour.
> the PC shouldn't look like a unicorn puking rainbows
I have 0 RGB on or in my case.
Your comment just made me change my view on RGB. I.Want.Them.
My next build is going to puke rainbows! It's gonna have a built-in display with a unicorn that moves according to the load: walking below 10, running above 60. It's gonna be amazing!
It's truly bizarre. I think a lot of it comes from people being influenced by YouTube, with RGB lights being one of those things that really only exist to appear on video.
I think the most annoying part of this fad is that if you want to build a gaming PC, it's very hard to find good components (especially cases and motherboards) that _don't_ have LEDs. It wouldn't bother me much if the same products were available both with and without all the bling.
I have a Ryzen 3700X in my new PC, the first PC I ever built by hand. It's mostly used for music production: 10 instances of Serum in Ableton with 50+ tracks, and the thing barely crosses 15% CPU utilization.
MicroCenter had the Ryzen 7 2700X for $129 and the Asus ROG Strix B450 gaming board for $79 today. I picked those up and some G.Skill 3600 DDR4 for $59.
AMD is still gaining back mindshare. Yes, they're not optimizing for profit right now, because they're also gaining something valuable: entrenched market position. Intel's dominance is incredibly sticky, and their brand name alone is worth tens of billions. By blowing away Intel at every price point by a wide margin (and with Threadripper halo SKUs that have no Intel competition), AMD is flexing and showing they are winners and Intel are losers. People want to feel like winners when they buy products.
This and their Minix backdoor are the reasons why I wanted an AMD for my laptop, but I was sad I couldn't find a high-end Thinkpad with an AMD processor. I hope that will change next year.
I'm not blaming it on Minix, but it does make use of Minix; I'm mentioning it to identify what I'm talking about. I believe the official name is the Intel Management Engine. Or maybe that's just part of it.
Possibly. The Intel version is better publicized. I have no idea what the PSP can do, but Intel's IME makes it possible to remotely override pretty much anything about a PC, which can be convenient for sysadmins at large organisations, but it hasn't been disabled for consumer products.
AMD supports KVM redirection too, via a standard called DASH. You can see examples at https://community.amd.com/community/devgurus/dmtf-dash/blog. From the standards body's description: "DASH provides support for the redirection of KVM (Keyboard, Video and Mouse) and text consoles, as well as USB and media, and supports the management of software updates, BIOS (Basic Input Output System), batteries, NIC (Network Interface Card), MAC and IP addresses, as well as DNS and DHCP configuration. DASH specifications also address operating system status, opaque data management, and more." https://www.dmtf.org/standards/dash
The extensive research on the ME I actually consider a pro for Intel, since I know more about what it does and how to disable it. The PSP is still more of a black box.
It's still a security risk: code is running in the ME that can be exploited locally.
Without vPro, or with remote management and the network stack turned off, there's a much smaller (probably close to zero) remote attack surface. With a vPro-capable chipset that has remote management enabled, the ME has its own IP address, plenty of potentially unsafe services, an insecure-by-default provisioning mechanism, and much more.
Not with the directors of those system administrators, though. "Nobody ever got fired for buying Intel" is a thing. I had to fight to get our recent server purchases to be AMD.
What if it's actually priced below cost and supply is deliberately restricted? Making the top bins of a part "marketing only" products has been done in the past, and fits the evidence here...
The binning on 3950X chips is nothing extraordinary; that's why the chiplet model is so powerful: you don't need 16 perfect cores on one die to sell the CPU.
Well, the 3950X just launched, so there's a demand spike that follows from that. But that aside, these share parts with the Epyc series. If AMD is getting volume orders for Epyc, those are definitely going to take priority.
The allocation order is going to go enterprise first, system builders second, retail boxes a distant 3rd.
The Epyc devices are lower binnings, though! That's exactly my point. They don't seem to have the 3.5+ GHz parts available in reasonable quantity. When Intel did this with "1 GHz" Pentium IIIs way back when, they got crucified for a paper launch.
Are you sure they're lower binnings, or just equally good chips being run more slowly for reduced power consumption? The power budget per core on Epyc is quite a bit lower.
The 3700X and 3800X, both of which have higher base frequencies than the 3950X and also use 8-core chiplets, have been in stock near continuously since launch.
That's not how binning works. If you don't have enough of your top bin to supply the demand at your price point, you're not selling it at a loss, merely leaving money on the table.
Supply isn't deliberately restricted, it's just you can't ramp up production of your top bins without also making more of everything else.
If it were a different die entirely this might make more sense. But it's not.
The 3950X is the most expensive consumer platform CPU on the market, and AMD's HEDT lineup has a starting price higher than Intel's flagship HEDT part. So how much higher priced should it be?
Your ability to gain market share ends when supply ends.
If demand is exceeding AMD's manufacturing capacity, then they should have priced higher, or they need to increase manufacturing capacity, in order to actualize the potential market share to be gained.
> If demand is exceeding AMD's manufacturing capacity, then they should have priced higher, or they need to increase manufacturing capacity, in order to actualize the potential market share to be gained.
That's the standard micro economics answer. You're not wrong.
However, I don't think you're right either. The pricing of consumer products is a black art because people don't necessarily behave like economics textbooks say they should.
Some possibilities:
1. This is a short-term supply hiccup. The goodwill from keeping the price the same over the medium term is worth more than the money AMD would make by raising and then lowering the price of their chips.
2. They are constrained in their ability to raise prices by Intel, which is still broadly thought of as the superior product (leaving aside the technical analysis of the current generation).
3. They want to create the perception of scarcity.
They've been binning these chips for months, maybe even a year at this point. The binned CCDs used in the 3950X are also used in Epyc, and I'm sure server has higher priority than DIY.
From what I've read and heard, AMD is selling every chip they can make.
So whatever the highest test a chip passes, it's sold as that.
This is apparently why nobody is having much luck overclocking beyond the on-box specs: AMD is just very good at ensuring chips go where they're required.
> If demand is exceeding AMD's manufacturing capacity, then they should have priced higher
If they did that, the news would have been "AMD is expensive", which is not much of a news story. Whereas now the news is that AMD is sold out because it is so good.
Except if you want to be known as the cheaper option and are not just thinking about short-term profits, even if it takes a little delay to actually get the supply needed to satisfy demand.
If they're supply-limited, they can increase price all the way until demand falls to meet supply. Selling widgets priced below the equilibrium point achieves nothing for capturing market share, it just loses potential revenue.
I.e. if they can only manufacture 1000 widgets, but 10,000 people want to buy one at the given price point, the gained market share is still only 1000 individuals. The number of people who want one but can't get one due to lack of supply is not market share.
Temporary under-pricing might be a valid strategy if AMD is planning to ramp up manufacturing capacity, but comments further down in this thread suggest they lack the fabs to do that.
You're completely forgetting the future loss of sales when someone looks at AMD CPUs only once. The buyer will switch to Intel immediately, and when prices finally come down again, they won't be shopping around. The impression that AMD is overpriced remains.
They could have priced higher and still dominated Intel price/performance-wise. The supply/demand equilibrium point is somewhere above the current price and below Intel's break-even price.
The only way they'd create the impression of overpricing is if they actually overpriced. They did the opposite.
Whether it was a mistake in pricing or some calculated second-order strategy, we'll never know. What we do know is they're leaving cash on the table whether intentionally or not.
Temporarily (keyword) underpricing to hurt competitors is not unheard of, but I don't see how it will hurt Intel unless it significantly shifts mindshare over the next ~2 years and allows AMD to get a real foothold in the server market.
They have basically one opportunity to set the expected price point, and as it turns out, they prefer getting all the positivity and hype on their side to perfectly optimizing against marginal demand.
Selling out of inventory isn't a bad move if you're seeking to raise capital or debt to increase production capability: "we're profitable and selling out at this price point, and arguably would continue to do so as we ramp up production." Especially if you're looking down the road long-term. In the short term it generates buzz (hence this very post on Hacker News), which isn't necessarily negative unless it's a prolonged inventory shortage.
It could be that this is the long-term efficient price after demand from early adopters goes down. Or maybe they underestimated demand altogether and it's too late to raise prices now.
The new Threadrippers are on par with or more expensive than Intel's solutions if judged by performance and cost per core, especially once the ridiculously priced motherboards are considered.
The way your sentence reads is that Sandy Bridge is better. I suspect you mean that the advancement with this generation is as significant as any since sandy bridge. Can you clarify?
An increasing majority of the average person's everyday computing needs have consolidated into smartphones, perhaps even more so than in other countries. This ate into the markets of both home consoles and gaming PCs. (Though, since Japan was already a console-centric country, the latter weren't really big to begin with.)
Question: I can drop Ubuntu, Debian, Mint, Arch, etc. onto a Core-series whatever and get class-leading KMS/DRM graphics acceleration the moment the kernel is running, long before X11/Wayland start. So bootup is reasonably seamless, no flickering, etc. Basic 3D works. WebGL is getting there.
And thanks to DRM, I can play around with alternative environments that talk directly to the kernel without X being involved.
So... if I get an AMD APU... how will the hacker/tinkerer experience compare?
AMD GPUs should work better than Intel ones with open drivers lately. In particular, Intel has serious tearing issues with Xorg in all kinds of nontrivial configurations, and I've also seen weird glitching issues with compositors after many days of uptime that require compositing resets. I haven't had a solidly good Intel Xorg experience in 5 years (on Ivy Bridge): functional, yes, but not good. Bugs go years without getting debugged, and it's a shame the tearing issues have gone on for so long. The xf86-video-intel driver is deprecated/unmaintained, but the modesetting driver isn't quite up to par because it can't include GPU-specific knowledge.
AMD supports all the same DRM/KMS interfaces, has better performance than Intel with open drivers, and also offers proprietary userspace drivers (GL/Vulkan side only, they work with the open source kernel and modesetting side, no weird binary compat issues to worry about) in case you prefer those, but they are absolutely not required.
I'm seriously looking forward to replacing my IVB laptop with a Ryzen once AMD releases 6+ core APUs. Maybe next year.
My Ryzen 3400G APU worked perfectly on a fresh Manjaro installation using the mainline AMD driver.
I moved to a GTX 1660 yesterday and immediately hit a bug where KDE isn't able to detect the display refresh rate when using the closed-source Nvidia driver.
They have actual shops in Japan, contrary to Amazon. For some reason the 3970X is (slightly) more expensive on Amazon Japan than the Japanese MSRP. I haven't checked the other models, but I wouldn't be surprised if the same held for them. I'm not too familiar with the Japanese market (only from researching the 3970X over the past few days), so I wouldn't recommend anything in particular, but it did show up during my research. PC Koubou, too.
Sold out on Newegg and Amazon too, as far as I can tell. Looks like AMD did not quite expect this kind of uptake on what is now their "top of the line but reasonably priced" CPU.
The 3950X sold out in seconds on Amazon and Newegg, I was watching all morning and I've read the reports from the people who were able to purchase one.
> AMD doesn't own fabs and can't saturate market (as seen with 39XXx shortages
Intel owns fabs and has been having shortages for most of the year. Spinning off its fabs seems to have been a good choice for AMD: when GlobalFoundries, the spun-off fab, ran into problems with process shrinks, AMD was able to move Zen to another fab that got its shrink to work.
They spun it off to avoid bankruptcy; let's not pretend it was done for agility. Intel's issues are unrelated: they simply ignored negative internal feedback, which is a management problem.
Sure, they spun off fabs to keep themselves afloat. It seems to have worked, and now they're able to take advantage of the agility.
I'm not saying Intel should spin out their fabs, I'm just saying not owning your own fabs has some advantages sometimes. Owning your own fabs also has some advantages sometimes; it's certainly been a historic advantage for Intel.
TSMC has 6-month lead times on 7nm, and you'd best believe Apple gets the bulk of their fab capacity. AMD could have the best CPUs on earth; they're not worth anything if you can't actually buy them.
There is no significant improvement in sight for Intel until 2021, while both TSMC's and AMD's yields and capacity on 7nm are going to ramp (not to mention Zen 3 and 7nm+ in summer 2020).
Not to mention, it's hard to get the top SKUs from both AMD and Intel, but something that costs 85% as much as the Intel consumer parts and delivers 110% of the performance is readily available.
Maybe people expect that growth in TAM will outweigh loss from decrease in market share? Intel also has areas other than CPUs which might supplement CPU profits.
I'm betting Intel will just dust off tech they have already developed but didn't need to put into products due to lack of competition, and will be back on top soon.
I doubt they have it. They're on the third or fourth iteration of their 14nm tech now. If they really did have something up their sleeves, now would be a great time to use it.
Leaving aside the GPU market, if you believe AMD is going to take and sustain a sizable share of the CPU market in the next 2 years (in particular server, mobile and high-end desktop), then the answer is yes.
If Intel manages to keep AMD market share below ~18% in the next couple of years, then probably not.
The value of the stock revolves around how much money AMD is able to generate, and the biggest variables here are volume of sales and margins on those sales. AMD's variable costs are probably higher than Intel's, since they outsource their manufacturing to TSMC, but their fixed costs are dwarfed by Intel's.
                 Intel     AMD
    Market Cap   $252B     $44B
    Revenue      $19.2B    $2B
    Earnings Q3  $6B       $0.12B
Leaving aside the fact that AMD's numbers include GPU sales, notice how AMD's earnings-to-revenue ratio is very small compared to Intel's. Also notice that Intel's market cap is nearly 6 times AMD's (again, AMD has a GPU business too).
So the key is really: do you believe that AMD will be able to break the ~18% market share (in particular in high-margin segments)? My back-of-the-envelope calculations indicate that at around 20% market share, AMD should be able to generate earnings close to a billion dollars per quarter, making AMD a buy. If they cannot reach and sustain a market share close to that, my answer would be no.
Disclaimer: I have AMD shares and I don't plan to buy or sell at current pricing.
It seems odd to mention that AMD has a GPU business without mentioning that Intel has a substantial number of non-CPU businesses (indeed, their PC business, which includes things like NICs, makes up less than half their revenue).
A lot of people have been betting big on AMD since before the release of Zen 2, so it may be overpriced. It's anyone's guess; not every investor will agree. Some holding it may just say yes because they want a bubble to sell into. Look at price-to-earnings and all that, and consider that sentiment is high right now.
No, the buy point has passed.
But there are 8 stocks in the fabless semiconductor space that have better fundamentals than AMD, for example IPHI, AVGO, and MPWR.
The Ryzen 9 series seems to sell out pretty quickly everywhere; they go like hot cakes. The same thing happened with the Ryzen 9 3900X after release. Only relatively recently did it become easier to buy, and it's still often sold out.
It is unbelievable to me, the level of inaction from Intel's management: just sitting there all these years, with all the advantages and cash they have had, and watching it all get washed away.
There was some action from Intel. They slashed the prices of their 10000-series by up to 50% to be able to compete on price. But those CPUs are just refreshes of their 9000 series, so it didn't do much in terms of improving performance.
From what I've heard around the hardware-reviewer scene, it also earned them a lot of flak, because they announced it a day before the new Threadrippers, I think.
In terms of new, much more powerful hardware, Intel can't do much for now; maybe in 2020 or 2021. By which time AMD will have Zen 3, which promises more big improvements.
The issue with the press backlash was that originally the press embargoes for both the AMD and Intel HEDT parts were set for the same day and time (Monday, Nov 25 @ 9AM Eastern), but then, a few days before the embargo lift, Intel shifted their embargo a few hours earlier (3AM Eastern), presumably in a bid to get the press to publish their Intel 10980XE reviews without being able to compare against the new Threadrippers (3960X/3970X), which, incidentally, by and large crush the Intel chip's performance.
This only partly worked. Due to how late they changed the times, a bunch of publications just waited until the original embargo time anyway rather than redo all their comparison slides; and of course, it got everyone talking about how badly the Intel chips fare, but that was somewhat inevitable no matter what embargo shenanigans they pulled.
More interesting, I think, is that in some of the workloads where Intel did fare well, this was achieved via some pretty questionable means (suggesting reviewers use MKL-based apps that specifically cripple AMD chips, without mentioning that): https://www.extremetech.com/computing/302650-how-to-bypass-m... - at least in the case of Matlab, this can be worked around via an environment setting.
The best part is that he didn't wait out the embargo and published graphs that still included AMD's Threadrippers at the top, just blurred out and unmarked, making it painfully clear that Intel is simply beaten and a very sore loser: https://i.imgur.com/RleR7Ou.png
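For reference, the Matlab workaround mentioned above was MKL's debug variable, which forces the fast AVX2 code path on non-Intel CPUs. On the MKL builds of that era, set it in the environment before launching the MKL-linked application:

    export MKL_DEBUG_CPU_TYPE=5    # Linux; on Windows: set MKL_DEBUG_CPU_TYPE=5

Intel removed the variable in later MKL releases, so it only applies to the versions being benchmarked at the time.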
I'd say it's not so much inaction as inability to fix 10nm. Coupling architecture to process node for so long really hosed them, since they couldn't execute 10nm on a reasonable timeline.
Reposting what everyone knows? Intel announced this May that they are planning to release their first 7nm product in late 2021, and I believe they used the word "accelerate". Based on what we know about their 10nm and 7nm processes, the chances of 7nm actually working are much higher than for 10nm.
> Intel moved three of the four fabs slated for 10nm to other processes, some are going to 7nm while others backported to 14nm.
It's very clear Intel can't make 10nm work as they hoped it would, and they will limp along with a limited number of halo chips for notebooks. They hope for server chips in late 2020 https://hothardware.com/news/intel-10nm-ice-lake-sp-q3-2020-... but at this juncture I wouldn't be surprised if those were first postponed and then cancelled.
If this guy thinks I am posting insider info or manipulating the stock, well, then you need to go after wccftech and SemiAccurate too.
It's the USA, baby, you can sue for whatever you want! I am not a lawyer, so I don't know whether this has any merit. AFAIK even Meltdown only got a few class actions, not a shareholder suit, despite the fact that they kept releasing vulnerable chips for months after they knew but before the public knew.
Intel bet everything on 10nm. If it had paid off, they wouldn't be behind. Unfortunately for them, it didn't. Doubly unfortunately, AMD has done a great job on their designs after learning some painful lessons from Bulldozer. Triply unfortunately, AMD contracted TSMC for this architecture, and TSMC has been killing it on 7nm (roughly equivalent to Intel's 10nm node).
Servers, I'd imagine, are where AMD excels, not Intel, considering their primary purpose is serving many users, where multithreaded performance comes to the fore. Intel's use case: compiling?
On the server it depends. For front-end servers, where you are running many connections with no shared state, AMD crushes Intel so badly it's shameful.
On the backend, for things that are not highly parallelizable or that are memory-latency sensitive, sometimes Intel wins: things like databases and the like. But that was when Intel still used a single ring bus; the mesh should add latency similar to Epyc's, IIRC. I haven't seen any benchmarks of the newest Epycs vs. the newest Xeons, though.
AMD is flawless all the way around. I'm using a Gigabyte motherboard + AMD Ryzen 3600 as a temporary substitute for the more powerful Ryzen 3950X I'm planning to get my hands on soon.
One of the best experiences so far. No conflicts or issues whatsoever.
Not exactly. It's a newly-released product that uses cream-of-the-crop components, but it's sold at consumer prices instead of enterprise prices. So AMD doesn't have a huge incentive to make this specific model first in line for the top-binned chiplets, and they had good reason for getting this product onto shelves two weeks ago rather than stockpiling inventory until after Black Friday.
In the broader scheme of things, AMD is having no trouble selling every CPU they make this year. That's due to a combination of generally high demand for CPUs, Intel's own production problems (which have caused Intel to prioritize other product segments over the desktop CPU segment where AMD's dominance is now clear), and AMD having to compete with many other chip companies that are placing orders for the same TSMC 7nm fab capacity—orders which now require much more lead time than they did at the beginning of the year.
So what's the rationale for pricing it at a point where it makes sense for people to line up around the block? Is it that the initial volume is low enough that the positive press and goodwill are more valuable than what they would gain from price discrimination based on time?
a) Based on the 3900X, the supply will be coming in the next few months, and the price is right once that happens.
b) Pricing it higher now and dropping the price in three months looks bad.
c) The pricing has to line up sensibly against other chips in the marketplace, both Intel's and AMD's other chips (including last generation). Many people may be willing to wait to get a 3950X at today's MSRP but would opt for something else if the MSRP were $100 more, and that something else might be an Intel chip.
d) It can be hard to know how much demand is really there: AMD had to order production several months ago, and their pre-launch planning included an assumption that Intel was going to launch something notable in 2019, which didn't happen.
Launch pricing is often more about PR than short-term supply and demand, especially for high-end flagship products. You don't want to establish a product's reputation as overpriced if you expect to be able to drop prices in a few weeks or months.
Having an initial shortage can reinforce the impression that a product is very popular, and the total lost revenue from under-pricing and undersupplying at first is insignificant for a low-volume high-end part.
I think AMD would much rather leave money on the table, as long as they are still turning a profit, than leave products in a warehouse because the pricing was a bit too high. Market share and mindshare are significant areas AMD needs to claw back as well.
Especially since these chips are likely already super profitable. The 3950X ($750) is "just" two 3700Xs ($300 each) glued together. It's not like the 3950X demands any particularly good binning or anything like that.
Probably not problems per se; rather, they didn't book enough production capacity. It sounds like the lead time is ~6 months, so any course corrections will probably take about that long to show results.
A lot of higher-bandwidth, real-time networks will be enabled when 7nm multi-core mobile processors enter the market. Personal servers will get a boost. Hopefully the 5G rollout will be going well by then, too.
I don't know why they put the pictures of the lines in the article, because lines like these are unremarkable in Akihabara. There are lines there every day for the most mundane products, as long as they're new, and this looks nothing like the lines you get for a new iPhone release, for example.
How do you even know they are only queuing for the CPU? As I said, in Tokyo there are lines like that for pretty much ANY shop in the morning. This is such a non-story.
https://www.amazon.com/Best-Sellers-Computers-Accessories-Co...
It looks like today the top 10 are all Ryzen. This is crazy.