Intel publishes misleading benchmarks against AMD (servethehome.com)
618 points by PaulHoule on Nov 6, 2019 | 202 comments



They won't have an answer to AMD until 2021: https://newsroom.intel.com/news/2019-intel-investor-meeting/...

PS. Long AMD; perfect execution over the last two years. It's crazy: even their late firmware fixes improve performance. They got some bad press for a moment over allegedly false benchmark advertising (it wasn't false, as it turns out; the firmware just wasn't ready yet. Not everything is fully sorted even now, which is the only minor nitpick as far as I can tell).

And they are only just now getting started in the server space, and soon the laptop + graphics space. They are going mainstream (the OEM offering is still lacking, but the Microsoft Surface changed this) AND they still customize their superior offering for big customers.

By contrast, Intel has had a lot of failures over the last 2 years: Spectre, Meltdown, ... This isn't bad luck anymore; it's more like bad karma, if you are religious.

If you think I'm opinionated, just check /r/AMD vs. /r/Intel. AMD is even recommended in the Intel thread, lol.


"Long AMD"

Are you expecting a higher stock price? Because their 90 P/E TTM doesn't look like it's in line with the rest of the industry.

Don't get me wrong, I was also long AMD, and still partly am, but I don't think the stock can go higher than 36-37, or even maintain that level long term, without some mega quarters, which according to their own estimates will not be the case.


Yes, I'm expecting the stock price to track how much market share they capture from Intel. And they are capturing it fast. As said before: servers, laptops, ... are coming, and that's the profitable market for CPUs, and they aren't abandoning their niche (semi-custom) either.

It was 40 in 2000, and the current buy prediction is 40. They are doing way better now and they have another full year to win market share. I compare against Intel, not against AMD in its current (minimal) state, mostly because OEMs weren't on board in the previous financial quarter.

I have also explained my financial relation to them and my belief, which seemed enough, no? PS. I'm probably one of the few to share my financial position. Not because I'm scared, but because I'm confident :)

And also, I feel kind of sorry that you don't attack my comments/facts; you just attacked my personal stake, which is itself a feedback loop to my arguments.

PS. I don't know of any other company that is in such a positive position. One full year until the competition has a competitor to their "current" product line. That's insane. I haven't even mentioned TSMC's 3 nanometer process for 2023 and their future product line :)


Exactly. AMD's market cap is ~$35B and Intel's is ~$250B. Even if AMD only manages to steal half of that, they will do phenomenally well.


Last quarter AMD spent ~$400MM on R&D, had revenue of ~$1800MM and net income of ~$120MM. Intel had revenue of ~$19000MM, or about 10 times that of AMD, so it's completely realistic for AMD to double their revenue in a 'short' time frame.

So if we assume no other fixed costs, then generating $1800MM of revenue costs $1280MM (1800 - 400 - 120). Doubling revenue would therefore cost $2560MM, and net profit could be $640MM (3600 - 2560 - 400), about 5x the current net profit, which would put the P/E at around 18.

A P/E of 18 would be very nice. And this is assuming there are no fixed costs besides R&D (like marketing or office buildings) and AMD taking only 10% of Intel's revenue.
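A minimal sketch of that napkin math in Python, using the rough $MM figures quoted above (the exact multiple works out to about 5.3x, so the implied P/E lands nearer 17 than 18):

    # Back-of-envelope math from the comment above, all figures in $MM.
    # These are the rough numbers quoted in the thread, not exact financials.
    revenue, rnd, net_income = 1800, 400, 120
    current_pe = 90

    variable_costs = revenue - rnd - net_income              # 1280: everything except R&D and profit
    doubled_revenue = 2 * revenue                            # 3600
    new_net_income = doubled_revenue - 2 * variable_costs - rnd   # 640

    profit_multiple = new_net_income / net_income            # ~5.3x
    implied_pe = current_pe / profit_multiple                # ~17, assuming the share price stays put

    print(f"new net income: ${new_net_income}MM, implied P/E: {implied_pe:.0f}")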


Agreed, while AMD has a lot of market potential, their stock is overvalued given current performance.

I'm rooting for them, and I want to put money on them, but I sold my shares I got for <$10 and I can't imagine getting back in at over $30.


I was in the same situation (3 to 10, and then 25 to 36 now). It ain't going to stop until 2022.

And if you believe Intel, it's 2021 instead of 2022 ;)

PS. A related fact: AMD Gains Chip Market Share in Nearly Every Category.


Sure, but it's really easy for them to post massive growth in the server category since they started with market share in the low single digits.

They have nowhere to go but up, but that can still mean their stock price is too high.


AMD just had their Q3 and then guided Q4 to be higher than Q3 for the first time in years. That means they are suddenly experiencing revenue growth that is outstripping their typical seasonal pattern, and that's a huge deal.


> 90 PE

High P/E can mean that company reinvests profit to the business, instead of stockpiling it. Amazon has 80 P/E for example.

I would be more concerned about the very moderate revenue growth; it looks like consumers aren't rushing to switch to AMD.


DIY builders have already switched (https://www.amazon.com/Best-Sellers-Computers-Accessories-Co...) and OEMs will probably join.

I think one reason their P/E is so high is that their prices are very competitive and they are very slow to raise the average sale price.

It's only very recently that Microsoft's flagship changed to AMD. I'm waiting for some OEMs to follow after this. This happened at the end of the previous quarter, as explained before.

I think the cloud customers can change faster than the consumer market, though, as they analyze price against performance, and AMD has another year of "playing around", until 2021 or even 2022.


As soon as people's posting style moves from "don't take investment advice from comments on the internet" to "long AMD" is when I start getting suspicious that I maybe shouldn't long AMD.

Granted I'm a penniless student so I'm not longing or shorting anything but you get the point


AMD bringing the competition forces Intel to up their game and bolster R&D; they may come out with some beast chips as a result.

Such is the way things work, scales tip one way then the next.

What about NVIDIA??

Also isn’t Amazon working on their own chips?


x86 licensing makes this an AMD vs Intel game, with occasional celebrity appearances by VIA et al.

So, effectively AMD gets this private 1v1 game with Intel, and has a strong part in the GPU side of the market as well.

GPU is certainly a more open playing field compared to x86 (Intel is trying to get into it), but AMD is very competitive here as well. I think we will start seeing a lot more unification of their concepts between CPU and GPU: multiple GPU dies interconnected using things like Infinity Fabric, etc.

Dr. Su said AMD was more about architecture than process technology, so perhaps they can prove what most of us suspected all along - Intellectual property is the most valuable strategic asset we have. Even if Intel reverse-engineered AMD's reticles, by the time they could implement the same architecture, AMD would be pushing products to market that are 2-3 generations ahead.

My question right now is, how long will it take Intel to reach parity on this architecture front (especially if it's the key to success as AMD's CEO claims)? The other question is, even if Intel figures out the architecture and comes up with something even better, will they be able to catch up on process technology? Is it only going to be TSMC and Samsung left standing at 3nm and beyond? What sort of damage would offloading the foundry business cause Intel if they had to go pure-play to become competitive on x86?


Hold on a min,

>Dr. Su said AMD was more about architecture than process technology, so perhaps they can prove what most of us suspected all along - Intellectual property is the most valuable strategic asset we have. Even if Intel reverse-engineered AMD's reticles, by the time they could implement the same architecture, AMD would be pushing products to market that are 2-3 generations ahead. My question right now is, how long will it take Intel to reach parity on this architecture front (especially if it's the key to success as AMD's CEO claims)?

This is not true. As much as I support AMD and loathe Intel, architecture-wise I would say Zen 2 and Intel SomethingLake are about the same, with many benchmarks backing the claim, and Ice Lake is slightly in front of Zen 2. Ice Lake did deliver a roughly 15% improvement as promised, so there is no sign Intel's architecture department has any problems (yet).

The problem is that Ice Lake's performance improvement doesn't matter if 10nm can't yield.

And when Dr. Su talked about IP, she was referring to the whole chiplet strategy and how it binds everything together.


I agree. Also, good luck to AMD and Intel both as the next few iterations of ARM stuff comes out and people start porting software over to that! It's looking like ARM will be taking quite a jump.


People have been saying this about ARM for more than half a decade. Why would people suddenly start porting stuff to ARM that they weren't before? I only see this happening if Apple replaces the Intel chip in their Macs with a desktop variant of their mobile chips.


The cloud-based ARM CPUs are new, and that's a reasonable motivator.


They aren't new, and more to the point, most server software is not the stuff end users run.


One of the applications I've heard for the new ARM server CPUs is doing ARM builds. Seems convincing to me! Going forward, another catalyst will be an ARM laptop that appeals to developers, and that makes it easier for them to directly target ARM without going through a cloud service.


People are having trouble going AMD in the cloud because their monitoring, while written for x86, is Intel-specific. All that virtualization software is going to take a few years to be rock solid on AMD.


> By contrast, Intel has had a lot of failures over the last 2 years: Spectre, Meltdown, ... This isn't bad luck anymore; it's more like bad karma, if you are religious

It's more like regression to the mean.


> By contrast, Intel has had a lot of failures over the last 2 years: Spectre, Meltdown, ... This isn't bad luck anymore; it's more like bad karma, if you are religious.

AMD also suffered from Spectre and Meltdown. Admittedly not all the variants of it that have since come out, Intel has done way worse there, but AMD instances _are_ vulnerable if they lack microcode updates and OS patches.


According to this: https://www.techrepublic.com/article/spectre-and-meltdown-ex...

> Presently, 13 Spectre variants and 14 Meltdown variants have been identified. Initially, AMD processors were thought to be immune to Meltdown, though one variant has been successfully demonstrated on AMD systems.

ONE variant. So technically true. But the situation on the Intel side is far worse.

Intel got way too greedy with their optimizations.


Yeah, but isn't being greedy with optimizations good?


Not when it comes at the expense of security. Perhaps there are contexts where security is not important and this rule does not apply, but it clearly is a problem for CPUs.


I recall a few months ago they released a marketing pack benchmarking themselves against AMD with a small print disclaimer that they hadn't applied the spectre/meltdown fixes.


That's the thing - they keep doing it, which means regardless of people coming out with stories like this, it must be effective.


It may be effective, but I don’t buy that reasoning. Corporations are just as capable as people when it comes to developing bad habits and self-sabotaging.


Hehe, you'd be doing it too if you were in a boiling pot with a gun to your head, desperately trying to find an upside to buying Intel chips.

They are really trying, much harder than they're trying to make better chips :)


They can't fake public PassMark results though. They currently hold four of the top five results for high end processors there (https://www.cpubenchmark.net/high_end_cpus.html) and a lot of the top price/performance results (https://www.cpubenchmark.net/cpu_value_available.html). A lot of folks use charts like these to guide buying decisions or make recommendations to others.


Why can't you fake a PassMark? I'm legitimately asking. Seems like an aggregated public data set would be particularly prone to intentional cheating.


You're right, they could, but they don't seem to be, and it would be easily discovered (since anybody can run the same software on their system and compare the results for their specific CPU), and probably make quite a splash in the tech media.


If I were Intel I'd have whole teams of people trying to invent new benchmarking tools and apps. And all those apps would be compiled with options that favor Intel and include only tests where Intel looks good...


Nah, they prefer having all the reviewers and online press in their pocket. You just need money for that, and if there is one thing Intel has, it's money.


Are we looking at the same charts? Both show AMD with a majority in the top 5.


I believe that when he said "They currently hold four of the top five results", he was talking about AMD, which does hold four of the top five results in both charts.


Yes, thanks. I really shouldn't type anything before coffee.


Unless I'm reading incorrectly, it looks very much like AMD holds 4/5 top spots, not Intel.


Many applications don't need Spectre or Meltdown mitigations, but they should at least put the test parameters up front and not in the fine print.


Many?

Anything running a browser needs it because of js.

Anything running connected to the internet at all would be well advised to have them because you can take advantage of the exploits via the network stack.

Anything running on a vm has to have them since microcode is set by the hypervisor.

So, we're talking about desktops not used to browse the internet, and bare metal servers whose network stacks aren't exposed to hostile actors?


> So, we're talking about desktops not used to browse the internet, and bare metal servers whose network stacks aren't exposed to hostile actors?

To be fair the context here is HPC. Nobody is installing Chrome on something like IBM's Summit supercomputer. Nor is it running "untrusted" code. It's configured to go as fast as possible, nothing more.

Anyone in cloud hosting or desktop would be insane to not install spectre/meltdown fixes. But in this particular niche of HPC supercomputers it's actually reasonable to not have those mitigations.


Is it though? There's a significant security bug that has a known fix, on your supercomputer. The second anything remotely security-related happens and it gets out that you've opted to hedge your bets that no untrusted code will ever execute on that machine... and you could be in some seriously hot water.

I'm not really in the HPC space, but I can't really see any vp/cto/cio reasonably putting that asterisk on a security audit.

If you've got cyber insurance, you probably wouldn't be able to keep coverage if you did that.


I agree - factors involving insurance, compliance, auditing, etc... will play a big role for any company using such systems to process any kind of even slightly confidential information.

Any CIO that decides not to use such security patches will carry on his/her shoulders the full risk. Big bosses don't like risk and like even less to be personally accountable for anything => I don't see that happening.

Even if the tests mentioned in the article are related to HPC, the hardware is for general computing, so its usage doesn't have to be restricted to HPC, and people might still decide to use those HPC results (as well) to compare the CPUs.


What do you think is the actual risk? Who is attacking whom, and how? If you're running on HPC you basically rented the hardware. It's a time-share system. The entire thing is yours, all yours, for the hours you managed to procure. So who are you attacking? And what vp/cto/cio are you even talking about? The ones who own the hardware, or the ones who are renting it?

And why do you think a security policy is needed here at all instead of just, you know, laws & contracts? You can't just anonymously rent time on these systems even if you have the money - you need your usage approved, too, and it's a big whole process. If you then go and successfully launch a security attack, why does that require technical safeguards instead of just the FBI knocking down your door and hauling your ass to jail?


Except HPC clusters do need all the Spectre/Meltdown fixes and more. Those are shared environments (with the exception of a handful of people who got research grants for their own internal clusters). In fact, hyper-threading has also been disabled completely in some clusters.


I'm sure there are some; I'd be very surprised if it was most, though.

The main problem with Spectre/Meltdown and friends is two VPSs under the same hypervisor sharing a core running some web stack. With careful analysis of TLB and cache timings and related signals, you can extract SSL private keys, bitcoin addresses, and similar information that would normally be quite hidden, since you are running in different kernels sharing the same hardware.

With an HPC cluster you are (generally) running on bare metal (no hypervisor), with a single Linux kernel, and you can see what the other user (if there is one) is doing with simple tools like ps, w, top, and friends. Additionally, there are generally no SSL private keys or bitcoin addresses around.

In fact often unencrypted network filesystems are used sending plain text over the wire. Even on more sensitive clusters the security is highest between the internet and the head node and less so between compute nodes.

Hyperthreading being on vs off is usually just an issue with the performance characteristics of whatever application is most common and the limitations of certain batch queues like sun grid engine (SGE). If your popular app hates hyperthreading then you turn it off. Or sometimes you want to minimize the performance impact of users sharing a core.

Additionally, sometimes cores, or even nodes, are not shared. But again it's for performance reasons, not security. The last thing you want is a 10,000-core job running at 50% speed because one node is shared with a resource-intensive application.


> So, we're talking about desktops not used to browse the internet

You can browse the internet without JS. Many people here seem to do so.


Technically true.

The number of people who do isn't statistically interesting though. If you are one of them, I still wouldn't advise doing so with the mitigations turned off, since you're still parsing and displaying hostile IP/TCP/HTTP/(HTML/CSS/image/video/...).

For example, http://www.misc0110.net/web/files/netspectre.pdf


>Many people here seem to do so.

And many more are adamant that "I should not build a non-JS version of my website just for the <1% of users who disable JavaScript".

Personally I do not disable JavaScript, because it makes huge portions of the web inaccessible in weird and frustratingly ambiguous ways. But I used to, and I am the kind of person who would again if it were remotely an option.


I think it's also possible that the people who do are just much more prone to talking about it. Who talks about how they don't disable javascript unprompted?


If you're browsing the internet without JS, you don't need a CPU that was built this decade to do so.


Yet Meltdown is said to probably affect every processor from the last 25 years [1], so I doubt people would have one from before then by chance, or be willing to use it.

[1] https://meltdownattack.com/#faq-systems-meltdown


Every Intel processor.


Indeed, I need my CPU for other things than browsing the web.


> Anything running a browser needs it because of js.

All the javascript engines have implemented spectre and meltdown mitigations.

EDIT: Are the downvotes because people don't believe that the JS mitigations are effective?


I'm not clear what your point was about the VMs. Why do hypervisors need Meltdown mitigated?

I think you're underestimating the % of HPC networks with properly-configured firewalls and only running code from trusted sources.


If you trust all the code (and other inputs, or outputs) on all the VMs, and you are running it in a secure environment, you don't. I admit, it's possible I'm underestimating how common this is.

Why you generally do need it is that the above isn't true, and microcode has to be loaded at the hypervisor level for everyone or for no one.


> it's possible I'm underestimating how common this is

This is quite common in the engineering/research sectors, which love clusters. Think Sandia, Oak Ridge, etc. I also personally know of several private companies with research clusters. I'm curious how common HPC clusters are outside of the research community, because honestly I'm struggling to think of a practical need for one.


HPC is quite common in mechanical engineering type industries. Think mechanical stress simulations, wind simulations, that kind of thing. I wouldn't call that part of the research community.


HPC doesn't generally use VMs or Hypervisors.


HPC is not the real use case (target). The percentage of HPC compared to the rest of the target market (public cloud) is insignificantly small.

The real question is do you trust any potential co-tenants in a public cloud that share a hypervisor with you, when said hypervisor does NOT have Spectre/Meltdown fixes applied? (There is only one correct answer...)

Also you can fairly easily mitigate this in a public cloud setting by running your VM on a dedicated hypervisor. AWS calls these "EC2 Dedicated Instances". Interestingly enough, for most compute sizes the cost difference is negligible. I imagine that isn't the case everywhere / for everything, but if you A) need pre-Spectre/Meltdown perf and B) need protection against co-tenancy attacks, paying a few more dollars for a dedicated hypervisor host seems like a no-brainer to me.


HPC is the use case in this thread, considering the topic is revolving around HPC benchmarks.


Let me clarify: When it comes to Spectre/Meltdown, HPC users and environments are most likely not prioritized targets of interest. The low hanging fruit in that scenario is stealing cryptographic keys from co-tenants on a public cloud.


Indeed, and generally HPC doesn't use hypervisors anyways.


Many users won't know how to disable Spectre and Meltdown mitigations. They will be active by default if the user installs all the latest updates.


For some reason, I doubt people in the server/workstation space don't know how to do this, especially if they are leaving double-digit percentage gains on the table by not doing it.


You are not supposed to disable the mitigations... Especially on servers and workstations... And keep in mind that the usual mitigations aren't even holistic. The only truly 100% secure fix for Spectre/Meltdown is to disable hyperthreading altogether.


Why shouldn't you disable them on servers that don't run Javascript or any other untrusted code?


For servers that aren't running arbitrary third party code, like PaaS or IaaS, why not?


If your server is running on bare metal, no problem. If it's running in a shared hosting environment, a co-tenant VM can exfiltrate your data. See L1TF/Foreshadow.


Haven't all the big cloud providers patched their hypervisors for that?


You really want to rely on such assumptions for security?


It doesn’t keep me up at night.


No. Spectre does not depend on hyperthreading, only speculative execution.


I was lumping them all into MDS class of vulnerabilities. It was misleading.

https://linuxreviews.org/Microarchitectural_Data_Sampling:_T...


You absolutely need Spectre/Meltdown mitigation otherwise someone could attack you with JavaScript running in your web browser.

https://linuxreviews.org/HOWTO_make_Linux_run_blazing_fast_(...

   noibrs - We don't need no restricted indirect branch speculation
   noibpb - We don't need no indirect branch prediction barrier either
   nospectre_v1 and nospectre_v2: Don't care if some program can get data from some other program when it shouldn't
   l1tf=off - Why would we be flushing the L1 cache, we might need that data. So what if anyone can get at it.
   nospec_store_bypass_disable - Of course we want to use, not bypass, the stored data
   no_stf_barrier - We don't need no barriers between software, they could be friends
   mds=off - Zombieload attacks are fine
   mitigations=off - Of course we don't want no mitigations
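For the curious: on a reasonably recent Linux kernel (4.15+) you can check which of these mitigations are actually active by reading /sys/devices/system/cpu/vulnerabilities/. A minimal Python sketch:

    # Minimal sketch (Linux only, kernel 4.15+): print the kernel's reported
    # mitigation status for each known CPU vulnerability.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        # Each file holds one line, e.g. "Mitigation: PTI" or "Vulnerable".
        print(f"{entry.name:25s} {entry.read_text().strip()}")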


> You absolutely need Spectre/Meltdown mitigation

The article context is HPC - it seems very unlikely Spectre/Meltdown fixes are needed for HPC loads (no browser, no VM sharing).


Not only no VM sharing, but no VMs.


You just leaped to the unfounded conclusion that every computer needs to run untrusted JavaScript. The vast majority of computers have no such need. Especially in a performance-sensitive context like buying a CPU for your database server that lives in a datacenter and never runs a web browser, you will certainly disable these security things.


Disabling Spectre/Meltdown mitigations on a server especially would be the dumbest thing imaginable. These attacks are not restricted to JavaScript, they are side-channel attacks and have a huge exploitable surface. Just look at the number of patches for the MS SQL server needed to mitigate:

https://support.microsoft.com/en-us/help/4073225/guidance-pr...


No, it is NOT the dumbest thing imaginable!

If your server is not reachable from untrusted sources, and you are running only what you trust, it is prudent to disable the mitigations and make use of the extra performance.

Example: I run many compute nodes (private network only) in my HPC cluster with mitigations off. If an attacker could reach the compute nodes over the network to push the attack payload, I've already been so heavily compromised that these attacks are just insult to injury.

If you run a database server that can never be reached directly by an attacker, you may spend your time watching your application server and let the DB server use the extra oomph.


That's a fascinating take on internal security.

Naturally, everyone must determine their level of risk aversion and take the steps they feel most prudent. I've not heard this perspective before. Thank you for sharing!


It's basically the historical approach to enterprise security: secure the perimeter, and don't worry about the intranet. It's still hugely popular in enterprise IT. Cryptolocker/Wannacry bit these IT departments hard and they've sort of slowly learned some lessons, but there's huge inertia, low budget, and these things are changing glacially.


No, I think you're mixing things up; these are two distinct things: 1) the old "M&M" approach of securing just the perimeter, which is basically asking for trouble, and 2) doing a proper risk assessment and choosing performance over security for one particular setup.


Scroll down to the table and the many paragraphs in which they describe the various scenarios in which the various mitigations are recommended. That wouldn't be needed at all if not having them was always 'the dumbest thing imaginable'.


Can you point out where Microsoft recommends that you do nothing besides if the user is using "Azure SQL Database and Data Warehouse" (and that's because mitigations are already deployed on their cloud services)?

   Microsoft has deployed mitigations across all our cloud services.
https://docs.microsoft.com/en-us/azure/virtual-machines/wind...


If your threat model is "all the threats all the time" then I assume you also wear a helmet when walking down the street.


> You just leaped to the unfounded conclusion that every computer needs to run untrusted JavaScript.

You're right about that...

> The vast majority of computers have no such need.

You're probably wrong about that. Every, no. Majority or substantial minority? Yes. There are a ton of consumer computer devices that act as web browsers, and smartphones absolutely count here.


I would think the "vast majority" of computers are used interactively by people using Web browsers. Surely the number of servers in the world doesn't massively outnumber clients, and most of the clients browse the Web?


The only applications that will disable these mitigations are maybe home power users or certain niches where access to the system is always trusted. For everyone else, there's performance stealing mitigations.

If you or your employer doesn't apply mitigations I certainly wouldn't go around bragging about it. Let it be a deep, dark secret until the next hw upgrade cycle. Otherwise you'll get pilloried and/or sued for any data breach.


Is there a working JavaScript exploit for Spectre/Meltdown? I thought it was just hypothetical.


There was a proof-of-concept included in the whitepaper:

https://spectreattack.com/spectre.pdf

That's a working exploit, not a hypothetical.


No, that's not a working exploit, as is said in the paper:

> Chrome intentionally degrades the accuracy of its high-resolution timer to dissuade timing attacks using performance.now() [62]. However, the Web Workers feature of HTML5 makes it simple to create a separate thread that repeatedly decrements a value in a shared memory location [24, 60]. This approach yields a high-resolution timer that provides sufficient resolution.

They did it on Chrome 62.0.3202, but SharedArrayBuffer was disabled in all major browsers right when spectre dropped. Chrome enabled it again in Chrome 67 if Full Site Isolation was enabled. Full site isolation is now enabled by default afaik.

So no, that is in no way a working exploit in any major browser I know of. There is no known Spectre (or other variant) exploit that works in a default, updated major browser (and there never has been, as far as I know), but feel free to link any PoCs.


You're right. When the white paper was announced, it had been pre-released to specific entities so they could work around it. The fact that the exploits have since been patched doesn't remove the fact that they were a valid exploit.

Was heartbleed not an exploit because it's since been patched? Or the thousands of DOS TSR viruses? What is even your point or logic here?


All the JS engines have been patched. As far as I'm aware there are no known spectre or meltdown exploits on any of the major JS engines.


A proof-of-concept is not a real world exploit. The PoC operates under perfect conditions.


No. It's almost unexploitable in the real world. An insanely stupid overreaction that we're all being forced to pay for. Security theatre for the post-Web 2.0 age.


Appliances might be a better word than applications. If you've got a single privilege level (and that itself isn't a security design flaw), yeah, the info leak isn't a problem. HPCish environments are the obvious example.


The chips should be considered defective and "not as advertised" without spectre and meltdown in place.

They fucked up, and are trying to pass off the slowdowns as some sort of Faustian choice.


And if anyone from AMD is reading this: I'm truly awaiting AMD's NUC in multiple price ranges of 150-700€, if possible :)! I'm currently building a DIY RPi 4 cluster for a microservices/Kubernetes setup and would like to replace my DIY home streaming setup.

http://www.fanlesstech.com/2019/10/exclusive-amds-nuc-is-com...


Same. In an effort to minimalize, my work PC is a monitor with a NUC sitting under it. No bulky case, wires everywhere, etc. I've been trying for two years to find an AMD equivalent to no avail.


You can get a VESA mount Mini-ITX[1] case with a current-chipset (X570, etc.) motherboard, as well as all the other Mini-ITX options like monitor-mount base cases[2].

1. https://www.amazon.com/Antec-ISK110-VESA-U3-Mini-ITX-Compati...

2. https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcQQB9...


Thanks for the links. This is significantly larger than a NUC, but still manageable. That said, part of what I appreciate about the NUC is that it's all built together...no weird hardware/compat issues, etc.


>That said, part of what I appreciate about the NUC is that it's all built together...no weird hardware/compat issues, etc.

Has something changed in the laptop world? Because you're describing issues frequently heard about laptops and their screenless cousins, NUCs.


The weird hw/compat issues tend to be OS-related in my experience. I checked a few of these boards and the integrated graphics tended to be Radeon, but I think it'd really only matter under Linux.


Frequently bought together section on Amazon is probably exactly what you wanted ^^



And if you're reading this, I'm waiting for a very decent mATX PCIe4 board for EPYC, please make it happen!


Check out the EPYC 3251. I'm getting ready to do a home buildout with KVM VMs.


I don't think we'll see socket SP3 Epyc on mATX for a few reasons. The chip itself is huge, and you'd have to throw away a lot of PCIe lanes and DIMM slots that couldn't fit on the motherboard. Supermicro's ATX boards already halve the number of DIMMs that their proprietary form factors or Gigabyte's eATX boards hold.

*Edited to clarify I meant socket SP3 boards rather than embedded.


I've been waiting for that as well, and while the hyped models from the usual cheap brands are unobtainium, HP and Lenovo have quietly kickstarted the sector with the Lenovo M715q Tiny, HP ProDesk 405 G4 and HP EliteDesk 705 G4.

The prices could be better however.


Yes, please! And if possible, a lower frequency, higher thread count processor version plus a low core count, high frequency processor one.


Can I ask anyone from AMD to sell a 4-socket Epyc mobo? I'm looking for a 256-core (4*64) desktop PC.


Offtopic:

I was looking for a decent AMD replacement for an Intel NUC and didn't find one at the time, and wanted to suggest a new niche in the previous comment (I looked for it a month or so ago). I only found out after my comment that one was coming and didn't want to remove my comment ;). I'm truly interested in an AMD replacement for my current slow (and cheap) Intel NUC.

:)

And uh, looking at the top comments, I'm not alone, sarcasm aside.

Ontopic:

If you order enough of them, they can assign a custom team for you! You could earn money like crazy launching this on the market!

https://www.amd.com/en/products/semi-custom-solutions

Build your dream!


Desktop? Not only will that be huge, but I doubt such a system would work within the typical 1500-2000W power budget of a household electrical run.


Rome is nominally 225W/socket on the very high end of the price range; a 4P config puts you at 900W for the CPUs. Depending on other peripherals you might be at 1200-1500W total but I don't think you necessarily go much past that unless you're loading it up with power hungry graphics cards or something. You'd definitely want an efficient power supply!

You're right to express some concern -- practical residential limit for continuous load on a perfect incarnation of the typical residential circuit is something like 1440W total (80% of nominal 15A breaker on 120VAC).

Maybe more realistic mid-range products are 155-180W/socket, and that drops the CPU draw down to 620-720W.
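A quick sketch of that budget math in Python, using the nominal figures above (ballpark TDPs plus a rough allowance for the rest of the system, so treat the output as an estimate only):

    # Ballpark power-budget check for a hypothetical 4-socket build,
    # using the nominal figures quoted above (not measured numbers).
    sockets = 4
    tdp_per_socket_w = 225          # high-end Rome TDP
    other_draw_w = 400              # rough allowance for RAM, storage, fans, PSU losses

    cpu_draw_w = sockets * tdp_per_socket_w          # 900 W
    total_draw_w = cpu_draw_w + other_draw_w         # ~1300 W
    circuit_limit_w = 120 * 15 * 0.80                # 80% of a 15 A / 120 V circuit = 1440 W

    print(f"total ~{total_draw_w} W vs. circuit limit {circuit_limit_w:.0f} W")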


Wait, in the US you use 15A at 120v? Here in Europe (Spain at least) the typical is 16A at 240v. 1500W isn't that much of a problem. Oven breakers are even 20 or 32A.


> Wait, in the US you use 15A at 120v? Here in Europe (Spain at least) the typical is 16A at 240v.

Yep! The typical US residential circuit is only 120V with a 15A breaker (so 12A at 80% load). (And that may be only 110-120V.) 20A circuits are also fairly common, but they're not the most common; the majority of wall outlets in a US house will be 15A.

In this chart, https://en.wikipedia.org/wiki/NEMA_connector#/media/File:NEM... , the commonly used US residential sockets are NEMA 5-15 (labelled "Typical Outlet," for 15A breaker circuits), NEMA 5-20 (for, duh, 20A breaker circuits), and NEMA 1-15 ("Old Outlet," in older buildings).

> 1500W isn't that much of a problem. Oven breakers are even 20 or 32A.

We do have higher amperage circuits for appliances like ovens and clothes driers, but they're usually dedicated circuits and have different socket shapes. You can see the labels "Clothes Dryer" and "Electric Oven" in the chart linked above :-).


Computer PSUs are generally more efficient at European electrical standards. I read many a thread two years ago about cryptominers in America wiring our heavy-appliance 220V lines into their racks to approach the efficiency that PSUs can reach at higher voltages.

And of course some datacenters just wire DC to the whole shebang instead of having an individual (well, two redundant) PSUs per server.


> 4 socket, epyc mobo

AFAIK, there aren't any; Epyc only has 1-socket (with 128 PCIe lanes) or 2-socket (using 64 or 48 of the 128 lanes to connect to the other socket) options.


Yeah, it's architecturally infeasible on the current platform, nevermind that a single rando on HN's Tesla-priced "desktop" isn't a market segment worth pursuing.


What will you use that many cores for?


Running the Slack client?


Slack client is single-threaded.

However, 'chrome' would be valid. :)


Haha I must admit I have run Slack inside Firefox for the past 3 years myself.


In photogrammetry, at least, some programs have code paths that run concurrently: part heavily parallelized for the CPU, and part heavily parallelized for the GPU. I built a monster system with dual 1080s when they first came out, and my i7 became the bottleneck.

There are algorithms that haven't yet been optimized for GPU parallelization. And there are datasets that require memory access patterns that aren't amenable to the GPU.

That said, now I'm curious what a 128-core system would do to memory access. Better hope the algorithms are L2/L3 coherent?


Encoding multiple video streams in real time simultaneously.


Probably can't get enough memory bandwidth on a motherboard for that to be practical


How do you plan to optimize for the increased NUMA overhead?


And please have at least two server-quality 1Gb NICs. :-) My home router needs an upgrade.


Even Mac minis come with a 10gig NIC. Gotta future proof, right?


10G stuff still carries a substantial price premium, unfortunately. And if you've got existing Cat 5/5e cable (esp. in the wall), now you've got to replace the cable, even for short runs. I don't find 1Gbps hugely limiting myself at this time, or at least, not enough to push me to the extra spend on 2.5-5 or 10Gbit hardware. (Also, we've got 5e in the walls, no conduit, and that stuff is never getting replaced.)


> Cat 5/5e cable (esp. in the wall), now you've got to replace the cable, even for short runs.

Not always. I have a working 10GBASE-T link over about 50 feet of Cat 5e, going through two couplers too. No frame loss.

Probably depends a lot on the quality of the cabling and the noise environment.


And then there's NBASE-T, which takes care of situations where the cabling isn't up to scratch, scaling bandwidth accordingly.


Maybe they could include NBASE-T nics that’d also support 2.5 and 5 Gbps speeds?


That would be great too!



Charlie doesn't mince words.


There's often a long lead time between writing up benchmarks/white papers and publication. At my last employer it could take well over a month to let everyone have their 2 cents and get through Legal for approval. This is a long enough interval to explain the old/new GROMACS usage.

As the saying goes, don't attribute to malice that which can be adequately explained by stupidity (including corporate bureaucracy).


The author pointed out that they published benchmarks using the same test that had not been optimized for AMD yet. But they included that information in the publication, including on the graphs that would otherwise appear to show the outright superiority of Intel in that regard.

If the publication was delayed to allow time for due diligence then this disclosure should have made the cut. I like to think that Intel had some top-notch benchmarking nerds who were superseded by unscrupulous executives and would not have published a dishonest benchmark otherwise.


Also, the difference in sub-NUMA configuration actually favors AMD as well. NPS=4 is optimal for AMD, as shown by ServeTheHome's own work earlier. Both systems are tested in their proper (fastest) configuration.

Despite ServeTheHome's own previous work showing this, they're whining about it being different. But if Intel hadn't tested AMD in the proper configuration they would have complained about that too.

Yeah, you take first-party benchmarks with a big grain of salt like always, but STH is just looking to stir some shit here.


> Both systems are tested in their proper (fastest) configuration.

AMD is running at half the threads it can. I'm pretty sure that's not its proper nor fastest configuration.


Oh really, one thread per core is the optimum for an EPYC server? I don't know why AMD cares about hyperthreading then.


We're not talking about the general case. One thread per core might be optimal for that specific workload.


BTW turns out that was a typo and they used two threads per core anyway.


I don't see where I said one thread per core was optimal for Epyc or not.


Unless that company is well-noted for their malice.


That's not to deny the possibility of both. :)


Haha, true enough. I'm sure the truth in this case is somewhere in the middle. Reading any marketing material should always be done with a healthy dose of skepticism.


The value proposition with AMD's current offerings is staggering. I didn't know computers would be this good this soon.

The one thing I'm not pleased with is the general poor experience with integrators, on both platforms. Who told them they shouldn't put prices on gigabyte.com? I see the actual price on a distributor's site and I'm nothing but pleased.


Sidebar question: Would anyone be interested if we adapted the TechEmpower benchmarks [1] to provide a composite score for the hardware environment? This could, in theory, give some insight into the capability of server hardware with respect to traditional web API-style requests.

[1] https://www.techempower.com/benchmarks/


That would be extremely interesting, but at least in the short term I'd rather see more bandwidth so we can see the top 10 stretch their legs :D


Sky blue, water wet. I have seen this headline written in 1996, 2002, 2006, 2018...


The long interval between 2006 and 2018 makes it easy to forget.


Yes, synthetic benchmark results in general are to be taken with a grain of salt; they leave a lot of potential for things like this, where you can easily produce the result you desire rather than what really is. There is a big question of how any of this translates to what your workload needs/does.

We just bought our first AMD Epyc server for our HPC cluster. I can't wait until next week to test it against our Skylake nodes: a 1-socket 16-core EPYC CPU vs. 2x Xeon Skylake. I already have a suite of our internal workloads to hammer them with; one of them is a key piece of in-house software usually compiled with the Intel compiler and Intel MKL for performance reasons. Really curious how it will do with the EPYC CPUs.


Really curious as well, will you please do a write-up if you can make the results public?


I can definitely make the results public as it is a server we purchased to test before a bigger purchase.

I will try to do some basic perf tests as well along with our workload specific tests.

I plan to share the results as a blog post here: https://aravindh.net


The article is more or less BS.

For one, the claim is that Intel misleads intentionally. Given the timeline there's no particular reason to think there is something intentional here. The AMD-optimized version of GROMACS just came out ~5 weeks ago.

Second, while this is misleading in a sense, it's in the way that benchmarks are all generally misleading. By their intrinsic nature they don't tell the whole story. (And of course marketing departments cherry-pick benchmarks to tell the story they want to tell. Also by intrinsic nature.)

BTW, this isn't even all that bad for Intel. The AMD chip might outperform the Intel one, but only if your software has been specifically optimized for it.


They are publishing statements of fact about a competitor. The onus is on them to check those facts, and companies should be held accountable for the things they say - even in advertising


I wonder how much influence these benchmarks have in terms of (a) promoting Intel's performance and (b) eroding trust and confidence in Intel


Remember when 3DMark presented superior performance if you changed the vendor ID on a VIA chip to GenuineIntel?


I'm shocked I tell you, just shocked.


There are small and mid-sized companies that don't have people brave enough to choose AMD. I know, I worked at two of them. There, nobody has the time to think about AMD and to swallow and explain any issues that might arise.

And frankly I've never seen AMD trying to reach those people.




tl;dr: using outdated benchmarking tools optimized for the latest Intel chips and not the latest AMD chips. GROMACS 2019.3 v. 2019.4, the latter of which is the latest but still came out over a month ago and addressed AMD Zen 2.

Source is still a good read for explanations on why this is impactful.

---

Disclosure: I'm long AMD


They used the Intel compiler. We know for a fact Intel’s compilers deliberately don’t optimize as well for non-Intel chips.

That’s damning in and of itself.


I guess we're better off with Phoronix then, as they're more independent, and benchmark with a myriad of different compilers and OSes (I'm not aware of any software on my Linux AMD64 machine which was compiled with ICC).


That is a little disingenuous. The whole point of Intel's compiler is to support Intel chips well. Results on other chips are what they are. No effort to make other chips perform either better or worse is expended. Keeping up with Intel chips is more than a full-time job without other distractions.

The question to ask is whether or not they used the best compiler/switches available for the other chips.


>No effort to make other chips perform eiher better or worse is expended.

This is not true. Intel compilers cripple non-Intel CPUs on purpose: they generate code that literally checks whether the CPUID vendor string is equal to "GenuineIntel", and if it isn't, it executes the less optimized code path [1].

[1] - https://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler#Recepti...
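In other words, the dispatch is keyed on the vendor string rather than on the feature flags the CPU actually reports. A purely illustrative sketch of that pattern (hypothetical names, not Intel's actual runtime):

    # Illustrative sketch of the vendor-string dispatch described above.
    # Not Intel's actual code; the names are hypothetical.
    def choose_kernel(vendor_id: str, has_avx2: bool) -> str:
        if vendor_id == "GenuineIntel" and has_avx2:
            return "avx2_path"      # vectorized fast path
        return "baseline_path"      # slower generic path, even if AVX2 exists

    # Dispatching on feature flags alone would ignore the vendor entirely:
    def choose_kernel_fair(has_avx2: bool) -> str:
        return "avx2_path" if has_avx2 else "baseline_path"

    print(choose_kernel("AuthenticAMD", has_avx2=True))   # -> baseline_path
    print(choose_kernel_fair(has_avx2=True))              # -> avx2_path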


> No effort to make other chips perform [...] worse is expended.

I don't know about nowadays, but it used to be that the intel libs and compiler would specifically disable SIMD instructions if they found that the CPUid returned a non-"Genuine Intel" string. Doing nothing - and just relying on the CPU feature flags - would have been more fair.


Intel is the one who defined feature presence flags in the CPU, but then doesn't use them for their compiler.


> That is a little disingenuous.

Nonsense. Please do some research before you comment.

In the past Intel would build the benchmarks with commonly available compilers (Intel, GCC, Clang) and then select the best benchmark results, e.g. if GCC gave the best numbers on AMD chips they'd use GCC to compile the tests on AMD. That was a fair comparison.

They deliberately changed to using their own compiler specifically to disadvantage AMD in this comparison.


This is a debatable topic. The generated code could check cpuid and use whatever instructions are available but AFAIK it would check the vendor instead and not use SSE/AVX on non-Intel processors that support those instructions.


Benchmarks have been built with icc for something like 30 years.. it was already shocking when we realized that back in the 2000s.


I wondered why my PC never wants to turn the display on with AMD Pro; it cost me buying a new laptop.


Intel constantly does this. They play dirty when AMD is consistently kicking their ass.


I guess this is what Intel has to do at this point. ¯\_(ツ)_/¯


Don't they do this every time they release new CPUs?


Intel has a long history of price gouging coupled with nefarious practices - like this fakery. When did they turn to the dark side? 20 years ago it started with their attempts to kill AMD.


gamed benchmarks? in other news, ocean is wet.


Why? Is it to guard against AMD?


A word in the title needs a spelling correction, please :)


Fixed, thanks


Pro tip: generic benchmarks, in general, are misleading.


All benchmarks are misleading.


The map is not the territory.

All models are wrong, some are useful.

Blah, blah, blah.



Could you expand on that please


The fastest animal on earth is the fastest not because it has huge muscles, but because it falls. Is it fair to compare the top speed of a falcon to a cheetah? Yeah, but... x, y, z.

Every benchmark can be set up in a way that will give edge to one part or another. Especially if there is a vested interest.


And even in your example, the Cheetah can hit 65mph for a whopping 330 feet before risking total exhaustion. Meanwhile Pronghorns, Impala and Antelope can reach 55mph-60mph and maintain it for about half a mile, and vary their speed to increase distance.

So which would you consider faster? The one that can give you a few seconds of effort really quickly; or the one that gives you 90% of that effort but for hours?


The search space has many, many dimensions--CPU, MMU, OS implementation details, how well the compiler translated source to machine code for that particular CPU, how well did the benchmark implementation cater to the target architecture, etc, etc.

Immense opportunities to fool others, and even yourself.


All benchmarks are chosen to show precisely what the person presenting the benchmark wants them to, no more, no less. If the benchmark didn't tell a story that they want to show you, they wouldn't be showing it to you (or they'd be showing you a different benchmark).

This doesn't make them wrong, but benchmarks are never the full story.


Not if you designed a benchmark for your special requirements.


It still is misleading, because your special requirements will not perfectly match your benchmark.


That's true, but like many human endeavors we can reach a point that is good enough, or the process of implementing the benchmark leads to useful discoveries. It is not going to be perfect for all situations, but that doesn't mean it's not a valuable tool to use in some situations.


Or your requirements will change in ways that you didn't expect, invalidating your benchmark, or ...


they might, if your benchmark IS your special requirements.


...but some are more [obviously] misleading than others.


s/publises/publishes

(Bat signal @dang or anyone else)

Edit: thanks!


[flagged]


But you'll comment in it? Spite mechanics deliver lackluster results.


feedback instead of anonymous downvotes? crazy!



