Intel brings a six-core i9 CPU to laptops (anandtech.com)
238 points by jpalomaki on April 3, 2018 | 190 comments



There appears to be no LPDDR4 support in these processors, which seems like a major omission:

https://ark.intel.com/products/134903/Intel-Core-i9-8950HK-P...

Which means they'll be stuck using LPDDR3-2133 memory with the same bandwidth as previous generations in any power conscious design.

For example, Apple's MacBook Pro line continues to use LPDDR3-2133: https://www.apple.com/macbook-pro/specs/

I wish AMD would add LPDDR4 support to their mobile chips, if only to light another fire under Intel to have feature parity like they did with core count.


I don't understand how long this bullshit will continue. At the moment your average phone can technically support more than 16GB of RAM (LPDDR4), but Intel's laptop CPUs can't. Is there some kind of massive obstacle that we don't know about?


Intel is just behind; Cannon Lake, which supports LPDDR4, is now ~18 months late.

LPDDR4 also uses a different physical signaling scheme than LPDDR3 so it's not a minor change to adapt current processors, if it is even possible without a packaging change.


Technical management failure in how to mitigate late or stalled dependencies; the fallback should have been to add LPDDR4 memory controllers to existing chips.


That seems plausible to me, but isn't memory pretty tightly integrated into the design of a modern processor? How would you be able to take advantage of the additional RAM?

Second question: do memory controllers exist that could handle the bandwidth of existing designs?

It seems like something that would be extraordinarily expensive, and there wouldn't be much of a benefit considering Intel doesn't have competition on this front.


DRAM controllers used to be on the motherboard, not the CPU. That changed for AMD in 2003 and Intel in 2008, but it is not at all interwoven with the CPU core itself. Processors within the same product line with varying core counts use the exact same DRAM controller blocks. This is really obvious on the Intel server chips where the CPU cores are linked by a ring or mesh bus and the DRAM controllers are just another stop at the far edges of those interconnects.

Intel can update their DRAM controller independently of the CPU core microarchitecture, just like they can update the integrated GPU microarchitecture independently of the CPU core microarchitecture. The only reason why they might not be releasing a CPU with LPDDR4 yet is if they never expected to need an LPDDR4 controller before their 10nm process was ready, and never started designing a 14nm LPDDR4-capable controller. If so, that's a clear miscalculation on their part and a sign that the processor architects are probably insufficiently skeptical of what the fab guys are telling them.


That makes a lot of sense. Thanks for the explanation


Probably not trivial though, and I would not be surprised if many of these have already been taped out and sampled. In general, a modern DDR3/DDR4 memory controller is not a trivial thing to design.


> Is there some kind of massive obstacle that we don't know about?

Intel's profits?


Maybe Apple's. LPDDR4 is surely more expensive.


Possibly another reason Apple may be looking to do their own laptop CPUs?

Six to eight iPhones "glued together" would be somewhat appealing.


Intel's laptop CPUs do support more than 16GB, just not more than 16GB of low power RAM. You can even get a Lenovo P51 with a mobile Xeon and 64GB ECC RAM.


All common Intel laptop CPUs (Core ...) don't. The Xeons are the sole exception and are only available in a few workstations.


The obstacle is lack of demand, the average user still has 4GB or less of RAM: https://hardware.metrics.mozilla.com/#goto-os-and-architectu... . People that need more than 16GB (as of today) are an incredibly niche market and can be better served by remoting into a desktop/server anyway.


Just download more ram...

https://downloadmoreram.com/


The ARK page says the CPU can handle 64GB. LPDDR4 memory isn't required when you're going to be sucking as much power as this chip will.


I bet that this is based on laptops with four SO-DIMM slots, not 16Gbit DDR4 chips.


With super fast SSDs, does anyone really need 32GB of RAM?

I don't even shut down my multiple JetBrains IDEs and gazillion browser tabs or bloated Slack when I take a break and pin the CPU with Ableton Live and a bunch of soft synths on my MBP. Nothing skips a beat.

Is anyone seriously running into issues with only 16GB of RAM?


Oh yeah! When I showed up at my current employer, I had a laptop with 32GB and two PCIe SSDs in RAID 0. Almost immediately, I had to upgrade to 64GB.

I'm a data scientist and regularly work with multiple datasets simultaneously, which requires that much RAM. Both Python and R rely on in-memory processing. Loading on/off disk is substantially slower and does not fit with what I am trying to do. For really large datasets I also have a 28-core Xeon with 196GB that I can remote into, but it is nice to not have constraints on my laptop.

Of course, you could go with Hadoop or Spark to process some of these datasets, but that requires quite a bit of overhead and it's easier (and cheaper) to just buy more RAM.


Same story for me, give or take a few percent. I have to recommend dask for python though, it made out of memory errors largely disappear for me. It allows parallel processing of disk-size datasets with the convenience of in-memory datasets (almost).
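
For anyone curious, here's a minimal sketch of what dask gives you (the file pattern and column names are made up for illustration):

  import dask.array as da
  import dask.dataframe as dd

  # Array interface mirrors numpy, but the data is split into chunks
  # that are processed out of core / in parallel.
  x = da.random.random((100_000, 10_000), chunks=(10_000, 10_000))
  col_means = x.mean(axis=0).compute()  # .compute() triggers the actual work

  # DataFrame interface mirrors pandas; "events-*.csv" is a hypothetical
  # set of files that together can be far larger than RAM.
  df = dd.read_csv("events-*.csv")
  per_user = df.groupby("user_id")["value"].sum().compute()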


Holy cow, dask looks useful


The most impressive part is how easy it is to use. Basically just a reuse of the numpy API.


They replicate the Pandas API, not the numpy one. Just for clarity's sake.


What a peculiar comment. Their homepage has examples of both - with Numpy first.


Really? It's been a while since I've used it, and I remember a good portion of the documentation talking about how they replicate some, but not all of Pandas' API's (because of the sheer number of them).

They've probably updated it since then...


yup it really is


I'm a C++ programmer working in games and I run out of 64GB of ram in my workstation daily. I can't wait until we finally get all upgraded to 128 or 256GB of ram as standard.


Well, that's why we consumers have to buy better hardware and more RAM? As an old former Amiga programmer I have always, to this day, been a less-is-more kind of guy: make code run faster and make the program use less RAM.


Good in theory unless you need all of the data at once. There are things we do now that wouldn't have been possible (in the same sense) 25 years ago without a lot of work. We might use languages that are 200x slower, but they might be 10x more productive. That's a winning tradeoff for many people.


Nope, it has nothing to do with what you as a customer get as a final product. Loading the main map of the game uses about 30GB of ram in the editor + starting the main servers in a custom configuration will use that amount again. Systems like fastbuild can use several gigabytes when compiling. None of this has anything to do with the client, which will run with as little as 4GB of ram.


Buy a 2 x Xeon workstation with 768GB ram?


Once your datasets go out of the bounds of single reasonable machine, it's time to switch to Apache Spark cluster (or similar).

You can still write your data analysis code in Python, but you get to leverage multiple machines and intelligent compute engine that knows how to distribute your computation across nodes automatically, keeping data linkage and parentage information, so computation is moved closest to where data is located.
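
As a rough sketch of what that looks like from the Python side (the master URL and input path below are placeholders, not anything specific):

  from pyspark.sql import SparkSession

  # Point the session at a standalone cluster master; "spark://master-host:7077"
  # is a placeholder for whichever machine you designated as master.
  spark = (SparkSession.builder
           .master("spark://master-host:7077")
           .appName("bigger-than-one-laptop")
           .getOrCreate())

  # The dataset never has to fit in any single machine's RAM; Spark splits the
  # work across executors and moves the computation close to the data.
  df = spark.read.parquet("hdfs:///data/events/")
  df.groupBy("user_id").count().orderBy("count", ascending=False).show(20)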


You know, sometimes you are in that uncomfortable spot where you have too much data for a single laptop but too little to justify running a whole computing cluster.

That is the kind of spot where you max out everything you can max out and just go take a break when something intensive is running.


This - honestly depending on the task hundreds of GB can be still the "single computer" realm because it's just not worth it to set up a cluster in terms of time and money and also administration overhead. However parallel + out of core computation doesn't necessarily imply a cluster: single-node Spark or something like dask works fine if you're in the python world.


Setting up an ad hoc (aka standalone) Spark cluster with a bunch of machines you have control over is a ridiculously trivial task though. You start the standalone master on one machine, start the workers pointed at its spark://host:port URL, and then just submit jobs to that master (spark-submit --master ...) and that's all.


Spark is slow though. On the other hand, Pandas is also extraordinarily slow :D


Then you remote into a workstation as some one else in this thread said they did.


Running distributed like that always has a cost, both in inefficiency of the compute and in person-time.

If you still can run on one machine, it's almost always a win. 32GB is a perfectly reasonable amount of memory to expect. 64GB isn't outlandish at all for a workstation.


Really depends on the computation. What you say only makes sense for some niches of computation.


What laptop are you using? I think my ThinkPad W540 is capped at 32GB.


Thinkpad P50


Cloud is an option for really large memory requirements. You can provision machines with nearly 2TB of RAM in AWS, and it's pretty cost-effective if you only spin them up when you actually need them.
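
If you go that route, spinning one up and tearing it down is only a few lines with boto3 (a rough sketch; the AMI ID is a placeholder and the instance type is just one of the large-memory options):

  import boto3

  ec2 = boto3.client("ec2", region_name="us-east-1")

  # x1.32xlarge has roughly 2TB of RAM; "ami-12345678" is a placeholder AMI ID.
  resp = ec2.run_instances(
      ImageId="ami-12345678",
      InstanceType="x1.32xlarge",
      MinCount=1,
      MaxCount=1,
  )
  instance_id = resp["Instances"][0]["InstanceId"]

  # ...do the big-memory work, then shut it down so you only pay for hours used.
  ec2.terminate_instances(InstanceIds=[instance_id])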


My minikube dev environment (microservices with lots of independent databases) can be crammed into about 24GB of RAM and mimic our production environment almost 1:1; there are a number of different databases (Couchbase, Elasticsearch, Redis, RabbitMQ, etc). If a developer is limited to 16GB they have to run a more crammed and far less similar dev environment. Instead of using one database per service like we do in production, we have to cram multiple services into one database (say Couchbase with buckets for each service). I can chunk this down to use 13-14GB, but if the user has 16GB max that means they're left with 2-3GB of RAM for their IDE, Chrome, Spotify, etc.

It's severely limited our freedom of movement with the dev environment, and we're constantly fighting to stay within that 16GB spec. Do we deviate heavily from our staging/prod environments?

What's the sweet spot? A bunch of us have built hackintosh desktops at this point so we can have 32-64GB+ and more cores, so we're not constantly fighting resource contention with all of the Docker containers we need to run.


It is really irritating that Apple doesn't offer 256GB iMac Pros given the current 64GB LR-DIMM prices (and it being officially supported by Intel), and even the current XNU kernel can handle 252GB.


While not a true solution because you need to break open your computer, the newest iMac Pro has socketed DDR4. iFixIt’s teardown[0] suggests the max it’ll support is 4x32 GB for a total of 128 GB.

[0]: https://www.ifixit.com/Teardown/iMac+Pro+Teardown/101807


Fast SSDs are still orders of magnitude slower than RAM access - SSD latency is on the order of 10s of microseconds, while RAM access latency is on the order of 10s of nanoseconds.

You're in luck because your working set happens to fit into RAM, and the rest is written to swap out gracefully and doesn't pull itself back into memory. But as soon as you're actually working with more than 16GB of data at once, you're in trouble.
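
Back-of-the-envelope, using the rough figures above (illustrative numbers, not benchmarks):

  ram_latency_ns = 50       # RAM access: ~10s of nanoseconds
  ssd_latency_ns = 50_000   # fast SSD access: ~10s of microseconds

  print(ssd_latency_ns / ram_latency_ns)  # ~1000x slower per access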


Yes, for devs with VMs for compiling and testing like my team has, 16GB is not enough. Thus we are moving off MacBooks. You can argue that you shouldn't do that on a laptop, you should have a desktop that you always connect to, but that's not how we prefer to do it. If there was no laptop with more than say 8 gig we'd probably have gone that way. 16GB is almost enough.


Why did you prefer laptops to a desktop machine, which would be bigger and cheaper in bang-for-buck terms?


Because people wanted to take them around. At first, I think the dev system fit in 16 gig macbook pro without any trouble and it was a startup, so it was just natural to say that's all you need. Then the dev env started getting a little bit bigger.

I showed up at the company and noticed you couldn't run the vm for a test and compile separately at the same time. Next I found out I couldn't have a bunch of tabs open while doing dev. So that was painful, and I got a desktop. The existing people who were used to how things worked said you shouldn't have a bunch of tabs open, don't do that and it works fine (oh and don't run any tests while compiling).

Then as memory use kept climbing, even the at-most-4-tabs people found they kept running out of memory, so they bought a few 32 gig laptops and suddenly things worked again.

A few of us have desktops, most people are struggling with 16 gig notebooks and they started buying 32 gig notebooks for people that want them.


"wanted to take them around" but did they "need" to carry them around.

Where they using them for their own use out side of working hours? not usually a good idea you want to keep your personal device use separate from work equipment.


We all have laptops because:

  - most devs are in the on-call roster for their team's systems.
  - it's really handy to be able to bring a machine to a meeting, or to go and sit with people from the team that make the API you're trying to use.


Some of us work for companies that don't require us to be in the office when we work, so we end up working at coffeeshops, at the beach, home, etc, despite having an office.


Sure we do. Try running multiple VMs or a few Selenium tests and you can watch the GBs get eaten by the dozen. Not to mention creative software like Adobe's line of products, which are very demanding resource-wise. Load a couple hundred high-res pics in Photoshop and try executing a complicated script and boom, your PC gets hammered. It also matters that most laptops can't fit an advanced GPU, so a lot of processing is handled by the CPU and the RAM.

Of course, we can argue how many of these will be executed from a laptop but there are people who use a laptop as their main rig so I guess every possible scenario is on the table.


Our photogrammetry generation machine has 64GB of RAM and this is much MUCH lower than we want. Most of our models crash when we try to generate them on 16GB and 32GB RAM machines; the workflow for our target quality/size models has 64GB as an absolute floor.

We'd also like a few TB of VRAM as well, but that's another order of magnitude expense...


Might as well burn more points here - these are all niche cases that should be offloaded onto external machines.

I should have rephrased my original question - is any significant share of the market running into issues? Because everyone acts like this 16GB limit is something a huge chunk of people currently run into.


You're asking on HN. Of course we're all going to rush to tell you about our niche cases. In terms of the broader market, you're already a niche case if you're a software developer of any sort.

In my experience it's pretty easy to run up against the 16 GB limit on the MBP if you're running Slack, a browser, a couple of IDEs, and Docker.


Kinda funny that you had to include Slack as a primary memory hog on a dev workstation.


Makes sense though. Slack and other electron apps are basically running their own isolated browser instance, so they duplicate all of the baseline memory needs of a browser plus the memory of their actual content.


Electron isn't the problem, Slack is.


Eh. To give a comparison, Kate (the KDE default text editor and a decently looking one at that) uses <1M when freshly opened and <100M when a 2 kLOC (~84 KiB) C file is opened (with a decent number of plugins).

Meanwhile, Visual Studio Code uses ~400M when freshly opened and ~550M with the same file (with similar plugins where available). Admittedly, VSC offers far more functionality, but the memory increase is still sizable.

I know that those (Slack and VSC) are vastly different programs with vastly different purposes, but even a minimal Electron app is going to have ~100M baseline memory, which is going to be used again with each and every Electron app that gets launched, in addition to the runtime overhead.

A common response to this is that "RAM is there to be used", but that RAM would have been used anyways for caching (which would have increased overall IO performance across the system) if these apps didn't hog it all. This fact becomes especially relevant when doing tasks that require lots of data (machine learning, compilation, etc).

That being said, I acknowledge that browser-runtime-based apps make it much easier to develop cross-platform applications, a fact for which I am grateful as I run Linux. I think that a reasonable solution going forward would be if Electron (or another similar runtime) offered a way for multiple installed apps to share one running instance. Ideally, of course, this would be offered natively by the browsers themselves, but given the technical hurdles to doing that _safely_, I'd easily settle for the former.


There's also the fact that most of this stuff only uses a small amount of memory in the background, though.


  • Software Development
  • 4k+ Video Editing
  • High Resolution Image Editing
  • 3D CAD 
  • GIS
  • AR/VR
  • Data Science
  • Machine Learning
  • the list goes on…


Probably a niche case, but I regularly analyze data that exceeds 10s of GB, so it helps when that fits on RAM and requires less chunking when analyzing.


> does anyone really need 32GB of RAM?

That sounds an awful lot like the infamous "640 kB ought to be enough for anybody" quote

> Is anyone seriously running into issues with only 16GB of RAM?

Every day.


Here is a good visualization of access speeds for each level of cache, then main memory. Now take the main memory access time and essentially double it (for today's state-of-the-art SSDs) and that's the access speed.

https://www.youtube.com/watch?v=WDIkqP4JbkE&feature=youtu.be...


> With super fast SSDs, does anyone really need 32GB of RAM?

I've used a machine with 8GB of RAM and a swap partition and hard drive cache on the fastest SSD (Intel Optane SSD DC P4800X). Responsiveness still takes a huge hit when processes are actively using more data than fits in RAM.

Fast SSDs can help when you have more RAM than you need but not as much as you'd like, but they don't help when you don't have as much RAM as you need.


"640 K ought to be enough for anybody."


"Is anyone seriously running into issues with only 16GB of RAM?"

Yes, all the time. I wouldn't touch even a laptop with less than 32GB these days, but YMMV with workload. JetBrains IDEs and Slack are a far cry from volumetric image processing or a lot of data science loads.


Agreed. My new build (15 months old now) has 32GB ECC. Apps and VMs consume RAM like nobody's business. I'm glad that I built it before memory prices doubled.


CAD software can use a lot of RAM. Swapping to an SSD is just not fast enough.


People used to say this about 8GB being enough just a few years back.


Scientific dataset processing with R and python frequently pushes up against 16GB. Having more room with 32 or 64 would be fantastic.


Some people store data in RAM not software.


VMs.


That's a silly question. Most people barely use SATA 3 SSDs, and SSDs tend to degrade quite a bit after a year. But even the fastest ones still aren't quite a match for RAM.


Meanwhile, if you're willing to go to DDR4, then e.g. the XPS15 is looking mighty nice: https://www.anandtech.com/show/12605/dells-8th-gen-alienware...

"internals of the XPS 15 model 9570 have been upgraded: [...]a fully-unlocked six-core Core i9-8950 HK."

"Since the new processors support DDR4-2666 memory, Dell will equip its new XPS 15 with 8 – 32 GB of DDR4-2666. "

It even gets 4-lane PCIe Thunderbolt, finally.

(I'm looking at upgrading my original X1 Carbon with a new one in the next couple of years myself so will probably wait for the LPDDR4).


It will suck if we still don't see a MacBook Pro with more than 16GB of RAM because of that.


They've already had a Kaby Lake MBP; there's a good chance they'll wait til CannonLake for the next one, which will.


That, or Apple will just not update it for 20 months to build demand, then use their own chips.


In 20 months it better be 12 cores and 64GB.

I already have a ThinkPad with 32GB that I use for ML and minikube.


I have a suspicion that Apple can beat Intel on most of the metrics typical end users care about (single threaded speed, battery life), and pro users will just be forced to push the work to another rig.

Who knows though, maybe they'll reintroduce the xserve? ;)


Does it have more L2 or L3 cache to mitigate this? Are there any benchmarks where this has a large effect? Is the i9 with 6 cores still faster than the i7 with 4 across the board?

With more cores memory bandwidth does start to become more and more of an issue...


Yeah, those new Intel chips are useless. I can't believe they wasted resources to build something like that. I really need a new laptop, but in the 13-inch format I don't think I can get one with 32 or 64 GB of RAM. Only having 16 GB is a huge problem and makes work needlessly difficult.


So, the laptop processor has a higher turbo frequency than the desktop parts?

Sigh, the same old Intel. They accuse others of selling desktop parts as server cores, but they are the ones that don't have a proper desktop lineup. Rather, they're focusing entirely on power/thermal constrained mobile parts, and then packing as many as possible into a server part. Desktop users get whatever random dies are left over. At least the new "workstation" series Xeons acknowledge that there are users for which single-threaded desktop performance is still important.


>Rather, they're focusing entirely on power/thermal constrained mobile parts

So, where the market is? Sounds smart for a for-profit.

If anything, one could accuse them that they didn't do that enough (e.g. missed on the mobile phone market), not that they did it.


Anandtech.com points out how Intel doesn't release information about the limitations of the boost - primarily that it's very much a single core clock boost, and it's unknown what speeds the multiple cores can increase to. Perhaps on desktop, they can all hit 4.2Ghz, while the mobile chip is limiting them to 3.8Ghz. Of course, those are made up numbers and wild speculation. Until we have comparable benchmarks, the boost speed itself doesn't mean much to us.


Turbo frequency is not everything.

Every benchmark I know of pitting an Intel desktop CPU against a mobile CPU is the same: the desktop wins by big margins.

I'm guessing that the mobile version has fewer execution units.


It's several things. Desktops dissipate more heat, so sustained clocks are possible. Max Turbo is theoretical in most laptops. A decent example of this phenomenon is the m5 vs m7: though the m7 is theoretically faster by double digits, in practice the clocks those systems can sustain are identical (so save $100 by getting the m5).

Let's compare an 8550U and an 8700K. The desktop has a base clock of 3.7 instead of 1.8. It has 12MB of cache instead of 8MB. Twice the bus speed. Faster supported memory and more bandwidth. The desktop also has 16 PCIe lanes vs 4 (both can access more via the chipset, but then there's that bus bandwidth issue). Instruction and feature support is almost identical (except some things like vPro).

Sustained clocks, bandwidth, and more cache make a huge difference.


Very true, trying to evaluate laptop CPU performance is highly deceiving. Unlike desktop parts the actual performance depends completely on the particular laptop's cooling capability, which is impossible to know from looking at the spec. And often that cooling is terrible.

That's why there's such a large gap in performance between high-end desktop and mobile CPUs. The gap seems small if you just look at the specs, but is much larger in practice.


> I'm guessing that the mobile version has fewer execution units.

They don't. The architecture of their mobile CPUs is exactly the same as the desktop versions. (They use the same die.) The difference is in the thermal limits, which determine how long the CPU can spend at each power state.


Another huge difference can be whether the machine has a memory configuration that runs in dual-channel mode or not, as well as differences in cache sizes and BIOS settings for things like cache pre-fetching.


TurboBoost on laptops has always been poop. Take the 3720qm and 3940xm - yeah the latter has a higher TB, but under high load it runs very close to base clock, while the 3720qm runs a bit higher than base... resulting in a minimal difference :/

That's disregarding TDP/TPL adjustments/overclocking, of course (which is disabled by BIOS on workstations anyway).


Yes, I used to have an older 4-core i7 desktop and a newer 4-core i7 laptop, and from reading the spec sheets you would think the laptop should be as fast or faster, but the desktop was much faster.

it was pretty interesting!




The Intel Core i7 laptop series also features 6 core CPUs:

Core i7-8850H

Core i7-8750H

Core i7-8700T

Just not clocked as high as the i9.

I wonder if any of these 6 core laptop CPUs will have the AMD integrated graphics - it appears not at this moment. I was looking forward to that on an upgrade to my Dell XPS 15.

Who is getting the AMD integrated graphics CPUs then? Apple? I really wanted one in my Dell as I was hoping it would be faster and more power efficient than the NVIDIA 1050/1050 Ti.


The model numbers you're looking for are iX-8xxxG.

Dell are doing preorders on XPS15 2-in-1; HP doing same on the 15" Spectre x360.


> Who is getting the AMD integrated graphics CPUs then? Apple?

Doesn't really fit Apple's line. The 13" probably doesn't have the thermal headroom for it at all, and it's not an improvement on the discrete AMD stuff already used in the 15" MBP. Might work for a low-end 15" MBP, but it's not like it's a particularly cheap part, anyway.


The AMD integrated chips have a 100W TDP (up from this 45W), so they're not really in the same league.


WTF. The Core i7-8700T is a desktop CPU with 6 cores / 12 threads at 2.4-4.0 GHz at 35 W for 303 dollars.

The i7-8750H has the same number of cores and threads and is rated at 2.2-4.2 GHz. But it's 45 W and 395 dollars. That's a lot of money and heat to pay for a paltry 5% speedup at the very top -- and let's not forget the non-Turbo speed is 8% lower.


GHz range != performance. Among other differences, the i7-8700T only has turbo on one core.


At least this article doesn't say anything about that. In fact, it presumes all cores have turbo (which would make sense): "At the peak turbo of 4.0 GHz, or for all-cores somewhere in the middle (again, Intel won't specify), the power will obviously be higher."

Later they do some "sleuthing" to quote them and come up with this chart which again indicates turbo across all cores: https://images.anandtech.com/doci/12607/Turbos.png



So, are those chips immune to Spectre and Meltdown already?...


Nope. It's basically the same design that they already had, just with the clock speeds and TDP's moved around a bit.


Are there any real attacks on consumer/prosumer machines using Spectre/Meltdown already?


And has Intel ME been removed from those chips already?...


11 years after the first quad-core laptop hit the US market. Well done, Intel!

https://www.cnet.com/news/first-quad-core-laptop-hits-u-s/


Because there was some contract that cores have to be increased before some end date?

If anything, most cores are underutilized in laptops even today...


Why does it matter if some people are underutilizing cores on their laptop? Mine are constantly pegged (developer running a lot of different heavy processes) and I'd appreciate the ability to pay more for an upgrade. There are many others like me.


Not enough of them (or not paying enough), or there would be a market.

Instead, the major market pressure was on power efficiency.

(My cores are pegged too -- I run DAWs and NLEs -- but me and you are irrelevant in the grand scheme of things, if there are not enough of us. And that's not determined by counting us, but by the emergence of a market).


> All of the new processors and their accompanying chipsets will support Intel's Optane technology. [..] Intel claims the technology helps game levels load 4.7 times faster on the Core i7 8750H

Anyone try out these Optane chips yet and seen a significant difference? This is apparently the other big announcement coming to laptops...


There's no new Optane hardware yet, just a renewed attempt to push Optane caching for mobile use. Intel plans to release cache-sized Optane SSDs that have low power idle states, to replace last year's 16GB and 32GB modules that idle at 1W.

Intel is also updating their Windows drivers to enable Optane caching of non-boot drives. For the past year, Optane Memory caching has only been usable for the boot volume. This driver update should be available for both the new platforms and for all existing Kaby Lake platforms that support the original Optane Memory implementation, since there's no motherboard firmware functionality that needs to be updated for non-boot volumes.


I think that's compared to loading from a 5400rpm mechanical hard drive. I'll easily believe that optane is 4.7x faster than that.

I also expect most high-end NVME SSDs are going to be in a similar ballpark, maybe "only" 4.5x faster or something.


Optane performs much better at low queue depths which accounts for many common workloads, and has better power usage at these depths -- this lets it compete with NVMe at the same performance for many tasks with a smaller footprint (e.g. battery). It's not all about peak GB/s, though to be fair not even Intel's blurb makes this obvious...

That said it's quite possible NVMe drives will be "good enough" where PCI-e Optane never makes major inroads, even just in terms of pure volume. The dollar-per-GB needs to come down a lot, still.

Optane in DIMM form is also still MIA. I'm extremely skeptical it will live up to the original hype it was put through ("thousands of times faster") but I imagine it will be able to outclass many competitors.


This claim doesn't make sense to me. I think game loading times are primarily limited by the hard drive. It's about loading data from there into RAM (and/or GPU RAM). The hard drive would be the bottleneck.


Optane is a brand that covers both the hard drive and RAM options. For non-SSD HDDs it provides a cache of commonly accessed files, giving a big boost to spinning disks and a minor boost to SSDs. Which is why I asked whether people have noticed an improvement in practice with either the HD or RAM options, as I assume most people here have SSDs.


I just hope they do extensive testing on cooling these machines.

As a former user of laptop workstations, there is nothing worse than having to always lug around a bulky power supply as well as worry about overheating.

The other reason I could see them doing this is maybe related to wanting to push people to use more graphics-related applications (3D rendering), but even that is slowly moving to the cloud.


Intel is dead to me until they support ECC memory in their desktop processors like Ryzen does.


I don't really understand the demand for this; doesn't having an i9 defeat the purpose of laptops (working without a power source)? I usually buy laptops with i5 processors, which is the perfect trade-off between performance and battery life.


There is a market (not sure how big, but there is one) for modern-day luggables. They're more often called "desktop replacements", and "working without a power source" is really far down the list of requirements for them.

The common use case for these machines: a developer who needs a fast CPU and lots of RAM (and often a dedicated video card) coupled to a 15" or better yet a 17" screen and a ton of hard drive space. You may unplug to go to a meeting for an hour or two, but you'll be back to your desk (and docking station) pretty quickly. You're not working from a coffee shop or a couch because you need a real mouse and a real desk. You occasionally take your work computer home when you're the on-call resource that week.

In that case you don't need a big battery (or rather you need a big battery but not a lot of battery life). Five hours is more than good enough, but you can't trade that performance for anything you don't need.

I know because I have one sitting unused beside me (Thinkpad W530) from back in the days before I started traveling for work.


I have two of these workstations, a Dell and a Lenovo W520. My Lenovo is from 6 years ago: 8 cores, 20GB of RAM. It's a beast. I figured I needed the power. The truth is that they both have been sitting in the same spot now, unmoved, for a year. One at work and another at home. Last time I traveled with one, I hated myself. I carry around a Chromebook or a lightweight Lenovo Carbon if I need to carry things around. With 24/7 cheap internet, if I need power I can VPN to work/home/cloud.


> Five hours is more than good enough

It was only a few years ago that any decently performant machine didn't get more than 3 or 4 hours anyway. Everyone has a different use case.


Five hours new does not stay five hours for long. Really what people want is to have ~3 hours 2+ years from now.


Who's got 5 hour meetings?


Poor shmucks in big corporations. I've had a few 8h meetings in the past year. We did have a power source though.


For me, it's 5-hour flights on planes that sometimes don't have working power plugs.


In my experience, it is usually 5 1-hour meetings.


All day "strategy meeting".


Another great use case is something like a mobile CAD/PCB/whatever design software. This is useful for contractor/consulting shops. It's super useful for them to be able to bring to client sites and be able work on "the real thing" right there. In that case they're still plugging in on the other side, but the portability is nice.


Yeah, I don't know how common this is, but when I decided to do something about the couple of old, not-really-working-properly DIY Windows PCs I had under my desk, I decided that the cleanest route was just to get a big 17" Alienware laptop. It's not really portable, though I could take it downstairs in my house if I wanted to. It's mostly for occasional gaming or other Windows-specific tasks, and I just didn't want a lot more random clutter in my office.


> I usually buy laptops with i5 processors which is the perfect trade-off between performances and battery life

That's not... how that works. The "U" indicates a low-power part, while the "H" indicates a higher power part.

For example, the i7-8650U (the U is the important bit) uses 15W TDP: https://ark.intel.com/products/124968/Intel-Core-i7-8650U-Pr...

The i5-8400H uses 45W TDP: https://ark.intel.com/products/134877/Intel-Core-i5-8400H-Pr...

So the i7-8650U will have better power-usage than the i5. Heck, due to binning and Turbo (the "race to idle"), the i7-8650U likely will have better power-usage all around than the equivalent i5-U class.

The i3 / i5 / i7 is just a marketing trick by Intel. The "important" bit is the U or H on the end of the chip number, telling you whether it's a low-power, mid-power, or high-power version. The "U" chips tend to be around 2GHz base, while the "H" chips turbo to 4.5+ GHz and are almost desktop-class in power usage.
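
A trivial way to read the suffix (a toy sketch; the classifications are the rough TDP conventions for these suffixes, not an official Intel mapping):

  # Toy classifier for the mobile suffix conventions described above.
  SUFFIX_CLASSES = {
      "Y": "ultra-low power (~5W)",
      "U": "low power (~15W)",
      "H": "high-performance mobile (~45W)",
      "HK": "high-performance mobile, unlocked (~45W)",
  }

  def power_class(model: str) -> str:
      """Rough power class for a model string like 'i7-8650U'."""
      number = model.split("-")[-1]                        # e.g. "8650U", "8950HK"
      suffix = "".join(ch for ch in number if ch.isalpha())
      return SUFFIX_CLASSES.get(suffix, "unknown suffix")

  print(power_class("i7-8650U"))   # low power (~15W)
  print(power_class("i9-8950HK"))  # high-performance mobile, unlocked (~45W)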


I'm going to second the first comment here - thank you so much for this! I never realized the significance of the model numbers!


Yeah, marketing is tricky because there's so many model numbers and configurations.

The most important tidbit about "TDP" is that it's "THERMAL design power", not actual power draw. So strictly speaking, TDP measures the size of the heatsink needed to keep the CPU functional.

This means that different chips, even with the same TDP, offer grossly different power consumption rates. Besides, every chip's actual power draw is slightly different, even chips of the same design (see "binning" and "silicon lottery").

-----------

When you understand how chips are made, you see why things get complicated. It's a well-known fact that the entire AMD Zen series uses only two die designs: Zeppelin, and a second design with the iGPU.

That's right, the singular "Zeppelin" design covers AMD EPYC, AMD Ryzen 3, 5, 7, and Threadripper (1900X, 1920X, 1950X).

How?? The difference between the dies is in "binning". If one or two cores are broken because of manufacturing defects, AMD sells it as a Ryzen 5 (6-core model) instead of the Ryzen7 (8-core model).

Very broken designs (3 or 4 broken cores) are sold as a Ryzen3 (4-core model). But when manufactured, these are all 8-core chips.

Among chips where all eight cores are fully functional, some will be able to reach higher clocks than others. The 8-core designs are tested, and the ones that reach the highest clocks are sold as the 1800X, while the ones that reach the lowest clocks are sold as the 1700.

And that's how ONE design gets turned into 10 different SKUs sold to the customer. Because the manufacturing process is innately variable.

-------

Intel is known to bin for power-efficiency, high clocks and so forth. The marketing is a way to sell "broken chips" to people who don't care as much about high speeds or low-power draw.


> The most important tidbit about "TDP" is that it's "THERMAL design power", not actual power draw. So strictly speaking, TDP measures the size of the heatsink needed to keep the CPU functional.

Your caps are misplaced here. The key word here is DESIGN power - inherently an approximation. It's more of a "power category" than an attempt at a rigorous measurement - really they're saying that CPU X needs to be paired with cooler Y. For example, a 6800K and a 6950X are both listed as a 140W TDP, but a 10-core processor is obviously going to pull more power than a 6-core processor under a full load.

Also, what you're measuring can significantly impact the number you get as a result. Measuring at all-core AVX, all-core base, single-core turbo, etc will all give you different numbers. There is frankly more marketing that goes into this than actual technical backing.

If you're implying that heat != power then no, that's incorrect. A CPU is essentially a nearly-perfect resistive load and all power that goes in is converted to heat.

AMD has attempted to spin this one in the past (I believe it was AMD_Robert or AMD_James who made a handwavey post about how TDP was not actually about electrical input but about heat output, as if they are somehow different), simply put they are being misleading. Turboing above average power consumption means that you need to reduce power later to average things out. Otherwise, you need to dissipate a greater amount of heat. That's why it's an average power measurement, and not a hard cap. And it certainly does not imply that "power in != heat out".

edit, found it: https://www.reddit.com/r/Amd/comments/6svy1a/tdp_vs_tdp/dlg8...

Yeah, simply put, thermal watts and electrical watts are the same thing. There is no power that goes into a CPU that is not converted into heat, and there is no heat that is not generated from the electrical input (or the ambient temperature of the room). AMD did not disprove the laws of thermodynamics.

The rest is some handwaving over a formula that he claims doesn't include power, but it's right there in the ϴca term (°C/W), the W is power. It's true that you can sprint above the cooling capacity of your cooler for a short time, but then the heatsink will heat up, and at some point you'll have to reduce power to compensate. Again, that's why it's an average and not a hard cap.

Pretty cringey stuff for an engineer to be spouting and it should not be repeated. Again, there is a grain of truth that TDP is typically not reported as an accurate, measured number but rather more of a general category, but the idea that electrical watts and thermal watts are different things is horseshit.
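
To make the ϴca point concrete, a back-of-the-envelope sketch (all numbers here are made up for illustration, not measurements of any particular part):

  # Steady-state case temperature: T_case = T_ambient + P * theta_ca
  theta_ca = 0.30    # cooler thermal resistance, degC per watt (hypothetical)
  t_ambient = 25.0   # room temperature, degC

  for power_w in (45, 95, 140):
      t_case = t_ambient + power_w * theta_ca
      print(f"{power_w:>3} W sustained -> ~{t_case:.0f} degC at the case")

  # Sprint above what the cooler can dump and the heatsink itself warms up,
  # so average power has to come back down -- which is why TDP behaves like
  # an average, not a hard cap.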

> How?? The difference between the dies is in "binning". If one or two cores are broken because of manufacturing defects, AMD sells it as a Ryzen 5 (6-core model) instead of the Ryzen7 (8-core model).

This is how virtually all CPUs do it, not unique to Ryzen. What is unique about Ryzen is that all products are constructed from a single die, from laptop to server products, whereas Intel has five: laptop, client, LCC server, HCC server, and XCC server. But within each die, everyone employs die-harvesting to increase yields.

> Very broken designs (3 or 4 broken cores) are sold as a Ryzen3 (4-core model). But when manufactured, these are all 8-core chips.

Side note, but there are typically not enough broken dies to fulfill all the demand for the very cheap SKUs, so many of these are actually fully-functional dies that are locked at the factory.

Back in the day, you used to get Phenom X3s that could be unlocked to the full 4 cores, and you also got GPUs that could have additional shaders unlocked (most recently on Fury, I believe).

There's some Ryzens that are sold with more cores enabled than they're supposed to have, but that is different because they come from the factory like that. Pretty sure it falls into the category of manufacturing errors - someone screwed up and didn't blow all the fuses they were supposed to.


Interesting, I didn't know this, thank you.


I almost never use my laptop without a power source: to me, it is mostly a transportable workstation with a built-in UPS. I don't know how large that market segment is, but I would say that it is what laptops with full-sized keyboard including numeric pad, accordingly large screen and components chosen obliviously to power consumption are for.


I use a laptop and never (as in never, ever ever) run it without the power cord. I need to use it in N places, but none of those places are without a power socket. I also use a big fat screen and a proper keyboard always so the quality of the keyboard, touchpad and screen is also of no consequence.

I just want a portable/luggable workstation, not a laptop.


> defeat the purpose of laptops (working without a power source)?

I think for most people that's not the purpose of a laptop:

  - 80% of the time people will work from a desk.
  - It may be a hot desk
  - They want to take calls with computer in front of them, but not from their desk (due to open plan offices)
  - A couple of times a month they need to take something with a keyboard into a vendor/client meeting/presentation


Different use cases: somebody may want a portable workstation, not a tablet with keyboard.

I am typing this on one: it's still technically a laptop, and I can still unplug it from the plethora of peripherals and take it for a walk - it's just that I rarely do that. Still, it does happen every month or so, so there's some value in the laptop factor. (Haven't moved my desktop computer for years, am using both; and on the road, the smartphone is powerful enough.)


I've got a recent XPS15 with an i7 that is way more efficient than a few years old T530 with an i5.

You want power efficiency when you don't load the computer much, while you might also want to be able to load it a lot to get e.g. fast builds. If you can get both (and now it seems that you can) it does not make sense to only get power efficiency while never allowing high perf for transient high loads (or even sustained load for when you are plugged in).


i3, i5, i7, and i9 have really just come to mean Good, Better, Great, and Best within each respective line of Intel CPU. Usually while the i7 and i9 models have higher clock speeds, they are also better binned processors (meaning they can achieve those clock speeds without spending much energy over their i5 counterparts).

The notebook i9 is still an i9 relative to their notebook processors. It's still a 45W TDP processor, just like the other i7 6 core mobile chips.

It may run a bit hotter if you run it at sustained load, but mostly you're paying extra for the higher-efficiency chip that can handle the extra clock speed.

An i7 from the 15W TDP series can still perform slower and use less power than an i5 from the 45W TDP series.


Based on the slides, do some processors lack virtualization support?


The only technologies they lack are related to remote management, as joathonW said: https://ark.intel.com/products/134903/Intel-Core-i9-8950HK-P...


How they currently define vPro:

Intel® vPro™ Technology is a set of security and manageability capabilities built into the processor aimed at addressing four critical areas of IT security: 1) Threat management, including protection from rootkits, viruses, and malware 2) Identity and web site access point protection 3) Confidential personal and business data protection 4) Remote and local monitoring, remediation, and repair of PCs and workstations


Does it still come with the intel rootkit?


Must be a mistake. Other than video rendering, virtualization is a good task for a multi-core processor.


“vPro” in this context is probably Intel AMT (Intel uses the “vPro” branding to cover several unrelated technologies)... but Intel’s used virtualization capabilities (specifically VT-d and VT-x) as a point of product line segmentation before, so it’s not that far-fetched that they might omit something, especially if they’re concerned about cannibalizing server-class processor sales.


Might want to discourage server use.


Did I miss it or is there no mention of price? I assume that's because it's going to be more expensive than AMD's 8-core chips?


$583


Good is competition. Began the Core Wars have.


Finally the regular i7 has moved beyond four cores. It only took ten years.


IIRC there were i7 chips with six or eight cores available for a while, they were just ridiculously expensive.


The X series has always been ahead of the curve, but priced astronomically expensive, usually well above the equivalent Xeon part.

The i7 was introduced as a 4-core part and only now have they started to shift that to 6-core by default.

If their initial trend with Core continued we'd be at 16+ by now.


AMD had an octocore that still crushes most of Intel's stuff, and it's been out for 8 years now.


Except it didn't crush most of Intel's stuff. It had more physical cores but lost in actual performance. That's why Ryzen was such a big deal, it's the first time AMD has been able to compete with Intel since the Athlon days.


Well, my modest FX-8350E keeps working just fine.


Bulldozer and Piledriver worked fine, but never came close to "crushing" Intel's offerings. They were a decent option for specific workloads that only cared about parallel performance on a budget. If you cared at all about single threaded performance or had a few extra bucks, it was/is worth it to buy a comparable Intel offering for the entire lifecycle of Bulldozer and Piledriver.


"Working" is still a long shot from "crushing" however


They ran hot and angry. That's what makes Ryzen so surprising. It's fast and low power.


Check out CPU benchmarks on the FX line and get back to me. AMD's 300 dollar chip destroyed Intel's 1200 dollar server chip at the time. Hell, that chip is still competitive with Intel's latest and greatest.


The FX series has high core counts but much lower single-thread performance vs. Intel chips. For example, the FX-8350 has a respectable PassMark score of 8949, which puts it in the ballpark of an i7-3770, which came out 3 quarters earlier, but its single-thread PassMark rating of 1509 is significantly less than the 2069 of the i7-3770. The FX-8350 initially cost around $200 and the i7-3770 cost around $350. So the FX wasn't ever really competitive with Intel and certainly never crushed Intel server chips.

https://www.cpubenchmark.net/cpu.php?cpu=AMD+FX-8350+Eight-C... https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3770+...


Don't forget the i7-3770 has half the TDP of the FX-8350 despite the overwhelmingly superior real-world performance in every meaningful metric.

I'm not sure in what world anyone could think the FX series ever "dominated" Intel chips at the time, much less current ones, in the datacenter -- unless they are literally delusional.


Anyone more in the know - how much RAM could one of these new chips handle? The 16GB limit is something that has to go.


Isn't the 16 GiB a self-imposed limit by the laptops manufacturers rather than the processors?


Bit of both. Current Intel chips (including these ones) don't support LPDDR4. This is largely because Intel is at this point years behind on their roadmap; phone chips have supported LPDDR4 for a while. They support LPDDR3 and plain (non-low-power) DDR4. So if you want to go over 16GB, you have to use plain DDR4, and power usage, especially suspended power usage, suffers.

Next-gen Intel chips will support LPDDR4, making the whole thing a bit of a non-issue.


If the next gen were close, it would not be an issue. But it's way behind, and one commenter up above stated it's 18 months out, so it's an issue.

My dev team is finally abandoning MacBooks for Lenovo laptops with 32 gig.


If you mean mbell's comment, they said it's 18 months late, not 18 months out.

How many months it's out from this point is presumably something only Intel knows. Or maybe not even Intel.


Ah, thanks for the correction. So when is it expected, if not in 18 months? I've been waiting for > 16 gig for literally 2+ years.


Hopefully soon. I've been waiting for > 16GB for almost 7 years now. :/ (I have a 2011 MBP with 16GB RAM)


It'll be this decade or next, never you worry.


You can buy an HP ZBook x2 with an i7-8650U and 32GB today, if you want. And there's even a mobile Xeon laptop from HP that supports 64GB...


That web site is extremely heavy. I hate to think how much Javascript it's trying to run in the background.


Better upgrade to a shiny new CPU to run all that pesky Javascript, then!


It's super fast and light here. Is your ad blocker up to date?


He's probably talking about the previous link before it was changed.


Accidentally dropped my gaming laptop during the weekend. Guess I will buy a new one. The Gigabyte Aero 15X v8 looks promising. So this is a nice coincidence.

The only issue is the missing Spectre / Meltdown silicon fixes. And even when they are available, it's probably pure luck to get a model that has them. :(


Great. Can we get > 16GB RAM while we're at it? :-)


Does this generation add any hardware support to mitigate/eliminate meltdown and/or spectre?


How's the TDP?


I miss the old processor wars; is this a reboot?


Yes, but no Vega. :(


On a related note, this is the chipset many in the iMac world are expecting for some new models. Being that it will also be the twentieth anniversary of the platform, there are expectations of a new chassis as well.


iMac doesn't use mobile CPUs. The current top end iMac has a 7700K


Would be an a$$ kicking mac mini with 6 cores.


12-core laptops have been available since 2013.

http://www.eurocom.com/configure(2,234,0)


That design is stretching the definition of "laptop" pretty far.


Desktop replacement laptops like that one are admittedly more mobile than real desktops, but the weight and thermals mean they can't be used the same way as a MacBook-sized machine: on a couch, on a plane, etc. They also can't last long on battery (the one you linked has a 330W draw on a 78 watt-hour battery?).


Those have desktop class high TDP CPUs in them. The distinction here is that this is a low(ish) TDP CPU intended for laptops where battery life matters.



