Intel to shut down renegade Skylake overclocking with microcode update (arstechnica.com)
158 points by pavornyoh on Feb 10, 2016 | 127 comments



There are two ways to look at this situation:

- On one hand, Intel is disabling a "feature" of their CPUs as a way of preventing users from getting "more expensive performance" without paying for it. You can think of it as Intel just covering their back: we can assume the difference between their CPUs is just down to binning [0], and they are protecting their customers by not allowing less performant CPUs to perform better, thereby increasing reliability.

- On the other hand, you can see how hardware is no longer something you buy and expect to behave in a certain way. For the paranoid... would this mean they can also alter how instructions behave? **cough**security**cough**

[0] https://en.wikipedia.org/wiki/Product_binning


Yes, microcode can alter the way instructions behave, and it's been like this for a decade.

What worries me more is something like Nvidia's Denver architecture because that's actually a full abstraction above machine code.


> What worries me more is something like Nvidia's Denver architecture because that's actually a full abstraction above machine code.

What relieves me is that RISC-V is demonstrating that opensource hardware can be a reality. This will provide people with a concrete alternative to Intel and ARM chips that does away with (closed) microcode and shadowy marketing/security procedures.

Yes, we are far away from FPGA-ing our CPUs, but I see the 2010s as the 1980s of FLOSS. In the '80s a bunch of people sent around tapes with copies of EMACS, cp, and mkdir; 20 years later, multibillion-dollar infrastructures rely on open source.

Today we use Intel microprocessors fearing that their ME components will spy on us and that secure boot will lock us in. In a few years we will just install our Debian-for-HW and be done with it. It is a liberating thought.


Yeah, FPGAs sound nice, but I seriously doubt that will ever happen where performance and power are major concerns.

I'm not sure that just having the hardware design makes much of a difference either; it'd be the equivalent of shipping a binary blob and then making the source available without the compiler used, or without a way to decompile it, because the tools required are so expensive.

In CPU land we have ARM, which is probably as close as we can practically get to open source, but IMO the real problem is still verification of the shipped product. Without third parties auditing the design, I'm not sure we could ever be sure, source or not.


> the real problem is still verification of the shipped product

In that vein, there is a lot about a modern FPGA that's a "binary blob". Do you fully trust Xilinx? Do you fully trust Altera?

Do you continue to trust Altera now that they are a subsidiary of Intel? If so, why don't you just trust Intel directly? Perhaps it's turtles all the way down? :)


Open source hardware may be the ejection seat from black boxes (tinfoil and real concerns alike) and proprietary fascism; it could solve much, in the same way open source marginalized the proprietary Unices. The issues are funding, talent wrangling, and practical desktop lithography (FPGA is a stepping stone).


I read an article* about ARM using imprint lithography to create a working Cortex M0 microcontroller. The feature size is equivalent to a 2 um process (which silicon achieved about 30 years ago), but it still seems pretty exciting. I could find use cases for something like that in my own day-to-day hardware projects if hobbyists ever find a way to recreate these kinds of machines cheaply (much as the DIY CNC and 3D printing scene has done).

* https://semiaccurate.com/2015/11/18/arm-charts-path-printed-...


All computing in general is somewhat broken if you are looking for absolute security (broken by design)... I hope to see a solution to this in my lifetime...


There's truly secure computing to be found in projects like libreboot; it just takes half a decade to undo the damage done by proprietary software, so your options are limited.


Are Intel still disabling random features in the K versions of their processors for market segmentation reasons? Figuring out what processors had what features locked off was getting truly ridiculous.



In the case of TXT I don't think everyone would consider it a desirable feature since it's part of the trusted computing stuff, but in the past they've also disabled more friendly features like virtualisation.


If you're not using TXT, your full disk encryption is completely useless if an attacker can get access to your laptop for 5 minutes¹. They can just backdoor the bootloader (or, on Linux, your kernel/initramfs which are not encrypted) to get a foot into your system at the highest level of privileges.

If you're using TXT, maybe the NSA can still do that by getting the ACM signing keys from Intel. But that's still an improvement over the standard FDE setup most people use, which could be trivially hacked by someone with the skill of an average university student.

¹ Make that 15min if you have a BIOS password and a static boot order that disallows booting on any kind of external device. It just requires a bit more preparation, but on most laptops it's not hard to put a clip on the NVRAM that stores BIOS settings and reset what needs to be reset.


This is simply not correct. TXT is not required for this; the TPM can still, and does, protect your OS and bootloader from tampering. It's also not required for Windows Secure Boot.

TXT extends the chain of trust to the BIOS/UEFI by having the CPU take charge of some of the verification. Secure-boot-capable UEFIs (if they meet UEFI standard version 2.3.1c or higher) can also extend the protection envelope beyond the OS/MBR, but the main function of TXT is to protect against tampering with low-level firmware, not the OS/bootloader.

"Intel TXT uses a Trusted Platform Module (TPM) and cryptographic techniques to provide measurements of software and platform components so that system software as well as local and remote management applications may use those measurements to make trust decisions. This technology is based on an industry initiative by the Trusted Computing Group (TCG) to promote safer computing. It defends against software-based attacks aimed at stealing sensitive information by corrupting system and/or BIOS code, or modifying the platform's configuration."


Secure Boot's aim is mostly to protect against malware persisting through the boot process ("bootkits"). Given that it's anchored in the firmware and the firmware is unverified in almost all consumer x86 motherboards, an attacker with physical access can still bypass it in a trivial amount of time (it requires some preparation to figure out what to patch, but the patching itself is quick).


In theory anything can be bypassed, including TXT. TXT on its own does not provide any substantial amount of protection against tampering, at least not over traditional TPM setups.

What it does is centralize all of the measurements in one place and add DRM.

It's not bad in any way but it's also not required to run a secure boot encryption setup.

My 9 year old HP Compaq 6910p Centrino with TPM can offer almost the same level of protection as any modern Intel vPro laptop.

A case study for BitLocker (one of the few FDE systems that actually has good integrity checks):

- BIOS/UEFI modification (update, revision, settings change, etc.): both will trigger recovery mode

- Bootloader modification: both will trigger recovery mode

- Boot order change: both will trigger recovery mode

- Boot attempt counter does not match between TPM and HDD (e.g. when the HDD was removed and someone attempted to boot it in another device, or when the machine was booted from something other than the HDD): both will trigger recovery mode

- Data partition changed: both will trigger recovery mode

The only difference is that if my device is in S4 or S5 mode, TXT can still continue to make measurements, and the measurements TXT allows you to take are very generic, unlike standard TPM/Secure Boot, which only checks for specific parameters.

You also need to understand that FDE is not designed to protect your information in cases where you lose control of the physical security of the device and do not treat it as untrustworthy afterwards. It's excellent at protecting your data at rest if the device is lost or stolen, but that's it. And while yes, secure boot can be bypassed and TPMs could potentially be broken (which invalidates TXT as well, since it uses the TPM as the cryptographic storage device for the signatures), an adversary that can bypass one could most likely bypass the others. You also need to remember that with TXT the measurement plan is stored in the UEFI flash/RAM/NVRAM, not in the TPM. With any system, your overall level of confidence should be only as high as the weakest component.


Can you elaborate on this a bit? I'm definitely not an encryption expert, but I was under the impression that full disk encryption on Linux with a sufficiently long passphrase was secure.

I personally use cryptsetup (dm-crypt/LUKS) on my laptop running Arch Linux just in case it were stolen. Are you saying that bypassing the bootloader with a live USB, etc, could give an attacker access to the data stored on the encrypted drive (outside of the boot partition, of course)? That seems like it would defeat the purpose of full disk encryption. Note: I understand that this is assuming that the attacker does not gain access to the system while it is up and running.


> Are you saying that bypassing the bootloader with a live USB, etc, could give an attacker access to the data stored on the encrypted drive (outside of the boot partition, of course)?

No, of course not. (Well, disregarding cold boot attacks).

But it gives access to data stored inside the boot partition, allowing for fun things like patching your kernel to send dm-crypt keys to http://fbi.gov/submit_key.cgi - makes sense now?


There are ways to also encrypt the boot partition.

There are various protection mechanism that rely on software alone (bootloader), software + hardware (TPM), software + firmware and software + hardware + firmware.

The question is always what do you want besides encrypting the main partition mainly in terms of integrity checks.

An older BIOS with a TPM, or a modern UEFI with or without a TPM, can provide additional integrity checks for both the host configuration (BIOS/device settings) and the storage device itself.

TXT basically allows you to measure various elements using the UEFI, and, more importantly for OEMs at least, TXT has extensive DRM capabilities that can restrict the user from installing "untrusted" operating systems or making modifications to the host itself (e.g. changing BIOS settings).

Beyond that, TXT gives only a slight improvement in actual security against cold boot attacks: it allows you to take measurements when switching between the S4 and S5 power states (hibernate and soft-off), but it still doesn't allow any measurement for the S1-S3 states, which are the legacy sleep modes.

A modern UEFI with or without a TPM can ensure that the OS will not boot, or will boot into recovery mode, if any changes were made to the hardware or firmware configuration, or if any tampering was done to the bootloader (secure boot keys). With a TPM you can be slightly more assured that no one tampered with anything, since the TPM is better cryptographic storage than the UEFI's RAM/NVRAM.
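For the curious, on Linux you can peek at those measurements yourself. A minimal Python sketch (the sysfs path and line format below are assumptions for a TPM 1.2 on an older kernel; the location has moved between kernel versions, and TPM 2.0 exposes PCRs differently):

  # Print the TPM 1.2 PCR (Platform Configuration Register) values that
  # the measured-boot chain extends. The path and the "PCR-00: ..." line
  # format are assumptions, not guaranteed on every kernel.
  PCRS_PATH = "/sys/class/tpm/tpm0/device/pcrs"

  def read_pcrs(path=PCRS_PATH):
      pcrs = {}
      with open(path) as f:
          for line in f:
              name, _, value = line.partition(":")
              pcrs[name.strip()] = value.strip()
      return pcrs

  for name, digest in read_pcrs().items():
      print(name, digest)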


Exactly the type of information I was looking for, and not something I had even considered. Thanks!


Full-disk encryption protects against anyone who steals your laptop. However, if someone can get access to your laptop without your knowledge, and you subsequently use the laptop, they could install a hardware or software mechanism to obtain your passphrase.


Yep. TXT or a TPM can cryptographically verify boot integrity, but they aren't magic pixie dust against passphrase recovery via hardware tampering. In the end your adversary can just put a hardware keylogger between the laptop keyboard and the motherboard.


It once occurred to me that it's possible to encrypt the disk completely and put bootloader/kernel/etc on a flash drive attached to your keyring (the physical kind).

Then it all boils down to BIOS security. With physical WP pins on flash chips, it ought to be possible to make BIOS reflashing a reasonable PITA, especially for someone who wasn't prepared for such a surprise.


This wouldn't prevent the computer from booting into a new key-stealing bootloader newly installed on the hard disk. You probably wouldn't even notice that the computer wasn't actually booting from the USB drive.


Put the computer in a transparent case and replace the HDD top cover with glass :)


With how overclocking is done on UEFI-enabled motherboards, TXT will not work anyhow. Virtually no OEM uses K-series processors; the vast majority of computers that come with TPMs are laptops, which use a different SKU family, and the few workstations that do won't use the desktop K series of CPUs. Intel TXT is also not available on their more "workstation"-oriented prosumer line of CPUs like the 5820 and the 5860.


You're right. Some of that happens in the current generation with the mainboard chipsets.


One thing I've really hated about Intel in recent times has been that you can't really tell what you're getting based on the model number.

I got bitten by this when I built up a system and picked a Core 2 Quad Q8200, only to discover after powering it on that it did NOT have virtualization support.

The problem is only worse now; is that i7-xxxx dual-core or quad-core? Should I get i5-yyyy instead?



Warning: ARK is not much safer either. I saw on Reddit some days ago some guy complaining that he bought an i3 after checking it on ARK, and after the purchase Intel disabled the feature on all i3s and silently changed ARK too. (It was related to some feature Intel also tried to implement in the two previous generations, but it's seemingly buggy.)

EDIT: I hit the post limit... Yes, I do think it was TSX


Intel disabled TSX on all Haswell and some early Broadwell parts (not just i3 processors!) because it was irreparably broken:

http://techreport.com/news/26911/errata-prompts-intel-to-dis...

Really can't blame them for that one. Disabling a feature that causes frequent CPU lockups when used is a good thing.


Not limited to Haswell and early Broadwell, some Skylake CPUs lost TSX as well.

https://www.reddit.com/r/hardware/comments/44k218/intel_disa...

Also some Skylake i5s and i7s had TSX issues, but I don't know whether Intel managed to fix them or is going to disable TSX. So far ARK lists them as supported.


Yeah, I'm not holding my breath waiting for TSX to be stable enough to be worth investigating.


I wouldn't expect anything different, but saying that disabling a feature that was advertised and paid for is a good thing is plain weird. A good thing would be a recall or a reimbursement.


Was it TSX?


Intel Transactional Synchronization Extension.

When it works, you get support for database-like transactions on RAM (only small transactions, it isn't black magic).

When it fails, you get all kinds of random crashes at random times.

Introduced in Haswell, it turned out to be buggy, and Intel famously disabled it with a microcode update. Since then, they have quietly disabled it for several Broadwells and Skylakes as well, which didn't fare much better than Haswell.

edit: it seems I misread parent's post as "what is TSX"


So did I... but I was about to google TSX before the last two posts, so thanks for the explanation.


What possible reason could they have for this, apart from forcing users to buy their 'unlocked' chips? Is it really worth the nightmarish PR debacle they will undoubtedly face?

Side note (from comments) -- apparently Intel paired this microcode update with a patch to fix CPUs freezing during Prime95. Not cool.


What nightmarish PR are you talking about? Nobody cares, just like nobody cared when they started shipping CPUs with the embedded Intel ME.


Nightmarish PR was probably an overstatement, but I would expect a considerable backlash from customers who are having a free feature removed -- for no reason other than to make Intel money.

Seeing as AMD are getting back into the game with their new Zen architecture, I think this is a very unwise move by Intel.


But it was never advertised as a feature; in fact, I believe the opposite is true: they advertised it as non-overclockable.


Very true, but having the loophole there is beneficial for users who want to overclock -- bug/feature debate aside.

They advertised it as non-overclockable, but some users found out it could be overclocked anyway -- so why bother patching? To force those users to buy the more expensive 'unlocked' chips for something that used to be free.


Even if it wasn't advertised, the chip was known to overclock. Some people bought it for this purpose, and they surely won't be happy.


"embedded Intel ME."

Do you mean like embedded backdoors?


The irony is that if you are running Prime95 and overclocking, Intel should love you, because you are going to be in the market for a new processor much sooner than the average user.

Prime95 slowly cooked several PCs that I had the pleasure of supporting through my years in college.

I love what Prime95 wants to do, but consumer PCs are not usually designed/maintained with 100% CPU usage in mind.


Even more so, you can probably compare these users to the "sneakerheads" that keep the Nike brand going by waiting in line for underpriced limited-edition sneakers.


The unfortunate difference probably boils down to sneakerheads being a substantial portion of Nike's high-end business, while overclockers and computer performance enthusiasts are nothing more than a rounding error in Intel's balance sheet when you compare them to a customer like Amazon buying chips by the thousand. It doesn't matter if your average overclocker buys a new chip every year to keep up with the latest and greatest instead of every 4 years like his less-savvy friends might.


Overclocked CPUs run hotter. For modern technologies that means they can degrade faster, and can fail earlier than their expected lifetime. To be clear, I'm not saying this is the reason Intel did this, but it is one possible reason, and you asked for possible reasons.


CPUs don't really break in any timespan that matters to the consumer, with or without overclocking. How many times have you had a CPU break? I know of no one who has broken a modern CPU, due to overclocking or otherwise.


I have had one break. It was flooded when a pipe burst in the house near the computer. And the whole CPU didn't break, only part of the memory controller. So for a while I ran an A10-5800K in single-channel mode with a single stick of RAM until I could buy a new CPU. Interestingly, I had water pour out of my HD-7950 (it was completely submerged), and it is still running fine a couple of years later (I did dry it for days on a fan, then completely disassemble it, wash it with isopropanol, and re-assemble it very carefully).

They are much hardier than we give them credit for.


I have seen CPUs bricked from moderate air-cooled overclocking. And it is still possible to kill a chip by feeding it the wrong voltages as long as the motherboard allows it.


Admittedly I don't know how far down this road we are, but as transistors get smaller and the number of transistors in a processor increases, transistor degradation during the processor's lifetime is becoming a larger issue, and Skylake is already at 14 nm. I think even experts in the field would be unable to give statistics on Intel processor durability, as Intel is not very forthcoming with such numbers.


Overclocked CPUs will only run hotter if voltage is increased beyond default.

Many CPUs can achieve a significant overclock without raising the voltage.


> Overclocked CPUs will only run hotter if voltage is increased beyond default

Raising either voltage or frequency increases power dissipated. P = C V^2 f, where C is a constant that depends on the physical structure of the gate, V is the voltage, and f is the frequency that the gate's logic state is changing.
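A quick back-of-the-envelope sketch in Python (all component values invented for illustration) showing how the two knobs compound: frequency raises power linearly, while voltage enters squared:

  # Toy numbers only: dynamic switching power under the P = C * V^2 * f model.
  def dynamic_power(c_eff, volts, freq_hz):
      """Dynamic power in watts."""
      return c_eff * volts ** 2 * freq_hz

  stock = dynamic_power(1e-9, 1.20, 3.5e9)  # hypothetical stock settings
  oc_f  = dynamic_power(1e-9, 1.20, 4.2e9)  # +20% frequency, same voltage
  oc_fv = dynamic_power(1e-9, 1.35, 4.2e9)  # +20% frequency, +0.15 V

  print(f"stock:          {stock:.1f} W")
  print(f"freq only:      {oc_f:.1f} W ({oc_f / stock:.0%} of stock)")
  print(f"freq + voltage: {oc_fv:.1f} W ({oc_fv / stock:.0%} of stock)")

With these made-up numbers, a 20% frequency bump alone gives 120% of stock power, but adding a voltage bump pushes it past 150% -- overclocked chips run hotter either way, just much more so with extra voltage.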


Well, that was an embarrassingly ignorant statement on my part.

Thanks for the correction.


Isn't C capacitance?


I'm glad they're doing this. It'll help plant an image: Proprietary microcode is evil.


The alternative here is what, exactly? Can you site an example of a better situation?


It's "cite".

And the better solution is to finally ditch backwards compatibility with the 8086 and implement a sane instruction set that isn't just a virtualized layer à la x86


Additionally, the instruction set is not virtualized because of compatibility. Compatibility is a nice side effect though, so software from even five years ago works just as expected (not all software can be recompiled, and could you imagine commercially supporting that many variants--this way Intel shoulders that burden). And, backward compatibility is a tiny fraction of the control unit silicon area. It has zero performance cost. Zero. Intel is scraping for more (not less) things to add to silicon to improve performance (video encode/decode, GPU, memory controller, PCI).

The virtualized instruction set enables the performance gains by executing portions of instructions in parallel, re-ordering to avoid pipeline stalls, better branch prediction, execution shortcuts depending on operands, etc., without compiler support. RISC architectures like ARM do this as well. On a modern Cortex part with multiple execution units, the processor is not a textbook pipelined RISC processor like this community seems to yearn for. So if you look at machine code for some reason, yes, the instruction set is simple, but it's still a facade over complexity.

I don't understand the constant cry that processors are complex. To gain performance against the frequency limit, complexity increased. This happens with software all the time.

Also, for those that want direct control over all the elements of the processor, go buy an Itanium... oh wait... no one did.


Intel tried that; the Itanium didn't really work out so well. Anyway, a sane instruction set doesn't mean you'll include the proper division tables in silicon, so you probably still need updatable microcode to fix that.


I apologize for my auto-correct.

If you're advocating for a whole new instruction set, good luck with that. May as well go outside and yell at the clouds.


What makes you think that removing "backwards compatibility with the 8086" would make "proprietary microcode" unnecessary?


Can't they just disable it for as-yet-unsold CPUs (e.g. by looking at serial numbers or something)? Consumers (should) only care if their product is made worse after they purchased it. I can see why someone would be angry if a feature that was advertised is removed after the purchase.


IMO this would only be an issue if they advertised that it was overclockable.

If it was advertised with no ability to overclock, you should assume it can't overclock, and if you find you are able to, don't assume it will work or will stick around forever.


There was a time when I bought something I owned it and could do anything I wanted with it. That time is clearly gone.


No one is forcing you to install the BIOS update.


I have been 'required' to install BIOS updates before. The PS3 lost capabilities at one point in an update, and new games forced that update without mentioning it on the packaging.

"Note that Intel paired this with a bug fix for the freezing during Prime95 - if you want the bug fix, you have to let them lock down your clock. "


You are using an undocumented feature of the CPU; from Intel's point of view it's an unintended loophole.

Also you can try to separate the Prime95 bugfix from the clock lockdown. Good luck :-)


They could theoretically push the microcode out via Windows Update as well, which, combined with Windows 10's enforced updates, would make it difficult to avoid.

https://support.microsoft.com/en-us/kb/3064209


That's nasty - but isn't Microsoft to blame there?


Upgrading the OS, which you are required to do for security reasons, will also upgrade the microcode. In fact, I don't remember the BIOS ever being relevant for that.


To the geniuses downvoting... I don't even know why I bother:

"While microcode can be updated through the BIOS, the Linux kernel is also able to apply these updates during boot."

-- https://wiki.archlinux.org/index.php/microcode

Now, do you update your kernel more or less often than the BIOS?


It's best to develop a decent level of ignorance when it comes to social media dynamics :)


> don't assume it will work or will stick around forever.

I agree with the first part, but not with the second. I don't like manufacturers deliberately breaking products which have already been sold, especially in cases where the sale wouldn't have happened if the product weren't better than specified.

Today it's Intel removing unadvertised features; tomorrow it'll be Sony removing advertised ones (or have they already done that?)


Are you referencing the PS3 forcing the option to either keep the ability to run linux or stay on the PSN, or was there something else?


Yes, I meant Linux on PS3.


What if you went by a third-party review? IMO it should be illegal to remove functionality without changing the model number.


Nah, they advertised the Pentium G3258 as overclockable and later killed OC with microcode (on non-Z boards).


Pretty sure the microcode update is applied on every boot both by the mobo and the operating system.


Those supporting AMD should remember that they did a similar thing after it was discovered that some processors could have entire cores re-enabled.

On the other hand, if I remember correctly one of the mobo makers soon figured out how to use both the new microcode and the old one to get the best of both worlds; it might've been AsRock too...


Those supposedly triple- and dual-core CPUs that could be unlocked to quad cores were apparently a way to sell damaged silicon: the remaining cores passed QA, so they just disabled the broken core. To meet demand for these product lines, perfectly good quad-core CPUs also got gated down and sold off as dual/triple editions. It was the luck of the draw whether you got one of those or a genuinely flawed chip.

Now, how much of that is true I don't know, but that's what I remember reading at the time. A friend of mine did successfully unlock his triple core and ran it for a long time.


> Those supposedly triple- and dual-core CPUs that could be unlocked to quad cores were apparently a way to sell damaged silicon: the remaining cores passed QA, so they just disabled the broken core.

It's called "binning", and it's not just for defects. CPUs from the same line can have different tolerances; one will reach 3.5GHz easily and the next one won't be stable beyond 2.8. Those are also put in different bins and sold as different models.

> Now, how much of that is true I don't know, but that's what I remember reading at the time.

It's completely correct. Selling a model from the exact corresponding bin is ideal, but if you have more demand than the bin provides, you take parts from higher bins and gate them. A few years ago that got very, very common with Intel parts as they reached tremendously low defect rates, and more or less any CPU you bought would come from the highest bins, gated and under-multiplied down to whichever model you'd buy.


This is "floorsweeping", not "binning".

Floorsweeps are done when a part of a chip isn't passing tests. The defective part gets fused off and the chip gets sold as a dual core instead of a quad core, for example.

Binning is orthogonal. It's a qualitative measurement of (the part of) a chip that passes all tests. The electrical and thermal properties of a chip are tested, e.g. the leakage current and temperature are measured at different clock speeds and voltages (this is infinitely more complex for a battery powered device where voltages and currents fluctuate). The ones that pass with best results are sold as a premium product and the rest are clocked down and sold for less.

So a "K" model Intel chip comes from the best bin. An i3 chip dual core is a floorswept quad core i5. (This is what I assume, I don't work for Intel and don't know the details)

But these chips may not sell in the proportion they get manufactured in, which means that some perfectly functional quad cores get fused to dual cores and sold for cheaper. If you're lucky, you're getting one of these (and un-fusing, if possible, will work).

Floorsweeping and binning are required because the tolerances of modern semiconductor manufacturing are so tight. To get the best bin to perform well, the manufacturing process is really pushed to the extreme, which means that there will be chips that have manufacturing defects as well as lower performing chips.
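To make the yield-vs-demand dynamics concrete, here's a toy simulation (the model names, frequency thresholds, and variation distribution are all invented):

  import random

  random.seed(0)

  # Invented bins: (model, minimum stable frequency in GHz the die must hit).
  BINS = [("i7-K", 4.2), ("i7", 3.8), ("i5", 3.4), ("i3", 3.0)]

  def assign_bin(max_stable_ghz):
      # Binning: a fully functional die goes in the best bin it qualifies for.
      for model, threshold in BINS:
          if max_stable_ghz >= threshold:
              return model
      return "scrap"

  # Process variation: each die's maximum stable frequency differs slightly.
  dies = [random.gauss(3.9, 0.3) for _ in range(100_000)]
  yields = {}
  for freq in dies:
      model = assign_bin(freq)
      yields[model] = yields.get(model, 0) + 1

  for model, count in sorted(yields.items(), key=lambda kv: -kv[1]):
      print(f"{model:6s}{count / len(dies):7.1%}")

  # If demand for a low bin exceeds its natural yield, surplus dies from
  # higher bins get gated down and sold as the lower model -- the lucky
  # overclocker's chip.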


On the other hand, floorsweeping and binning cannot exactly match market demand. Towards the end of a run of a given chip, most may qualify for the highest bin, but due to demand many have to get moved down a bin or two. So you end up with CPUs totally capable of being overclocked being sold at lower clocks.


And that's addressed by introducing new bins plus dynamic overclocking, or by reducing dynamic voltage.


This has been going on for a looooong time. Old Intel 486 "SX" chips without math co-processors (floating point units, really) were originally 486 "DX" chips with either the co-processor disabled or which had failed QA. At some point, they made actual SX dies without the FPU built in.

The "stand alone" math co-processor you could add to an SX based system to gain h/w floating point was actually a full blown 486 dx CPU with a different pin layout (one extra pin) so it couldn't be used as CPU and could therefore be priced differently.


Most importantly, the 487SX could be marketed differently so it could be priced higher for essentially the same chip.


I'm still using all 4 cores of a "triple-core" Phenom to this day. I believe it is also overclocked by roughly 400MHz; my wife sometimes plays games on it.


I'm running such a system at this very moment. It has a Phenom II 550 Black Edition (i.e. sold to be overclockable/unlockable), and by toggling a BIOS option, I was able to unlock two more cores. I run it at rated clock speed, and it has been rock-solid since the day I built it up.

I can build a large project in NetBeans as fast on this system as on a hex-core FX-6100 (and I can watch my CPU monitor clearly showing all four cores running during the build).

CPU temperatures stay well within limits with the stock cooler.


The difference is that Intel is forcing this update (via Windows updates and by forcing mobo makers hands). Don't think AMD ever had that kind of power.

I am curious which AMD chips had the unlocking disabled?

I am still rocking AsRock 970 MB with AMD B50 which actually is an Athlon II X3 unlocked into a full blown Phenom II X4 (3 cores with no L3 cache into 4 cores with unlocked L3 cache).


This "software eating the hardware" is worrisome.

It used to be that at least you could depend on the hardware staying the same unless you chose to apply patches yourself.

Is it theoretically possible to change the microcode to actually add more features instead of disabling them?

Let's say Zen actually comes out better than expected and then Intel miraculously releases another update which re-enables OC ability?


What are the ways in which this microcode update is enforced? Is it something that ships with your OS, or something the user has to manually install, like a BIOS update?


It would probably be a BIOS update distributed by the motherboard manufacturer. As the article states, people could just not update, but finding hardware with the older BIOS will become more and more difficult over time.


Why couldn't the OS just put the microcode update into the EFI bootloader?

I know some recent Intel parts can't even run the EFI firmware without first getting a microcode update, but is Skylake like this too or can the EFI bootloader run without first getting microcode downloaded?


Linux effectively does if your distro opts in.


What happens when this microcode patch is applied at runtime on an overclocked CPU?

Remember the Haswell mess when a microcode patch disabled TSX? If you applied it after user code (glibc) had detected TSX support, boom.


You get a hang; this happened with Windows 10 on an overclocked G3258. Windows 10 ships with mcupdate_GenuineIntel.dll containing microcode that... disables overclocking.

The workaround is renaming/deleting mcupdate_GenuineIntel.dll and never updating the BIOS again, to avoid new microcode -- all to keep the cheapest 4.4GHz single-thread CPU money can get.


> What happens when this microcode patch is applied at runtime on an overclocked CPU?

Paradox :)

Probably it'll reset conveniently so that you can restore default BIOS settings.

They can't just let it keep running, because then nothing would change, and I'm not sure the microcode has a way to reliably switch some off-chip clock generator back to 100MHz.


Let's hope Mr. Keller did a stellar job at AMD. Once again.


Wouldn't count on that; AMD's best marketing predictions promise 40% more IPC, and that would still be 1-2 Intel generations behind. AMD marketing always overpromises, so we will end up with something between first- and second-generation i5.


So was it just one particular ASRock motherboard that allows this type of overclocking? While my hardcore overclocking days are behind me, I have an i5-6500 and some kind of Asus mobo... I'm curious to take it for a test drive.


Many Z170 motherboards got this functionality, but keep in mind it's not without caveats.

It breaks integrated graphics, turbo boost, all power-saving states, and temperature sensing, and it cripples the performance of AVX instructions for some reason.

http://overclocking.guide/intel-skylake-non-k-overclocking-b...


To my knowledge, all of the vendors support this. You need a Z170 board, then search for a non-K BIOS for your model.


One reason I've always liked AMD is that they have a limited number of products, so they don't play games like this. AFAIK, every feature an architecture supports is available on the chips released. The only exception I'm aware of is the "Black" edition, which has an unlocked multiplier, meaning you can overclock the CPU without overclocking the bus.


They're basically disabling functionality in order to make a quick buck.


AMD Zen can't come soon enough.


My Athlon 740k is getting old. I really hope Zen comes to market soon. I can't bring myself to buy a chip as a stop-gap with Zen around the corner.

I'd upgrade if I could get my hands on an AM4 motherboard. Hopefully we'll see something like the FX-6300 on AM4 early this year, so I can at least get ready to upgrade next year.


Microcode giveth, microcode taketh away :)

It is interesting, though, how long Intel waited (not even a statement?) -- probably just to sell more CPUs overall.


Good thing is we can load old microcode on Linux


> Good thing is we can load old microcode on Linux

You cannot load old microcode anywhere. The CPU won't let you.

The OS feeds the CPU a blob, the CPU checks that it's signed by Intel (to prevent modifications), and it additionally checks that the version number is newer than the currently-running code. If it's not, it won't be loaded.

If you have a CPU with the old microcode versions, you can keep it around, but if you update your BIOS you'll find it will bring the new microcode in and you can't downgrade after boot. If you're lucky the BIOS manufacturer wasn't too careful with signing their BIOS and you can replace the microcode blob, but that's a huge hassle.
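For reference, a minimal Python sketch (assuming the standard x86 /proc/cpuinfo layout on Linux) to see which microcode revision each core is currently running; within a boot, that revision can only go up, per the above:

  # Parse the "microcode" field of /proc/cpuinfo (present on x86 Linux;
  # exact formatting can vary by kernel version).
  def microcode_revisions(path="/proc/cpuinfo"):
      revisions = {}
      cpu = None
      with open(path) as f:
          for line in f:
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key == "processor":
                  cpu = int(value)
              elif key == "microcode" and cpu is not None:
                  revisions[cpu] = value  # e.g. "0x74"
      return revisions

  for cpu, rev in sorted(microcode_revisions().items()):
      print(f"cpu{cpu}: microcode {rev}")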


Microcode is not written to the CPU; it gets loaded on every boot. This can happen during the BIOS POST, during the OS bootloader, or even while the OS is booting. Therefore, yes, it's possible to run older microcode (at least on Linux), since you just have to not load the newer version on boot. If the BIOS contains the new microcode, you can flash the previous version of the BIOS.


> Microcode is not written to the CPU; it gets loaded on every boot. This can happen during the BIOS POST, during the OS bootloader, or even while the OS is booting. Therefore, yes, it's possible to run older microcode (at least on Linux), since you just have to not load the newer version on boot. If the BIOS contains the new microcode, you can flash the previous version of the BIOS.

Did you read the last paragraph of my message? Because you're not really disputing anything I said. (To clarify, when I say "you cannot load old microcode anywhere", I define "old" to mean "older than the currently running microcode", i.e. you cannot downgrade it at runtime after a newer one has been loaded to RAM.)

If you're willing to run outdated system firmware (with its associated bugs, security vulnerabilities, etc.), you can do it -- just like I said in the message you're replying to. But that's not what I'd call a good solution.


Is microcode written to a writable part of the CPU? I had always assumed it resides in the BIOS or equivalent on the motherboard.


The CPU has a burned-in microcode. It can be updated after the CPU is booted, but the updates are only written to RAM and lost on shutdown.

Usually the system firmware will include recent-ish microcode and automatically update it on boot. Many OSes also bundle microcode updates and install them on boot.


But then you don't get the Prime95 fix as well


AMD adopted the same strategy: no overclocking for the poor!


CPUs are complicated pieces of technology. During the manufacturing process, some parts come out at a better quality grade than others. The better-quality parts allow some overclocking without producing errors, and therefore get put into the overclockable K processors. The worse parts get put into non-overclockable processors and run fine at the default voltage.

Some of the non-overclockable CPUs might work fine after overclocking; some might not. Intel definitely doesn't want the negative press when some kid decides to overclock their non-K CPU and breaks it in the process. So I understand the decision.


> Some of the non-overclockable CPUs might work fine after overclocking; some might not. Intel definitely doesn't want the negative press when some kid decides to overclock their non-K CPU and breaks it in the process.

Have you participated much in the overclocking community? The whole point is that every CPU chip is different and can be overclocked by different amounts, some almost not at all. There is no "negative press", since anything past stock speed is a bonus which is what overclockers are trying to get. If CPUs were not working at stock speeds, that would be a reason for "negative press".


> Have you participated much in the overclocking community? The whole point is that every CPU chip is different and can be overclocked by different amounts, some almost not at all.

On the other hand, there is no way to actually determine how far a CPU can be overclocked while still maintaining full functionality, so it might be best to limit overclocking to systems that will not be used for anything of high financial or safety value.

The problem is that fundamentally the hardware is still analog. Digital is an abstraction on top of the underlying analog system. In the digital abstraction, a signal changes instantaneously from 1 to 0 or from 0 to 1. In the underlying analog system, the components carrying the signal have capacitance and resistance. Changing the high voltage that represents 1 to the low voltage that represents 0, or vice versa, involves discharging or charging that capacitance through that resistance, and that takes time.

This sets an upper limit on how quickly that signal at that particular point in the circuit can change digital state.

There are also other ways the analog nature of the underlying circuit leaks into the digital realm. Neighboring components that are in the digital abstraction completely isolated from each other (except through intentional connections) might be coupled by stray capacitances and inductances. This can let signals on one cause noise on the other, or the state of one could change how fast the other can change state.

When a chip is designed the designers can figure out what areas are the most vulnerable to potential analog problems. They can incorporate into their tests checks to make sure that these areas are OK when the chip is operated in spec.

The ideal scenario is that if you clock a chip fast enough to break something, the chip blatantly fails and so you find out right away, and can slow it down a bit.

The frightening scenario is a data dependent glitch, where you end up with something like if the ALU has just completed a division with a negative numerator and an odd denominator and there has just been a branch prediction miss, then the zero flag will be set incorrectly.
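To put rough numbers on that timing limit, a back-of-the-envelope sketch (every value invented for illustration):

  import math

  # A gate output charges its load capacitance C through an effective
  # resistance R toward Vdd; crossing the half-Vdd logic threshold takes
  # t = R * C * ln(2).
  R = 1_000    # ohms, hypothetical driver resistance
  C = 20e-15   # farads, hypothetical wire + gate load capacitance

  t_gate = R * C * math.log(2)  # delay per gate transition, in seconds
  logic_depth = 20              # hypothetical gates per pipeline stage
  t_stage = logic_depth * t_gate
  f_max = 1 / t_stage

  print(f"per-gate delay:   {t_gate * 1e12:.1f} ps")
  print(f"stage delay:      {t_stage * 1e12:.1f} ps")
  print(f"max stable clock: {f_max / 1e9:.2f} GHz")
  # Clocking above f_max latches signals before they finish settling --
  # exactly the kind of data-dependent glitch described above.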


Sorry, the negative press argument is utter nonsense.

If you run any hardware outside specifications, you expect it to fail. People brick phones and ruin engines but there isn't a backlash against people trying to jailbreak their phone or modify their cars. If anything, the people that matter —the enthusiast market for these devices— are demanding that their devices be more customisable. The press and other consumers don't give two hoots about little Jimmy trying to rice 5GHz out of his $100 CPU and turning it into liquid magma. Stupid kid was stupid.

The opposite is true though. If lil Jim manages to get a $5000 part for $100, other consumers are going to factor that into their purchasing decisions.

What is most concerning is that this is a part that has been out and about for a little while. There are dozens of guides recommending certain CPUs for this that Intel are going to patch up now. The articles and their recommendations will remain out there though. It's false advertising by the back door.


And that's just not true.

There would be no negative press for Intel. Everyone with the slightest knowledge about overclocking knows that overclocking can damage your parts. And as stated, parts breaking without a voltage increase is highly unlikely. But still: assume I buy an Intel non-K processor, base-clock overclock it, and it breaks. How on earth would I be able to produce negative press for Intel by publishing that?

It's simply profit optimization. K processors cost more; people who wanted to overclock had the option to buy non-K, and that reduced sales of the K line. Also, the i7-6700 is clocked well below the i7-6700K, so it was a nice option to get the cheaper version and bring it up to K level, saving about 100€ for some time (prices have since changed).

Behavior like that is why I buy AMD.


"Everyone with the slightest knowledge about overclocking knows that overclocking can damage your parts."

It is not necessarily about damaging your parts; that would be the least of their worries. Unreliability of the CPU is the real problem. Your CPU might be 20% faster, but if it incorrectly computes some number in your spreadsheet, corrupts a file, or, worse, corrupts your file system without immediate consequences, or, even worse, makes hardware (a drone, a self-driving car, a nuclear facility) behave incorrectly, they will not just have angry customers, but likely also lawsuits started against them (yes, they might win them, but not necessarily easily; people would complain that they should have closed that hole, given that they knew it was being misused).

Also, that 'everybody with the slightest knowledge about overclocking knows' is only relevant as long as overclocking remains a niche thing. If it were mainstream, many of its users would not have 'the slightest knowledge about overclocking'. This microcode update helps keep it that way.


> they will not just have angry customers, but likely also lawsuits started against them

Which is why they've prevented over-clocking the whole time, except they haven't.

Their chips that allow over-clocking and the chips that don't are physically the same chips; there's nothing special about them. Intel is doing this only so that you can't buy a cheaper processor and get a more expensive processor's performance.

I bought a 600MHz Celeron once, when the range was around 600MHz-1GHz. It overclocked stably to 900MHz. In essence I got a much faster processor for a much lower price. That is what Intel is fighting against, not some mythical lawsuit over life and limb.


Overclocking isn't just a joy ride. It gives you more for your money, reducing the demand for the more premium version.


I've never heard of a CPU "breaking" during non-LN2-level OC.


I have, but that was something like 15 years ago now



