Why does Intel encrypt the code? It is unlikely that anyone would steal it, because it is usable only with Intel CPUs and is protected by copyright. So the reason is either that Intel wants to hide something (code that changes CPU behaviour when a benchmark is detected?) or that it wants to have features that cannot be disabled or modified by the user.
It is difficult to believe that there is no backdoor.
Physics is another reason. Since things are so small these days, you don't want long runs of 1s or 0s, because the charge will "leak" into neighboring cells. In the case of buses, anything other than a nominal 50/50 distribution either requires too much power or confuses the automatic gain control on the RX end of the link.
DDR3 and DDR4 are a good example of this: the CPU scrambles[0] what is written to physical memory to get the data closer to a 50/50 mix of 1s and 0s.
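The idea behind that kind of scrambling is simple: XOR the data with a pseudo-random bit stream (typically from an LFSR seeded per address) so that even long runs of identical bytes come out roughly 50/50 on the wires; XOR with the same stream again to descramble. A minimal sketch of the concept in Python (the polynomial, seed and byte ordering here are made up for illustration and are not Intel's actual scheme):

    def lfsr_bits(seed=0xACE1, taps=(16, 14, 13, 11), width=16):
        """Fibonacci LFSR: yields one pseudo-random bit per step."""
        state = seed
        while True:
            yield state & 1
            fb = 0
            for t in taps:
                fb ^= (state >> (width - t)) & 1
            state = (state >> 1) | (fb << (width - 1))

    def scramble(data, seed=0xACE1):
        """XOR data with the LFSR stream; running it twice restores the input."""
        bits = lfsr_bits(seed)
        out = bytearray()
        for b in bytearray(data):  # bytearray() iterates as ints on 2.x and 3.x
            mask = sum(next(bits) << i for i in range(8))
            out.append(b ^ mask)
        return bytes(out)

    # A run of identical bytes becomes a roughly balanced pseudo-random pattern:
    print(scramble(b"\x00" * 8))
    print(scramble(scramble(b"\x00" * 8)))  # XOR twice -> back to b"\x00" * 8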
In this case I doubt this is the primary reason though.
There are two reasons I've seen for companies obfuscating as much of their hardware and firmware as possible:
1. Protecting their trade secrets. Even if the design is eventually leaked or copied, the time between introducing the product and a clone appearing might represent billions in sales.
2. Reducing patent damages. Hardware is a patent minefield, with suits and licensing deals pretty common. Companies that want to troll need to see clearly that their patents were infringed. Making the product as opaque as possible reduces the number that will notice in the first place and/or how certain their claims will look. Patent suits are the single greatest threat to open-source hardware companies that operate in an existing ecosystem.
IANAL, but I find 2 odd; my first thought was that obfuscation would give the opposing litigant an argument for willfulness, potentially increasing patent damages.
I.e., it may reduce the total patent damages across all potential cases by reducing the number that result in litigation, while increasing the likelihood of enhanced damages in any unsuccessful defense.
I can't help but think 2 is really a double-edged sword.
Well, I know they do No. 2 and it seems to work to some extent. My guess is they never bring up that motivation; they just say they hid their design to stop Chinese copycats and delay hackers.
They mostly try to protect hardware implementation details. Another example is how hard it is to get the data needed for writing good drivers; firmware is another. Even that, most companies don't want to share for fear it leaks some aspect of the design. Finally, as for the Intel ME backdooring all CPUs, there's a decent chance they're paid a fortune by one partner in particular to keep it there, with buggy, obfuscated code. That's on top of the ordinary reasons.
ME is (among other things) supposed to be the secure enclave for SGX keys, which Intel is hoping will allow encrypted computation on cloud computing clusters that are protected from inspection by the system admins.
So, without even getting into the whole DRM argument, yes?
I don't know if I agree. While obscurity can be a deterrent, I imagine most groups are unlikely to come up with enough convoluted complexity to prevent the identification of exploits, primarily because you're effectively pitting yourself against a collective of $hat hackers instead of doing things "right" and using encryption.
This doesn't mean it isn't a deterrent, but doing things in an obfuscated and unusual way also means you're potentially treading on unexplored ground, so there will potentially be other exploits, bugs, and inefficiencies that come with the territory.
This isn't an "instead of doing things 'right'". Like I said, it's only valid as one component of a broader strategy.
You might have a point if someone had ever made a perfect scheme, but everything that seems to have attracted attention has been broken AFAICT.
So, a moat around a castle does help. Calling it good after digging a moat and not building any walls is probably less useful than doing nothing. The moat is only useful in the broader context.
Conversely, sometimes the obscurity layer will obscure the issues not just from others, but also yourself.
I worked for a thin client vendor years ago that was obsessed with encrypting our boot image and so on, and poured all our resources into that. That led to encryption that was still trivial to work around, while a lot of other real security issues were never dealt with because we were fixated on the ineffective obscurity layer.
I am not defending Intel, but firmware encryption is a very common thing in embedded software. The new ARM Cortex-M33 architecture adds a ton of security and encryption features. This is becoming increasingly important in IoT devices.
Wow, that's awful! All the more reason to continue avoiding anything to do with IoT. Who would want to own household devices that keep secrets from you?
Tremendously. It is a huge first step. I worked on BIOS stuff before. I decided to wait until AMT was more exploitable to come back to the scene.
This is big news. It should be #1 on HN, instead of the failed pentest story.
Right now I am ordering a GB-BPCE-3350C to replicate their work and see how to extend it to other platforms. Intel will limit the vulnerability soon; there's no time to waste.
Consider multiple news sources. Getting info only from HN must leave you with a feeling of 'Facebook rules the world, blockchain all the things, ICO or GTFO, apps are only written in Go or Rust, Tesla Tesla Tesla'
Can we disable the Management Engine entirely? That way it doesn't waste energy, and it won't run a bunch of stuff that you don't know about and don't need.
I don't quite get how it works. A quick search tells me that USB 3.0 indeed allows A-A crossover cables to connect two hosts: https://superuser.com/a/945523
...but this leaves only a single conductor to do the entire thing?
And those 6 pins are separate TX and RX pairs, and 2 grounds.
The pins disconnected are the 5V line, which would potentially damage things or trip an overcurrent flag (turning off both ports) if it were connected on both ends, and D+/D-, which is the bidirectional USB 2.0 data pair that normally has fixed host/device roles and needs a special controller (like the one in your phone) to be dual-role, so it's easier to just disconnect it. The USB 3.0 TX/RX pairs are one-way each (think RS-232 null modem, but much faster), so it's much simpler.
This is a good example of how USB 3.0 works almost like a separate bus running in parallel with USB 2.0.
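For reference, here is the standard USB 3.0 Standard-A pin assignment written out as a little Python table, annotated with my reading of the comment above (this is an interpretation of the description, not the article's actual wiring diagram):

    # USB 3.0 Standard-A pinout and how a host-to-host debug cable would
    # treat each pin, per the description above (my interpretation).
    USB3_STD_A_PINS = {
        1: ("VBUS (+5V)", "disconnected: two hosts driving 5V risks damage or an overcurrent trip"),
        2: ("D-",         "disconnected: USB 2.0 pair has fixed host/device roles"),
        3: ("D+",         "disconnected: same reason as D-"),
        4: ("GND",        "connected straight through"),
        5: ("StdA_SSRX-", "crossed over to the far end's SSTX-"),
        6: ("StdA_SSRX+", "crossed over to the far end's SSTX+"),
        7: ("GND_DRAIN",  "connected straight through"),
        8: ("StdA_SSTX-", "crossed over to the far end's SSRX-"),
        9: ("StdA_SSTX+", "crossed over to the far end's SSRX+"),
    }

That matches the "6 pins": the two SuperSpeed pairs plus the two grounds, with VBUS and the USB 2.0 pair left out.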
Nobody else answering this question seems to have read the article.
> Like DAL, the OpenIPC library includes a command-line interface (CLI), written in Python and provided as a library for Python as part of Intel System Studio, which can be installed on the system with the help of pip.
Intel System Studio doesn't ship its own Python; it depends on the OS's Python version. So here we have an industry-infrastructure dependency on Python, and those typically still seem to be stuck in 2.7 land.
Python is being used here to actually wrap/coordinate the components that do the JTAG handshake. A neat approach, just using a whole programming language as the REPL. I like it. But this is one situation where I'm going to be very conservative with the tooling, as I don't want to break my system! :)
So, that's the real reason why.
Also, the repo actually has, right in view, a recent commit message of "fix for Python 3 compatibiliy". The authors are clearly just using 2.7 because ISS requires it, and considering the bigger picture of tinkering with PoCs, mucking about with multiple Python versions seems like a waste of cycles and an unnecessary amount of moving parts to manage and verify (I don't savor the idea of figuring out "OK, how do I tell ISS where 2.7 is, and how do I make absolutely sure it's avoiding 3.0?").
--
HN, please endeavor to pursue higher-quality discussion. Devolving into meta is an excellent way to encourage stagnation. (I know this sounds like a bit of a self-righteous thing to say, but it's tricky to articulate objectively.)
Agreed. Nothing repels an actual hardware hacker harder than finding that the reaction to a significant finding consists mainly of bikeshedding the Python version of their proof-of-concept script.
Python 3 is ten (!) years old and perfectly usable, but it suffers from a well-known yet curious problem: people who occasionally use Python for some light scripting tend to go for Python 2.7. Rarely is that a choice made because of library availability; more often it's because of the perceived ubiquity of Python 2.7 installs on the target user's computer, or simply because of a mindset where Python 2.7 is good enough and getting the hang of Python 3 seems like a huge obstacle (it isn't).
I'd love to know why this project specifically chose Python 2.7 too. Python 2 reaches its end of life in 2020; any new Python code should really be written in Python 3.
By the way, if you want to avoid getting a bunch of downvotes like your comment did, be polite! Omitting all capitalization and punctuation is considered downright rude by many; your text reads like a robot's printout.
It's not a huge obstacle, but it _is_ an obstacle.
I'm not a Python programmer, but I can find my way around without much issue. Still, from where I'm sitting, all that stuff about envs and the two, three, or four redundant, overridden, or deprecated solutions sounds strange to me, and I don't really want to buy into it just yet (I need Python about once a month, no more). For the casual programmer ("just want to change this line and run, please") it's a roadblock that virtually every random README on GitHub tells you to "pip install...", but then you have to second-guess whether they mean "pip2" or "pip3", and whether they assume you'll be running with v2 or v3. Maybe they tell you to run "pip install something" (which runs pip2), but the script has a Python 3 shebang... the installed dependency won't be found, and now you have to understand this whole mess if you want to continue.
This is on Ubuntu, where the system's "python" is Python 2 and you need to explicitly add a "3" to all your commands if you want to run v3. I'm sure there are clever ways to solve the issue, but I bet they'll break some use case. I'd just rather none of this nonsense existed at all; then no funny workarounds would be needed.
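For what it's worth, one low-tech way to cut through that confusion is to ask the interpreter itself where it lives and to always install through it with "python -m pip", so pip and the interpreter can't disagree. A tiny, standard-library-only check (works on both 2.x and 3.x):

    # which_python.py -- print which interpreter (and install prefix) a script
    # actually runs under, so the shebang and "pip install" can't silently
    # point at different Pythons.
    from __future__ import print_function  # so this also runs under 2.x
    import sys

    print("interpreter:", sys.executable)
    print("version:    ", "%d.%d.%d" % sys.version_info[:3])
    print("prefix:     ", sys.prefix)

    # Then install packages through the matching interpreter, e.g.
    #   /usr/bin/python2 -m pip install <package>
    # instead of guessing whether a bare "pip" means pip2 or pip3.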
I wonder, how come other languages are able to slowly introduce some breaking changes over time (I'd swear Ruby was mentioned in some other HN thread about doing this) but not cause this kind of mess (as perceived from my humble external point of view)?
You forgot one reason not to switch from 2 to 3: zero benefit. Nothing Python 3 does would benefit my scripts to any extent over just using Python 2 as I always have. Is learning 3 a huge hurdle? No, but in some cases it is simply unnecessary.
Great point. For many, Python 3 just wasn't a great carrot; that's why they didn't switch. For a while it was even slower. Most importantly, Python 2 wasn't that bad of a stick: it was just pretty darn good to begin with. And of course, as long as it is still supported on some OSes, people will keep using it.
Yeah, for writing scripts like this: Python 2, 0% chance of having to think about Unicode; Python 3, a 5% chance I have to waste time debugging some random str decoding issue, with no benefits. Why bother?
This might be true if you're an English speaker, running the script on an English platform, and only consuming data from English services, and if you're also sure that no non-English speaker will ever take over the development of your script and you will never have to localize it to other languages. Otherwise, it's exactly the opposite. Python 3 is what you should be using if you want a 0% chance of being stopped by a string encoding issue.
Python 3 might occasionally require some extra steps when consuming strings compared to Python 2, but the reality is that those steps were always necessary. Python 2 just hid those details in a way that was only really safe for English-only development. That doesn't mean Python 2 is easier to use or less brittle. In fact, I would say it means the opposite.
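A minimal illustration of that point (the filename is just an example, and the input is assumed to be UTF-8):

    # Python 3: reading bytes and decoding them are separate, visible steps.
    with open("dump.log", "rb") as f:   # bytes in, nothing decoded yet
        raw = f.read()
    text = raw.decode("utf-8", errors="replace")  # the step Python 2 quietly skipped

    # Python 2 lets you mix str and unicode freely and only blows up at runtime,
    # when a non-ASCII byte finally hits an implicit ASCII decode:
    #   u"prefix: " + "caf\xc3\xa9"  ->  UnicodeDecodeError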
For most strings I don't care about language, encoding, or the related overhead. In my scripts they are best dealt with as opaque bytes, with a few specific byte patterns that are the same in ASCII and UTF-8, as well as various other encodings.
The last Unicode issue I had was with German characters on one system, because some library assumed it had to explicitly perform encoding with a bad default setting. If the library hadn't tried to be smart, the program would have worked regardless of system or language; instead it failed on any non-English system by trying to convert a perfectly fine, system-specific encoding to UTF-8.
FWIW Win10 was affected by that same horrific SMB RCE vuln. So the implied argument that 10 has been immune or even much less vulnerable to ransomware over the past year or two is on shaky ground, though I agree it probably will start to have some merit going forward.
I suppose, unless someone either forks it or keeps delivering patches outside the Python project. That wasn't really an option for Windows XP, but I'm quite sure that if it had been, someone would be doing it.
Not actually the case. With UTF-8, using only specific operations that don't split up strings or replace things that aren't exact matches for a given text, valid input will always result in valid output.
All interchanged Unicode text should be UTF-8; never use another encoding* (without a really compelling reason).
* No, storing it as an array of Unicode characters isn't a compelling reason during interchange.
ALSO, never use a BOM; that will break things.
The second answer there (it should be anchor-linked) goes over MOST of the advantages of UTF-8, but it doesn't capture that some carefully chosen operations in otherwise completely Unicode //unaware// 'string' functions leave string validity unchanged.
The only potential issue is if recognizing something across different normalization forms is important. However, for nearly all quick-and-dirty tasks (where a short script is most likely), it usually doesn't matter. For everything else, a different paradigm than the one Python 3 picked would be better (one where adding filters to a read file is OPTIONAL and they can be invoked on individual byte strings as well).
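To make the opaque-bytes point concrete: exact-match operations on ASCII delimiters can't corrupt UTF-8, because bytes below 0x80 never occur inside a multi-byte sequence. A quick demonstration (the sample text is arbitrary):

    # Treat UTF-8 as opaque bytes and only touch exact ASCII byte patterns.
    data = u"na\u00efve caf\u00e9, \u65e5\u672c\u8a9e\n".encode("utf-8")

    fields = data.split(b",")               # safe: b"," can't appear mid-character
    patched = data.replace(b"\n", b"\r\n")  # safe for the same reason

    # The results are still valid UTF-8; .decode() would raise if they weren't.
    for chunk in fields + [patched]:
        chunk.decode("utf-8")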
You don't really have to "learn" Python 3, and there are plenty of cool features (OK, being cool isn't always reason enough, but it is a reason) as well as libraries that are no longer supported on 2.x.
Also, there isn't that much longer until support drops entirely for 2.x, and then you really should port your projects. Why wait and port? Just write them in 3.
There is nothing wrong with 3. Just starting in 3 is easy and not an issue. It is basically the same language.
> I'd love to know why this project specifically chose Python 2.7 too.
My guess is they just went with whatever was most convenient and worked. I doubt this kind of question was even on the radar. It's security research, not a production website. In research you tend to go with whatever tool does the job.
Look at the overall complexity of this work. Maybe the author simply felt more confident with Python 2. You take whatever shortcuts make things simpler, even if that means using 20-year-old software.
June 30th, 2024 is not 10 years away. RHEL 7 will be the last Red Hat release with Python 2 by default, and it goes end of life in 2024.
Additionally, expecting them to properly support all the new features and manually create (not backport) fixes for issues is crediting Red Hat with more resources than reality allows.
EOL for RHEL 8 will be in 10 years, and Python 2 in RHEL 8 will be a fully supported package even if it is not installed by default. And I suspect it's not only Red Hat that will need to support Python 2 past 2020; for example, I'd guess Ubuntu 20.04, with EOL in 2025, will still have a package. So there will be upgrades and fixes for Python 2, just not from python.org.
That's called "dealing with the reality of international text instead of burying your head in the sand," isn't it? Bytes are not text any more than bytes are a picture or bytes are a sound recording; it is only in the context of an encoding that bytes can be interpreted as something more.
For that matter, given the inexplicable popularity of emojis, it isn't even a matter of international text anymore.
"dealing with raw bytes" = processing raw binary data (firmware dump, binary network protocols etc), not complete works of Shakespeare translated to 10 languages.