The hijacking flaw that lurked in Intel chips is worse than anyone thought (arstechnica.com)
208 points by podiki on May 7, 2017 | hide | past | favorite | 95 comments



I wish this security incident would get more attention. It seems fundamentally unfair that some projects that are essentially volunteer labor get so much scrutiny, while Intel doing something so fundamentally asshole-ish gets mostly ignored.


Being an advertiser has its privileges.


If scrutiny improves the project it's ultimately a good thing.


...which is exactly why Intel chips should get it too.


Totally, but what I'm saying is it's not unfair, it's an inherent benefit of OSS.


There are two things that don't get mentioned much about this issue.

1. There's a second bug that allows non-root local users to provision AMT. "An unprivileged local attacker could provision manageability features"[1]

2. Access to AMT allows you to boot a recovery image, mount local drives, and do whatever you like with the included remote KVM.[2][3]

So, even if this is turned off, there are issues to address. If it's on, they have control of the whole machine, remotely. It's as bad as it can get.

[1] https://security-center.intel.com/advisory.aspx?intelid=INTE...

[2] https://software.intel.com/en-us/node/681803

[3] https://software.intel.com/en-us/node/674998


There is a great comment linked at the bottom of the article that explains what exactly caused the authentication bug (it was strncmp!). Is there somehow a law that states that the higher the severity of a software bug, the simpler the reason behind it?


From the comment:

What the programmer should have done is check if the hash coming from the browser has the correct length, 32 characters, before attempting to compare the two strings. Or even better, the programmer should have used the proper string comparing function, strcmp, that already does that for you...


strcmp has its own issues. What they actually should have done is use a cryptography library with properly implemented comparison, because it is surprisingly easy to get wrong.

Also, fixed length hashes aren't strings. They shouldn't use the same functions as strings.

Edit: Using a proper cryptography library also might have saved them from using MD5. At the very least, most modern libraries have a warning that it is deprecated and should not be used for new projects.


Yep, even if they had used memcmp/strncmp correctly the AMT would still be vulnerable to a timing attack here, which is probably even easy, because it runs on a low-power system with, I assume, not much background activity.


In addition to timing attacks, if the attacker supplied a non null terminated string, it could potentially cause a memory overrun. This may not be possible depending on how the string is handled on ingress, but if it is, it would likely have serious consequences.


The attacker supplies a quote-enclosed string and the HTTP parser is supposed to verify this.

Maybe they used some in-place HTTP parser and that's why they didn't have zero-termination to use strcmp.

And btw, it's not clear if the code was C to begin with. I believe a large chunk of the ME firmware is said to be written in Java, which kinda makes sense for a network-facing system considering how shitty string.h is.

It would be interesting if somebody found that they are using some COTS webserver (C or Java, doesn't matter) and checked whether there are CVEs for it.


I doubt using Java would really help in this case, because in order to use Java they would have to port the entire JVM to whatever platform they are using. Doing this without introducing any security bugs is about as difficult as just doing everything in C to begin with. By the way, the fact that the issue was with strncmp very strongly suggests that the issue was in C code.


Some security paper showed ARM disassembly of the issue, although I'm not sure if that was this specific issue or just another "strncmp-induced" issue.

(Btw. there are many smaller Java VMs for stuff like Java Card and JEFF)


They use ARC architecture and some embedded JVM. Of course this doesn't guarantee that the AMT web server runs on this JVM.

A few years ago somebody published an analysis of the ME firmware for servers (the desktop version was compressed with an unknown algorithm). I'm sure you can google for the details.

> the fact that the issue was with strncmp

But we don't know for sure that it was strncmp. It's a logic bug, which can be implemented in any language. But again, maybe this server runs outside the JVM and is written in C.


They were implementing HTTP Digest[0]. This leaves no room to choose something other than MD5. And I'm not sure what other HTTP authentication scheme they could have chosen for firmware (i.e. it cannot take a dependency on anything in infrastructure, like PKI or Kerberos).

[0] https://en.wikipedia.org/wiki/Digest_access_authentication


> Also, fixed length hashes aren't strings.

In HTTP everything is a string.

> Using a proper cryptography library also might have saved them from using MD5.

HTTP, again. Browser sends MD5 whether you like it or not.


> In HTTP everything is a string.

Once you parse it out of HTTP it is no longer a string (or at least it should no longer be a string).

>HTTP, again. Browser sends MD5 whether you like it or not.

They are quite clearly doing verification themselves. They can use whatever algorithm they want.


It has to be MD5. They use HTTP digest auth, which uses MD5. You send a nonce, the browser throws a popup asking for password and returns md5(nonce+pass). No way around it.


Very easy way around it: do NOT use digest auth.


Sure, which brings us to the second point: HTTP's MD5 digest auth is still good enough to keep passive eavesdroppers from learning the password when TLS isn't used, which is its only purpose.


I think the programmer should have supplied the length of the "computed_hash", not the "response", which as I understand it is supplied by the user. Like this: strncmp(computed_hash, response, computed_hash_length)


You don't want to use strncmp() for this; aside from the timing attacks it opens up, using strncmp() for these kinds of comparisons implies that the operation you are performing is "string a is a prefix of string b" (or vice-versa).

Even though your example ends up being ok-ish (if the computed hash is a prefix of the response, perhaps it is ok to ignore any trailing junk in the response), intent is important for code quality and maintainability.

In this instance, the operation desired is "string a matches string b", which means strcmp() would be the right solution (ignoring timing attacks).

Of course, since we're talking about sensitive crypto operations here, neither is really the right answer. But in non-crypto contexts, if you want to know if two (valid) strings are the same, just use strcmp().

The "n" and the length argument don't automatically make strncmp() "safer" somehow; it is a totally different operation.


Yes. It was hilarious that the poster thought strcmp was the solution...

Though perhaps memcmp with fixed size buffers would be better still, no worrying about null terminated strings.


It almost smells like an obfuscated tactic to evade review. In a rush a reviewer might see strncmp and think "Not using strcmp. Check." without going all the way to consider the strlen as argument, which really should have been a constant and the smoking gun.


That comment is great, but where is the source? I can't find any technical details following links in the article. Moreover, Embedi, the team who found the vulnerability, says[0] on their site: "Intel representatives have asked Embedi to hold off on disclosing any technical details regarding this issue until further notice."

Or is the commenter just guessing?

0. https://www.embedi.com/news/mythbusters-cve-2017-5689


Yes, that is crazy: checking for a match based on the length of the input. If the input is "", it matches anything.


So where does all this authentication and web UI code reside? Is it in the BIOS? Is there a packed jQuery somewhere or something?


It's in the AMT code, written in C, in the management engine processor...which is separate from the main processor. See: https://software.intel.com/en-us/node/631399

They are using HTTP Digest Authentication, which is built into browsers. The purpose was to keep passwords from being clear text over regular http connections.

So, the code on the client side is in the browser. The code on the server side is in the management processor, and it is a C implementation of HTTP Digest Auth.

The bug is that they used strncmp, but used the length of the incoming hash from the client as the string length to compare, versus the actual length that the hash string is supposed to be. The exploit is to send an empty hash. That requires a proxy, or browser plugin, since the browser creates the Digest Auth Headers. The empty hash causes something like strncmp(expected, received_hash_string, 0), and of course, two zero length strings are equal.


> The bug is that they used strncmp, but used the length of the incoming hash from the client as the string length to compare, versus the actual length that the hash string is supposed to be.

Ignoring timing attacks, they should have used strcmp(), not strncmp(). strncmp() is for testing if one string is a prefix of another; they wanted to test if some string equaled another.

Since this is crypto related stuff, though, they should be using a strcmp()-like routine that works in a content-independent timing fashion.


But using strcmp could result in buffer overflow, no?


How so? We aren't talking about strcpy(), just strcmp(), and all the strings in question should be nul-terminated (if they aren't then mem* routines should be used and more length checks would be needed; there are also timing attacks to worry about if we're considering all the possible issues).

But normally strncmp isn't safer than strcmp. They just do different things.


One interesting note in the article: « unauthorized accesses typically aren't logged by the PC because AMT has direct access to the computer's network hardware. »

That's unsurprising now I think about it, given how AMT works, but aren't the sort of companies that would want to use AMT also the sort of companies that have security policies that require all such things to be logged?


I disagree with the article. If anything, it's much less severe than many people thought.

- It's a logic bug (authentication bypass) instead of a memory corruption. An authentication bypass is bad, but a full compromise would have been much worse.

- It's a bug in the opt-in AMT management, which means that the default config is not vulnerable.


>"much less severe than many people thought"

I don't think so. There's a second bug that allows local non-privileged users to provision AMT.

And, once it's up and running, you have a full remote KVM where you can boot recovery disks, edit the local files then reboot, etc.

Having AMT on isn't that unusual either. It's not the default, but lots of people use it. I know some digital signage units using NUCs have it on, for example. Also, some large company data centers use it instead of IPMI. The ports aren't exposed to the internet, but if hackers get inside another way, they are off to the races.


"Having AMT on isn't that unusual either. It's not the default"

AMT is on by default in at least a number of the popular ThinkPad series[1]. (I'm the only owner of a relatively old ThinkPad, and only because of this security alert did I check and find it was on; I never turned it on myself.)

[1] https://forums.lenovo.com/t5/Security-Malware/Intel-AMT-back...


"An authentication bypass is bad, but a full compromise would have been much worse."

Given the authentication bypass essentially leads to full compromise, where's the difference, here?


The difference is, you are no longer limited to functionality normally provided by AMT. Maybe AMT doesn't offer the ability to read arbitrary memory locations from the running OS and applications; with remote code execution on the ME you could do that. You could modify code running on the CPU on-the-fly and make it read files from disk to memory for downloading too. Again, I don't think AMT can read disks, especially when the OS is running and using them concurrently.


Exactly, AMT is essentially a remote console without any sort of direct access to the running system. You'd have to reboot the system with remote recovery media in order to compromise it.


You can boot a recovery image though, and edit files on the installed OS that way.


It just isn't quite as stealthy if you have to power up the box when it's supposed to be off.

I mean, not everybody runs Windows 10 and is used to his machine having a free will of its own ;)


The ME itself is not compromised.

Arbitrary code execution on the ME would mean total control of the host, i.e. privileges above any software you can run on a computer.


AMT offers basically IPMI though. Boot a recovery image over the net, mount the local drive, and edit away.


> would mean total control of the host, i.e. privileges above any software you can run on a computer.

You don't need arbitrary code execution, the ME already has privileges above any software you can run on the computer. The ME operates in ring -2 mode [0], whereas the OS kernel has at most ring 0 privileges.

With the built-in ME functionality you can: reboot the host, change BIOS settings, re-install the OS, update BIOS (boot FreeDOS & run the vendor utility).

Couple this with a vendor vulnerability such as not signing and verifying a BIOS before flashing, and you can easily use the ME to flash a BIOS with malicious components to the computer.

So, yes, it's not as bad as using the ME to read arbitrary memory regions while the host is on, but the default ME functionality for remote management is still enough for a malicious actor to cause a lot of harm.

[0] https://en.wikipedia.org/wiki/System_Management_Mode


> ring -2 mode

ME runs below the SMM. So ring -3 or lower.


ME is a separate processor, not a "ring" on the x86.

All this talk about "privilege level above normal software" and "ring -7" just muddies the water. It simply is a physically separate core running its own software which has access to all RAM and all PCIe devices and can program the Intel NIC to silently redirect selected packets away from the x86 cores to itself.


The bug is not 'opt-in'. If you need a processor with TPM then it will have AMT. The only question is whether the vulnerability is local or accessible online.


Sure, but AMT is off by default. How would you exploit it locally if it's not enabled?


There's a second bug that allows local, non root users to provision AMT.


At what point does it become reasonable to conclude that strncmp, and every other string API that treats the pointer to the string and its length as separable variables, are too dangerous to be used in security-sensitive software? I feel like I've seen another vulnerability this week from someone sending the wrong length to a standard C string function.

If you want to fix this, there's no strict need to move away from C (not that I'd particularly encourage you to stay, but if you have some reason to prefer C, you can solve this in C). There are a number of C libraries that give you a struct string {char *ptr; int len;} of some sort, from bstring to GLib. And there's always C++, too.

What need does the Intel AMT firmware have for compatibility with POSIX string APIs? It's not running a POSIX OS, is it?


> At what point does it become reasonable to conclude that strncmp, and every other string API that treats the pointer to the string and its length as separable variables, are too dangerous to be used in security-sensitive software?

In general I agree, we have a long way to go to writing more secure software by default.

But this specific issue is different. The issue was not that the string length was incorrect (the length passed to strncmp was the correct length for the response string); the problem here was a logic problem, which is harder to get right just by switching string implementations.

The buggy code in question was basically (in an imaginary language with fully managed strings):

    if response is_a_prefix_of expected_hash:
      allow_login()
when it should have been

    if response is_equal_to expected_hash:
      allow_login()


Yeah, but if you are writing that pseudocode, it's obvious that is_a_prefix_of is the wrong operation.

The problem is that strncmp is in fact a prefix-comparison function when used with n calculated one way, but also it's the "safer" version of strcmp when used with n calculated another way, and it's easy to confuse the two uses.

In any other string implementation, you'd just use the most obviously-named compare function, there would be no need for a "safer" version of that function (there'd just be one reasonable comparison function), and it would do the right thing.


> also it's the "safer" version of strcmp when used with n calculated another way

This isn't true; first off it isn't "safer" than strcmp() at all (this isn't strcpy vs strncpy after all), and second, no length argument to strncmp() will make it act like strcmp(). In order to get strcmp(), you have to also check the lengths are the same first.

    strlen(a) == strlen(b) && strncmp(a, b, strlen(a)) == 0
This is less safe than using strcmp() when you want equality (as this Intel issue shows) because it is easy to forget the length check. strcmp() does it implicitly for you.

I agree the names are not ideal (in fact I write my own streq() and strpfx() routines because I think it reads easier and it is also easy to get the return value of strcmp() wrong), but again, that has nothing to do with separating the string's pointer and its length.


I agree with you, but please argue with the other person who objected to my comment saying that strncmp was obviously the safer version of strcmp. :-)

An API where experienced users don't even agree what the function is supposed to be is a bad API.


How do you know all parties involved in this discussion are experienced?


Could you elaborate on why "treating the pointer to a string and the string length as separate vars" is dangerous to someone who hasn't done a lot of C?


Basically, it encourages you to store only the string pointer and re-calculate the length (or let a library function do so) or use the wrong length value, which means if there's some specific length or capacity worth paying attention to, it's easy to get it wrong.

Really you want both a length, of the data actually in the string, and a capacity, marking how much memory is valid.

If I'm understanding the vulnerability right: strncmp takes three arguments, the beginning of the first string, the beginning of the second string, and the maximum number of bytes to compare. That maximum isn't quite a length or a capacity. It's a bound on the length, if one of the strings isn't null-terminated. But if you provide too small a maximum, it'll only compare the first few characters, and return a value based on that.

In particular they compared the target string to the user-provided string, with a user-provided length, so if you provide an empty string, it compares 0 bytes and returns success.

An API of the form strcmp(struct actual_string a, struct actual_string b) wouldn't have this problem - the actual string structures (instead of a char pointer) would provide a length for both strings, so the API wouldn't let you make this sort of error.


Because occasionally you use wrong length with wrong pointer and read or overwrite something you didn't want to.

It's more convenient and safer to have these two variables packed together as a "string object" and have string functions operate on that - then they always know where the string starts and ends in memory.


If you access a memory location beyond the end of the string, you can get all sorts of unexpected behavior. The string's length is something you'll pretty much always need to take into account.


Many libraries and systems (e.g. Win32) just go with null-terminated strings and their functions don't accept a length argument at all - so I wonder if some people never store the length simply because they don't believe they'll ever need it.


strncmp and other APIs that treat the pointer to the string and its length as separable variables are not dangerous. They are much more secure than the non-'n' options.

With them you don't have to worry about lack of null termination as the n determines the length of the string.

The problem here was using them at all. Hashes are not really strings; memcmp should have been used, or better still a secure memcmp that doesn't leak timing.

The problem here was a mistake in the program, not in the C lib.


> strncmp and other APIs that treat the pointer to the string and its length as separable variables are not dangerous. They are much more secure than the non-'n' options.

"More secure" doesn't imply "not dangerous". It's definitely safer to drive a car without seatbelts than to ride a bike down the highway, but that doesn't mean that not wearing your seatbelt isn't dangerous.

> The problem here was using them at all. Hashes are not really strings; memcmp should have been used, or better still a secure memcmp that doesn't leak timing.

This is incorrect for two reasons.

The first is that hashes in ASCII format are strings: it's useful to want, for example, case-insensitive hash comparisons.

The second is that (if I'm understanding the vulnerability correctly) the exact same bug would be present with memcmp:

    if (memcmp(target, user_string, user_string_length) == 0)
> The problem here was a mistake in the program, not in the C lib.

Yes. But a library that people make mistakes with all the time is dangerous. If you are a programmer who ever makes mistakes (and clearly the AMT folks are, but I'm pretty sure so is everyone), you should avoid dangerous libraries.


It just keeps getting better: Intel's diagnostic tool is published with an MD5 checksum.


That's fine. I will literally give you $10,000 if¹ you can give me an MD5 preimage attack. Take your attack vector to be that particular md5sum that you are making fun of.

MD5 is not broken for the usage that you think it's broken for. Please don't snipe on things like this again. People like you who say things like "md5 is always bad" or "you should bcrypt, duh" are literally cargo culting the idea of computer security.

In addition to the fundamental technical deficiency you've demonstrated, people who opine on security threads while not quite understanding anything about computer security allow charlatans to sell "computer security". Every single comment like this lends credence to literal scam artists.

¹note, this is actually a prize for a weaker claim than a preimage attack; all I want is a second preimage (!)


The other way to look at this is..

MD5 is known to be a poor option. And every few years the practicalities of attacking it increase. And when the type of attack described here does become practical, all we'll be hearing is one side saying "they had ten years to fix this" and another side saying "but experts we consulted said it was fine".

It is incredibly easy for whatever process builds a website with an MD5 hash of a download to instead display something else, so the cost trade off becomes:

Cost: virtually nil. Mitigation: currently minimal, but with future potential.

That a company would look at that and say "I guess we'll make a choice to stick with MD5" says something, even if it doesn't say the website downloads are easily compromised.

Alternatively, the parent comment could be pointing out that no hash signature that only exists on the download site itself offers value, and they would be better served offering a GPG signature or similar.


The reason that people move away from a hash function when it loses collision resistance is that collision resistance serves as a measure of the security margin for preimage resistance. Cryptographic primitives are intended to be rotated out of use about 10-15 years before they become actually broken for their intended purpose. Unless you are willing to extend your bounty to the year 2030, it means nothing.

In addition, you are completely wrong about the harm caused by simplifying the rules of thumb for security. There is no good reason to ever use MD5 for this purpose, and saying "MD5 is ok in some cases" has the potential to do far more harm than "never use MD5".

> note, this is actually a prize for a weaker claim than an preimage attack; all I want is a second preimage (!)

Don't act like you are doing GP a favor, second preimage is the attack that would cause a vulnerability in this usage of MD5.

You are the one who is "opin[ing] on security threads while not quite understanding anything", so I think before the next time you condescendingly correct someone you should make sure you have a basic understanding of what you are talking about.


You're clearly somebody who wants to rely on "rules of thumb" for security. I'm arguing against the entire concept of rules of thumb and you've since argued against a strawman. Did I ever say "MD5 is ok in some cases"?

Aside: a preimage attack implies a second preimage attack.


>You're clearly somebody who wants to rely on "rules of thumb" for security. I'm arguing against the entire concept of rules of thumb

Not all devs are security experts, or can even afford to focus on security. To deny the value of rules of thumb is to deny that they need to consider security.

> Did I ever say "MD5 is ok in some cases"?

Oh, really? Were you not arguing that MD5 is ok if you only need preimage resistance?

> MD5 is not broken for the usage that you think it's broken for. Please don't snipe on things like this again. People like you who say things like "md5 is always bad" or "you should bcrypt, duh" are literally cargo culting the idea of computer security.

Hmm. Seems like you were...


There are easy collision attacks against MD5. If they used a stronger hash, we would know that they couldn't have been coerced to use a hash that collides with a bad version of the update.


> I will literally give you $10,000 if¹ you can give me an MD5 preimage attack.

It's almost certainly a question of when, not if, I will be able to collect on that offer.


"MD5 is not broken for the usage that you think it's broken for."

Correct; it's broken for everything since we've got the computational power to make it essentially ineffective.


That's not entirely true. It's broken in the sense that for every given hash we can compute an input that produces this hash. Constructing a useful input (in this case a working exploit) is much less trivial. The complexity depends significantly on what the desired input should look like. It's easier for inputs that have the property that chunks of them can be ignored (Google's PDF example), but still hard for others.


We cannot produce an input that produces an arbitrary given hash. Here's an easy counterexample: the all-zeroes md5sum. That's an even harder problem than the one I've claimed is entirely not solved.

MD5 is still preimage and second-preimage resistant.


Hello fellow ordinary person using an account created a year ago and finally posting today for the very first time with comically irrational outrage over the possibility of people ever moving away from md5sum thus rendering useless whatever subset of shady tools you or your employer has that we don't know about yet. Consider making innocuous comments in various threads for a few weeks before you switch to psyop mode next time.


Tinfoil much? As if what will really help the NSA and co. is 1-2 random comments of technical pedantry regarding MD5 on an insignificant (in the grand scheme of things) forum people go to discuss the latest startup news, JS frameworks, the benefits of functional programming, Golang, tech industry developments, and some CS-related news.


Sustained disinformation campaigns are a go-to strategy. Redirecting and confounding conversations are documented tactics. They don't even have to be particularly well veiled or surgically placed as their value derives from their volume.


It's because both colliding files have to be specially prepared by the attacker before they are published on a download site or presented for signing by a code signing scheme. https://www.win.tue.nl/hashclash/SoftIntCodeSign/

Which means the published MD5 on the Intel site would have to be the hash that the attacker created.

Though I'm sure there are better methods that Intel could implement.


Because if you are going to do security wrong, why not do it really really wrong.


This is a dream come true for AMD.


AMD has a similar system, likely with similar issues. Their smaller footprint means such issues just take longer for people to notice than an Intel snafu. Hopefully this will give them a preemptive heads-up to harden theirs before an exploit is discovered.


AMD made some noise (way before this vulnerability became known) about open-sourcing the code of their ME-equivalent and allowing people to flash their own code.

That seems a lot better than Intel's stance on the ME.


Unless something new happened that I'm unaware of, it was more like "the people of Reddit made some noise in an AMA, and AMD basically said "we'll look in to it"".


This should be described more as a backdoor that the NSA/CIA can walk through than as a hijacking flaw or "Oh! I didn't know there was a flaw!"

Surely a new one must have been created, since the NSA tools are out on the net.


Is the amount of attention this is getting commensurate with how interesting/surprising/consequential a security flaw it is?


I don't think it's just that the flaw exists, which alone is pretty bad, but how widely deployed it is in large enterprise, coupled with Intel ignoring it after being given years of notice and also the unease about AMT in the OSS community since its deployment. To be honest, I doubt 9/10 people who care about this are going to stop using Intel, but it's still a comedy of errors and I think everyone loves a spectacle.


> Intel ignoring it after being given years of notice

Sure about that? From what I can tell, it's a recently discovered vulnerability that was promptly fixed.


According to SemiAccurate (terrible name for a source) they reported it to Intel some time ago.


As far as I know, Embedi and not SemiAccurate found and reported the issue.

https://www.embedi.com/files/white-papers/Silent-Bob-is-Sile...

> An authentication bypass vulnerability, which will be later known as CVE-2017-5689, was originally discovered in mid-February of 2017 while doing side-research on the internals of Intel ME firmware. The first objects of interest were network services and protocols.


No, it looks like it's not getting enough attention. (Well, OK, it is not terribly surprising.)


Interesting how this article steadfastly refers to the subsystem as AMT and never mentions Management Engine/ME. A deliberate ploy to redirect people away from discovering all the ME-disabling research and tech? Perhaps. This is what happens when "journalists" post copypasta from publicists.


From the article:

> When AMT is enabled, all network packets are redirected to the Intel Management Engine and from there to the AMT. The packets bypass the OS completely. The vulnerable management features were made available in some but not all Intel chipsets starting in 2010, Embedi has said.

The vulnerability is in AMT, not ME (though I agree that the ME is an unnecessary security risk, and am well behind the current research to disable it).


AMT's an optional component of the ME, it's the part that enables remote management, and it's a flaw in AMT that they're describing. It seems reasonable to talk about the part that has a problem (although it wouldn't have been unreasonable for them to mention that it's part of the IME, of course).


I quote: "When AMT is enabled, all network packets are redirected to the Intel Management Engine"


As far as I know, it's only doing this for the primary (on-chip) network interface. /Never/ use the primary NIC if you have an Intel CPU.


Which is wrong, by the way. What actually happens is that a filter is installed on the NIC that redirects some packets (those addressed to the ME).



