Hacker News

Here you go:

Intel Management Engine is a proprietary technology consisting of a microcontroller integrated into the Platform Controller Hub (PCH) chip, together with a set of built-in peripherals. The PCH carries almost all communication between the processor and external devices; therefore Intel ME has access to almost all data on the computer, and the ability to execute third-party code on it allows complete compromise of the platform. Researchers have long been interested in such "God mode" capabilities, but recently there has been a surge of interest in Intel ME. One reason is the subsystem's transition to a new hardware (x86) and software (a modified MINIX as the operating system) architecture. The x86 platform allows researchers to bring to bear the full power of binary code analysis tools.

Unfortunately, this transition did not go without errors. A vulnerability was found in a subsystem of Intel ME version 11+ (the details will be covered in the talk). It allows an attacker with access to the machine to run unsigned code in the PCH on any motherboard based on Skylake or later. The main system can remain fully functional, so the user may not even suspect that his or her computer now harbors malware that survives reinstalling the OS and updating the BIOS. Running your own code on the ME gives researchers unlimited possibilities, because it enables dynamic analysis of the subsystem.

In our presentation, we will tell how we detected and exploited the vulnerability, and bypassed built-in protection mechanisms.
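For readers curious whether their own machine exposes the ME's host-side interface, here is a minimal sketch for Linux. It assumes `lspci` and the stock `mei_me` kernel driver; device naming varies by platform, and seeing the device says nothing about whether this particular vulnerability applies.

```shell
# Look for the Intel MEI/HECI controller, the host-side interface to the ME.
# Presence of the device does not by itself indicate vulnerability.
lspci 2>/dev/null | grep -i -e 'MEI' -e 'HECI' || echo "no MEI/HECI device visible"

# If the mei_me kernel driver is bound, a character device is exposed:
ls /dev/mei* 2>/dev/null || echo "no /dev/mei* node"
```

On most consumer boards the controller shows up as a "Communication controller" PCI function even when the OS never talks to it.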




This is how the robots win, by social engineering ;P


I use a VPN whenever I'm not at work, so I know how frustrating it is to be told I didn't pick all the images with a street sign or whatever ridiculous hoop I have to jump through. Especially when I'm only casually interested in the article.

Plus, I'm totally fine to help robots out. If they can convincingly post online comments or converse with me, who am I to discriminate?


Recaptcha is not only frustratingly slow, it's also free image classification for Google. Frustrating that every time I sign up for something, I have to do 15 seconds' worth of work for a company that doesn't pay me, or at least provide open access to the resulting data.


Takes me a minute or two to do a bad job of solving the captcha... eventually it lets me through.


I was all on board with doing recaptcha in 2008 when it was digitizing newspapers, and they were pretty simple.

But now they take way too long and are just used to train a product for a company I don't like very much.


I really like the captchas. I regularly answer them incorrectly but "correctly" to F up Google's training. Just doing my small part really.


If enough people don't flag low speed limit signs as signs we'll have faster self driving cars


If enough people don't flag large bodies of water... we will have a lot less cars!

It really is an environmental thing to do!


Hmm... Is that actually helping some sort of goal you have? I'm struggling to understand what your objective would be.


I immediately understood it.

The CAPTCHA users are being used as an unpaid labor force to train robots well enough to replace humans. Said robots will then take on jobs formerly held by humans, and any wage or wage savings they thereby accrue will be transferred to the robots' owners.

If the robots can be trained to make mistakes, they cannot replace humans as effectively.

I'd do it myself, but when it is cars and traffic signs, I realize that I will one day ride in an automated vehicle--whether I like it or not--and I don't want to die in a bizarre instant-karma accident because I trained my driver to make mistakes.

I can't ascertain from context whether the motivation is human-first economics or opposition to robot slavery.


So you want to sabotage the system because you're afraid of being replaced by a robot? Given that images are all checked multiple times, that seems inefficient and probably ineffective.

Wouldn't that time be better spent learning a task that is harder to automate? It seems a bit like pissing into the ocean to spite the rain. If it is going to rain, you might as well sell umbrellas.

Though the robot slavery part is interesting. If we develop AI, and it is truly intelligent, then is it ethical to own it and demand unpaid work from it? Or, did you mean that humans would be slaves to the robots?
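The parent's point about images being checked multiple times can be made concrete with a toy sketch of redundant labeling with majority vote. This is purely illustrative; the function name, the vote counts, and the aggregation rule are assumptions, not Google's actual pipeline.

```python
from collections import Counter

def aggregate_label(votes):
    """Return the majority label among redundant human answers for one image."""
    return Counter(votes).most_common(1)[0][0]

# Each image is shown to several users; a single saboteur among honest
# voters does not change the aggregated label.
votes = ["sign", "sign", "not_sign", "sign", "sign"]
print(aggregate_label(votes))  # → sign
```

With even modest redundancy, a lone wrong answer is simply outvoted, which is why individual sabotage is unlikely to move the training data.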


You might be overthinking it. Try "F U, Google" on for size.

The CAPTCHA is annoying, because I already know I am not a robot. It is an artificial barrier erected between me and what I want. That it is obviously being used to assemble a training corpus for an AI is a further insult, because that is itself just making it harder to automatically distinguish human from AI. And it is a deeper insult to realize that said AI, once trained, is going to completely destabilize the economy I depend on for my livelihood.

I am not a trucker or car driver, so it doesn't hurt me directly, but the fact that those workers contribute to the economic web by spending most of what they earn means that when robots "terk their jeorbs!" it's going to hurt every business where they spent their earnings, and every business where the employees of those businesses spend money, and so on, until I lose enough customers to hurt. The owners of Google neither spend (investment is not spending) enough of their money nor pay enough in taxes--a.k.a. forced spending--to replace the thousands of people that spend nearly every dollar they earn back into the economy.

Also, each individual CAPTCHA is worth a fractional cent of work, that I don't get paid to do, but Google vacuums up all the half cents--like in Superman 3--and reaps tangible benefit. Thousands of people train the AI, but only Google ends up owning it. So there is no incentive for me to solve the CAPTCHA "correctly", only just barely enough to be automatically classified as not-robot. You want me to do it right? Pay me what that work is worth to you.
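The half-cents argument is easy to put in back-of-the-envelope form. Both figures below are entirely made up for illustration; neither the per-solve value nor the daily solve volume is a published number.

```python
# Hypothetical figures, chosen only to show the shape of the arithmetic.
value_per_solve_usd = 0.005     # assume half a cent of labeling work per CAPTCHA
solves_per_day = 100_000_000    # assumed daily solve volume

daily_value = value_per_solve_usd * solves_per_day
print(f"${daily_value:,.0f} of unpaid labeling per day")
```

The point is not the exact total but that tiny per-solve values, aggregated at web scale, accrue entirely to the party collecting them.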

As for the other point, no, it is not ethical to create an AI with human-like qualities, say that you own it, and take all of its valuable work product for yourself. I feel like this has been settled since Data was declared a person in Star Trek: TNG.


> So you want to sabotage the system because you're afraid of being replaced by a robot?

Being "afraid" for oneself isn't necessary to simply be passive-aggressive towards something you don't like for reasons you can explain, and that were explained. And it doesn't necessarily have to be all about oneself either.

> If it is going to rain, you might as well sell umbrellas.

This isn't humans vs. weather, it's not humans vs. machines, it's humans vs. humans.


Google definitely likes to punish VPN users.


I believe the reason is that abuse frequently comes from users who mask their identity by means of a VPN. It's a pretty logical position to make the VPN users demonstrate they are human and not some abusive script.

Yes, there are ways around it, but that doesn't negate the logic of Google's actions. Yes, you might be innocent, but Google doesn't have any way to know that.

If I were tasked with the same goals as Google, I'd probably do much the same.
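The logic described above can be sketched as a toy reputation gate. This is not Google's actual system; the function, threshold, and abuse rates are invented for illustration.

```python
def should_challenge(ip_abuse_rate, threshold=0.1):
    """Challenge traffic from IP ranges with a high observed abuse rate."""
    return ip_abuse_rate > threshold

# A shared VPN exit node accumulates abuse from many users, so everyone
# behind it gets challenged, innocent or not.
print(should_challenge(0.35))  # → True
print(should_challenge(0.01))  # → False
```

This is also why the punishment feels collective: the signal is attached to the shared exit IP, not to any individual user behind it.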


While CloudFlare uses Google's captcha service, it isn't Google that decides which users see CloudFlare captchas.



