The Google plasma globe affair of 2012 (coredump.cx)
258 points by true_squirrel on Oct 10, 2022 | 99 comments



>Another critical optimization boiled down to realizing that the response packet allows up to six keycodes to be reported at once. This might have seemed like a straightforward 6x speed gain, but not so: on MacOS, the keystrokes were dequeued not in the order they appeared in the packet, but from the numerically lowest scancode to the highest. This mind-boggling quirk [...]

Reporting multiple keys down in the same packet is meant for cases where the user actually has those keys down simultaneously, so it's not unusual that MacOS decided to act on them in scancode order: the expected effect would be the same.


That seems like behavior that had to be actively added though, and I am extremely curious why.


There are plenty of reasons it might have been added for other purposes. A lot of keyboard APIs provide keyup/keydown events. The most obvious way to implement that is with an array or bitmask of all the keys, and once the keystate array is updated from the USB packet, the information about the order of keys in the packet is lost. It also follows the principle of abstracting away low-level hardware details: USB isn't the only way you can attach a keyboard; there are also PS/2 and Bluetooth keyboards with different packet structures. Even for USB HID, the standard packet format consisting of a bitmask for modifier keys plus an array of 6 other keys is only the default format guaranteed to work in the BIOS. The USB HID specification includes a standard mechanism for defining the format of packets (report descriptors), which is one of the ways you can achieve more than 6-key rollover.
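A minimal sketch of that order loss, assuming the standard 8-byte boot-protocol report (1 modifier byte, 1 reserved byte, 6 keycode slots); the function name and structure are illustrative, not from any real driver:

```python
def update_keystate(report: bytes, pressed: set) -> tuple:
    """Fold an 8-byte HID boot report into a key-state set.

    Returns (newly_down, newly_up). Note that the set has no memory
    of the order in which keycodes appeared within the packet.
    """
    new_pressed = {k for k in report[2:8] if k != 0}  # slots 2..7 hold keycodes
    newly_down = new_pressed - pressed
    newly_up = pressed - new_pressed
    pressed.clear()
    pressed |= new_pressed
    return newly_down, newly_up

# A packet reporting 0x17 ('t') before 0x04 ('a') yields the same set as
# one reporting them the other way round; iterating the set in numerical
# order reproduces the MacOS behavior described in the article.
down, _ = update_keystate(bytes([0, 0, 0x17, 0x04, 0, 0, 0, 0]), set())
print(sorted(down))  # → [4, 23]
```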


One plausible (to me) explanation is that the received scancodes are added to a bit vector (a "set"), discarding their original order. Some time later, the bit vector is iterated in numerical order.


Something in the back of my mind is telling me that it's actually spec-compliant to send the 6 scancodes in numerical order. Could very well be misremembering though, and most implementations accept near enough anything from an HID device in my experience


Appendix C: Keyboard Implementation from the Device class definition for HID states "The order of keycodes in array fields has no significance. Order determination is done by the host software comparing the contents of the previous report to the current report. If two or more keys are reported in one report, their order is indeterminate".


It seems pretty scandalous to me that most operating systems still haven't implemented any mitigation for pretend-to-be-a-USB-keyboard attacks.

Fixing it isn't trivial, but it's hardly insurmountable. The solution is fairly simple: Whenever a new keyboard is plugged in or types its first keystroke, lock the screen, and don't accept key input to places other than the login form from a new keyboard until that keyboard has typed the user's login password. (You also need to build a way to authorize legit-fake-keyboard devices like barcode scanners that type the barcodes they scan, but that's not too difficult.)
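The proposed policy could be sketched as a small state machine; everything here (class name, device IDs, the password check) is hypothetical, not any real OS API:

```python
class KeyboardGate:
    """Untrusted keyboards can only type into the unlock prompt
    until they have entered the user's password once."""

    def __init__(self, verify_password):
        self.trusted = set()          # device IDs that have authenticated
        self.verify = verify_password

    def on_keystrokes(self, device_id, text, screen_locked):
        if device_id in self.trusted:
            return "deliver"          # known keyboard: pass input through
        if not screen_locked:
            return "lock_screen"      # unknown keyboard: lock first
        if self.verify(text):         # input only reaches the login form
            self.trusted.add(device_id)
            return "unlock"
        return "reject"

gate = KeyboardGate(lambda pw: pw == "hunter2")
print(gate.on_keystrokes("evil-globe", "rm -rf /", screen_locked=False))  # lock_screen
print(gate.on_keystrokes("evil-globe", "hunter2", screen_locked=True))    # unlock
print(gate.on_keystrokes("evil-globe", "ls", screen_locked=False))        # deliver
```

An injected payload from a brand-new "keyboard" never reaches a shell: its first keystrokes only ever land on the lock screen.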

Given the high prevalence of USB devices with infectable firmwares, and the large number of USB cables of questionable provenance that no one pays much attention to, it doesn't seem okay to leave this vulnerability open.


For those wondering, there is an easy defense against this on Linux, USBGuard (https://usbguard.github.io/)

RHEL7+ include USBGuard as part of the standard repo [0]

[0] https://access.redhat.com/documentation/en-us/red_hat_enterp...


This isn't very practical as a defense. You can't configure it to ban keyboards, because you still need a keyboard, and you might need to swap keyboards if your original one breaks.


That's not correct, it can block and does block USB keyboards. If your main keyboard was totally dead and you needed to whitelist a replacement keyboard you'd do so by booting into single-user mode.


I think it is a hard problem to solve, BUT I think OSs should offer a compromise: you are notified each time a device is plugged in and can grant or deny it. This should be an optional setting for those who want to easily add hardening (I think there's no excuse that Linux distros don't have a "hardening" setting in their advanced or security settings).

I don't know the answer to this, so I'll ask. Can USB ports be programmed to only output voltage but not data? If so, this seems like a cool way to implement the above as you can have Deny, Access (power), Access (data) as options.


Grant/deny just leads to people clicking accept; you want something where the user has to choose what kind of device they plugged in, and if it doesn't match what the device says it is then you reject the device, or something.

On Linux GNOME already has USBGuard support btw.


> Grant/deny just leads to people clicking accept

I was imagining this being in some advanced setting with a "here be dragons" warning. Or even a bit more relaxed like Firefox's strict or custom tracking protection. It comes with warnings and I feel pretty confident to say that most users don't touch these settings.

> On Linux GNOME already has USBGuard support btw.

Yeah I know there are plenty of hardening tools out there, but I was suggesting that they come pre-installed. There's so much bloatware on most systems these days that this seems minor. Or maybe someone could put together a bundling script to make adding all this (e.g. USBGuard + Fail2Ban + Faillock + Firejail + etc) easy to install and configure. I'm not aware of any such tool. But maybe even an Ansible script could go a long way.


> I was imagining this being in some advanced setting ... most users don't touch these settings

That just leaves most users unprotected.

> I was suggesting that they come pre-installed

GNOME's support for USBGuard is installed by default, but USBGuard itself may not be, depending on the distro. Agreed that it and other security/safety/robustness stuff (e.g. SMART disk warnings need to be supported) should be enabled by default. GNOME should use Flatpak-style sandboxing for natively installed apps too.


> That just leaves most users unprotected.

Aren't these users already unprotected? I don't think this is a security concern for most people, and turning it on by default would frustrate them more. It'd be like shipping Firefox or Chrome with NoScript on by default. Sure, more protection, but it would turn away more people than it would pull in. Better as optional.


Firefox is ratcheting up the tracking protection for normal users and GNOME enabled Thunderbolt protection by default, so there is definitely precedent for protecting regular users too. Also with the rise of stalkerware, normal users are definitely targets too. I think the interface I proposed would be reasonable enough for most people and you could make it easy to turn off with the right UX.


Android already does this but from the other side. I assume iPhone as well. When you plug in a cable it only does power until explicitly allowed to move data as well.


A grant/deny prompt can be easily overcome by a malicious input device with a high pass rate.


Surely not if the device has no access to the machine until after you press grant.


OK, good point


That's actually a really good idea, but I think it could cause problems for things like external numpads.

Maybe require the user to type a random combination of keys shown on the screen before enabling a given input device, instead of their own password?


> require the user to type a random combination of keys shown on the screen before enabling a given input device

If you could type it on any device, and you could guarantee that the OS could remember that device, I could see that being workable.

I would also worry about:

Ensuring that the keys shown on screen are properly accessible to screen readers/braille devices, etc.

Ensuring that automation could authorise a device, or disable the prompt requirements

Ensuring that the OS actually remembered it. Plugging into different docks at home/work/conference room and having it prompt you to re-authorise your keyboard would drive people up the wall. eg: because during USB Enumeration the port numbers on your dock got switched, or they plugged the dock into the other side of their computer, or the Wireless USB controller is slow at starting up.

How to handle an unauthorised device when there's no other usable input devices. eg I started up my HTPC, and a few seconds later the IR receiver wakes up and is now marked as unauthorised - the device may not have a keyboard but that receiver appears as one. Or maybe I'm trying to fix a laptop and the on-board keyboard is broken, so how do I plug in a USB keyboard to get at the data on it (Maybe I can't reboot)

Some of these things might conflict with having such a lockout.


How do you distinguish between a new keyboard and a bad USB cable that gets wiggled a bit?


Check the device ID; if it's the same as the one that was in the port a second ago, it's the same keyboard.


An attacker can lie about what their device id is


How would it know the right device id to spoof? (Definitely doable in a MITM scenario, but more complicated in others).


Pick one of the most popular keyboards used by your target and reuse it. That's how I'd do it. It's not going to get everyone but I think it's a legitimate approach.

The alternative method of looking at the time between keystrokes seems more reliable.


A keystroke-delay-based heuristic is just naive; all it means is that the attack needs to happen on an idle system.

In an ideal world, vendors would actually populate the serial number field with a number that's at least semi-random.

On this computer, only the USB-C HDMI adapter and the fingerprint reader have a serial number that looks random :-(


What about spoofing?


Don't yubikeys register themselves as keyboards? You can definitely tap the button to get a code typed into any input field.


By default, yes. You can disable the "long-press to send password" feature.

(I kept triggering it by accident, and disabled it.)


That doesn't stop a keyboard that is actually backdoored to execute payload sometime much later after setup.


Sure, but it's not intended to. It's to prevent a device from pretending to be a keyboard without any user's knowledge.


No, but it does stop you from getting pwned by a plasma ball.


Reading articles like this always causes me to think two things:

1. If this is what a couple of smart guys can do as, essentially, a side project then I can only imagine what nation states with teams of people like this can accomplish.

2. I get why some orgs pour wax into the USB ports of their desktop machines.


That was not a side project from my limited understanding ;) I believe the M$/Alphabet/Meta security teams are probably more advanced than, or on par with, the best state-sponsored teams. I could be wrong, plus the state-sponsored teams might have infiltrated the FAANG security teams ;)

However, I think the FAANG companies act somewhat more restricted. Three letter agencies don't have qualms about things like "chloroforming security guards" and such.

Also, use USB Data Blocker dongles where possible.


NSA’s TAO surely has many orders of magnitude more budget than Google’s red team?


I don't think the NSA pays as well though. You also have large restrictions. Not just stuff like never having smoked weed in your life (we are talking about CS people...) but that once you even have a security clearance (a pain to get in the first place) you have a lot of daily life headaches. You have to carefully watch what you say. International travel has to be reported (and can even be a big hassle). Etc. I'm not sure if the same is true for Google's Red/Blue teams, but I feel pretty confident in saying that they probably get paid more.


Correct, government pay scale is a huge problem for talent acquisition. Your choices are either someone who is so ideologically devoted that they're willing to work for 5-25% of what they're worth or "work for us or you're going to prison for a long time".

Citations:

https://apply.intelligencecareers.gov/job-description/119486...

Cyber Mitigations Analyst/System Vulnerability Analyst - Entry to Expert Level (Maryland)

Network Cyber Mitigations Engineers and System Vulnerability Analysts analyze vulnerabilities and develop mitigations to strengthen defenses. They produce formal and informal reports, briefings, and guidance to defend against attacks against network infrastructure devices or systems. NSA analysts' competencies run the gamut of data transport possibilities. They work with traditional wired networks, wireless transport, including Wi-Fi and cellular, collaborative platforms such as video teleconferencing, and the hardware and software that support it all.

Pay Plan: GG, Grade: 07/1 to 15/10

https://www.opm.gov/policy-data-oversight/pay-leave/salaries...

In a very high cost of living area, that means that entry level is $31K and the absolute top for expert level is $176,300.

Compare that to what FAMG are paying new college grads.


> Correct, government pay scale is a huge problem for talent acquisition. Your choices are either someone who is so ideologically devoted that they're willing to work for 5-25% of what they're worth or "work for us or you're going to prison for a long time"

Or "I'm a covert foreign asset, so I DGAF what I get paid."


Caps are getting raised, at least at DHS:

> The CTMS [Cybersecurity Talent Management System] salary range has an upper limit of the vice president's salary ($255,800 in 2021), plus an extended range for use in limited circumstances, which has an upper limit of $332,100 in 2021.

https://www.zdnet.com/article/the-us-government-just-launche...


Had a friend with a restriction like this and then switched to a more public unrestricted job.

To be honest, I liked hanging out with them more when they couldn't talk about work :)


All of this is true, but as a government agent, you will be much less likely to go to jail for participating in adversarial activities, which is not nothing.


Probably. It’s way handier if the keylogger is embedded into the cable from your usb-x monitor right when you open the fresh package.

But budget limited ingenuity is fun too!


It's really not hard to realize that organizations whose entire existence is predicated on actually cracking the security of other nation states might garner more investment than a red team focused on checking that your company's security is intact.


By many accounts they source exploits from the grey market, too.


It's certainly something someone could do as a side project, even if that wasn't strictly the case here.


> That was not a side project from my limited understanding ;)

First paragraph of the article:

> In episode #3, Daniel Fabian talks about the redteaming efforts - and in particular, about an exercise he and I ran together as a side project back in 2012:


This was a side project.


Well, the NSA was diverting shipments of Cisco networking gear to install custom firmware with back doors.

But...this sort of hack is around two decades old and doesn't represent anything new in the field. There are at least a dozen toolkits one can use to implement this on a number of USB-friendly microcontrollers.

I would expect someone working for Google's red team to be able to bang this device out, from scratch, in one day.

Google probably spent more on the voice actor and graphics for the video.


This seems pretty cool. It reminds me of a story I heard recently. Crooks were knocking doorbell cameras off of wifi by an assumed deauth attack. I looked it up and there are “maker watches” that will do deauth “bombs” for you, no soldering required.

A nefarious plasma globe could hide of lot of nasty stuff, you don’t even need to plug it in via USB to cause harm.


The other nifty thing about the plasma globe is that immediately after someone plugs it in, they are likely to be too distracted by the luminous fingers of writhing plasma to notice the shell window popping up briefly on the monitor. Nicely-executed hack on multiple levels.

For the same reason, waiting a few minutes to pop up the shell, as the article says they did, actually seems counterproductive. It might have been better to pop up the window intentionally -- launch the browser to display an ad from the company that made the gadget, maybe, or a 'user's manual', or something like that. Something that would appear innocuous and expected, while providing cover for the payload.


A power user wouldn't (shouldn't) expect a device merely using USB for power to pop up anything on the PC it's plugged into. That would be a dead giveaway that it's doing something it shouldn't be doing.


I’m pretty desensitized to Windows CMD prompts popping up when I install a program or plug in a device. My Razer keyboard even installs borderline malware when you plug it in.

However, if this happened on my Mac I would immediately be skeptical.


> My Razer keyboard even installs borderline malware

What does the software do? I assume it asks for permission and you decline? In a couple of decades of buying USB keyboards I have never let one install software and I have never noticed any problems.


Razer Keyboards and Devices auto-install weird proprietary "drivers" (borderline adware), and the software they auto-install also used to give anyone root/admin privileges (borderline malware) - https://www.engadget.com/razer-mouse-windows-10-security-vul... as an example


Sometimes these crapware "drivers" are actually required for full functionality of the hardware, like assigning keys, macros, profile switching, etc...


Very good point, which is why I never even bothered to check what was advertised. I simply did not want to require anything beyond a driver that would probably go out of date anyway


They make me pretty nervous on Windows too… but it's Windows, so I don't know enough to tell whether I'm infected or it's just Windows.


I think that train left the station when they plugged a 50,000-volt EMI generator into a USB jack. :)


Plasma globe is also nice because it's so big. It could contain a camera, microphone, storage, LTE radio, GPS (for only turning malicious at the target location) ... a tiny drone that could be ejected at night to fly around the office.


Can you link to one of these watches? If they're being sold with that purpose-built functionality, that's probably a federal crime. You cannot make, possess, or operate a signal jammer in the US.


From what I briefly remember, they aren’t jammers. They’re more akin to a DoS to the access point (I think it has something to do with spamming auth requests?) than jamming any physical signals.


They aren't jammers in the "blast out RF noise" sense, but since they are meant to disrupt legitimate communications, the FCC considers them jammers and has repeatedly fined companies that run deauth attacks to "encourage" people to use their paid networks: https://www.jdsupra.com/legalnews/fcc-issues-another-fine-fo...


I see, thanks for the clarification


As tgsovlerkhgsel said, they're not the classic style of jammer. That kind of thing is called a "smart jammer". And if you built that exact kind of functionality into a device and sold it (vs. just allowing the customer to code it themselves, like they could on any wifi device), you are probably committing a felony by manufacturing and selling a signal jammer.


> The solution proved to be simple: we "borrowed" the USB vendor and product ID sent by an Apple-made keyboard taken from a coworker's desk. Looking at the prototype plasma globe sitting in my "old projects" box, it seems that we picked 05ac:024f.

I'm a bit put out that that worked, although I'm struggling to think of a solution that doesn't involve going full-crypto (including a proper PKI to let vendors sign devices) on all USB devices. But if anyone can set any vendor+product ID, it's not really a useful security measure.


You have to realize that it was never meant to be a security measure. It's only there to identify the device to the host system.


Indeed, and the Keyboard Setup Assistant was never meant to be a security measure, either.


Ah, that was a mistake on my part: I'd initially thought it was. Rereading that screenshot, it's clear that it's not a security measure, just trying to help the user set the keyboard up, in which context it makes sense.


It's got the same functionality as an Accept-Encoding HTTP header. It's meant to provide some information to the far end so that it can drive better behaviour. You can "impersonate" a client that isn't compatible and get junk data and there's no cryptographic protection against that.


Google's actual fix was to install a service that detects if keystrokes are being made too quickly for a human (by default, I believe it's 5 keystrokes per 50 milliseconds), and if so, eject the USB device: https://github.com/google/ukip
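A rough sketch of that kind of detection, assuming the defaults mentioned above (flag a device if the most recent 5 keystrokes span less than 50 ms); the class and event plumbing are illustrative, not UKIP's actual code:

```python
from collections import deque

WINDOW = 5        # number of recent keystrokes to consider
MAX_SPAN_MS = 50  # human typing can't fit WINDOW keys in this span

class KeystrokeMonitor:
    def __init__(self):
        self.times = deque(maxlen=WINDOW)

    def on_key(self, t_ms: float) -> bool:
        """Record a keystroke timestamp; return True if the device
        looks like an injection attack and should be ejected."""
        self.times.append(t_ms)
        return (len(self.times) == WINDOW
                and self.times[-1] - self.times[0] < MAX_SPAN_MS)

# Human-ish typing (~8 keys/sec) never trips the threshold:
m1 = KeystrokeMonitor()
print(any(m1.on_key(t) for t in (0, 120, 250, 380, 510)))  # → False

# An injected payload typing every 5 ms does:
m2 = KeystrokeMonitor()
print(any(m2.on_key(t) for t in (0, 5, 10, 15, 20)))  # → True
```

As the sibling thread notes, this is a heuristic: an attacker can simply slow the payload down below the threshold, at the cost of the injection taking longer and being more noticeable.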


Reminds me of when I ran some autohotkey experiments on a citrix-type system.

Turned out I could click on controls before they were even drawn on the screen for super-human speed.


Meanwhile on some UIs, I can click buttons after they're drawn and they do nothing unless you wait for half a second or more.


That kinda makes sense to ensure users don’t click on something that they didn’t have enough time to interpret.

I’ve encountered some interesting situations where clicking and hitting a mouse’s scroll wheel would scroll the point of focus on the next screen…


Watching the video, they mention that in order for the red team to reach their goal (downloading Google Glass schematics), they had to pivot from the users they compromised with the plasma globe (who were not working on the Glass project) to users on the Glass team. They seemingly did that using an image attached to an email that executed its payload when the email was opened. They didn't elaborate what the payload did, but mentioned that it "captured the user's fingerprint" and enabled them to use that to access the Glass schematics.

Although it sounds much less exciting, this sounds like a much more serious exploit than a compromised USB plasma globe - embedded in an image and requiring minimal user interaction to execute. I wonder whether they identified a novel 0day in a common image parser or something.


I wondered about that too. My guess is that the image was just a trigger loaded off either a remote server or a local server installed by the exploit so they could know when the user was logged in to their Google account rather than a personal account, which would be off-limits for the exercise.

I’m guessing the documentary just glossed over it for brevity.


I have no insider knowledge, but SVGs can contain JS.


Not when rendered in an img element. Well, they can still contain it, but it won't run.


I wonder if it might be reasonable to break up the attack over time.

Slowly populate a script with contents so that instead of writing a lot of characters that a user might see, just populate a few.


The author links to UKIP[0], a Linux daemon that they built to try to protect against these kinds of attacks. Did a quick "apt search" on my Debian machine, but nothing came up. Do any major distros package this, and do any install and enable it by default?

I guess it uses heuristics to determine if a device is evil, and that could cause a lot of false positives (which would create spurious bug reports and support cases for distro maintainers), so maybe having something like that installed and running by default isn't a great idea.

[0] https://github.com/google/ukip


It calculates the times between the most recent 5 keystrokes and if they are too fast it carries out a configured action (logs the event or removes the device driver).


This could be improved by adding a microphone to help determine a good time to execute.


Rather than a microphone, measure the plasma globe current consumption. They use more power when they are being played with. At that point the user is probably logged in and distracted.


Ohh that's good!


Bit of a weird opening with the car crash stuff, considering that driving is one of the most dangerous activities people do every day, and lots of people are still killed or hurt by vehicles every day.

https://www.cdc.gov/vitalsigns/motor-vehicle-safety/index.ht...


Automotive safety has skyrocketed since the 50's, though road safety for non-occupants and motorcyclists has gone down significantly in recent years.

But... it's not weird; it's part of Google marketing's altruistic spin on this as "we're protecting everyone!"

Did anyone else notice that they very quickly glossed over "our red team isn't allowed to go after any user data"?

That's like saying "don't worry, our bank's red team isn't allowed to try and go after money."

The red team is only intended to protect their corporate IP/trade secrets and dirty laundry.


I’d be pretty upset if my bank’s red team withdrew my money just to prove they could.


I am always amazed when old things we used to do show up again, and again, with minor variation.

We used to be able to make (I think you could buy it too) PS/2 keyboards or dongles/warts/mitm devices that did this. The attacker interface was a bit more cumbersome, but ultimately did the same thing.


I wonder if it flopped for anyone because they had a different keyboard layout (like Dvorak) set.


It's a really interesting point. I've had to go through quite a bit of trouble to get non-keyboard HID devices (like Yubikeys) "un-Dvoraked" so that they work as expected.


Interesting thought: Keyboard layout should be a setting scoped to the specific hardware gadget, and everyone does that wrong because it's easier to do it wrong.


I'm surprised this works as described (especially with the speed mentioned). If I press CTRL+Alt+T on Ubuntu, I can release it and press T again (by hand!) without it hitting the Terminal window. Reliably, and repeatedly, even with the Terminal app certainly cached, on a modern device with an SSD and low load.

That means an attacker would need to add a wait to risk losing half the payload to the launch animation or whatever causes the delay, and doing it invisibly would be entirely impossible.


The script triggers a cycling of the scroll lock indicator to indicate success. They only need to try again at some point.


Similar to the Mr. Robot episode where Elliot hacks a laptop using a Bluetooth keyboard from a nearby vehicle after an accomplice helps approve a pairing request on the device.


Google was lucky to have Zalewski.


Emulating keyboards is a pretty old technique. I feel like the Rubber Ducky stuff and the like were around earlier. And they are still popular; the Flipper Zero has a mode for USB payloads, but that's more useful if you have physical access.


Yep. Literally a bunch of corporate PR bullshit around a nearly two-decade-old technique. There are multiple pre-made firmwares for uCs to do this sort of thing.

The fact that they hit the MacOS keyboard wizard shows that apparently nobody on their team had experience with such a basic, ages-old technique.


Equally as interesting from the technical and social engineering sides.



