Accidental Google Pixel Lock Screen Bypass (xdavidhu.me)
1592 points by BXWPU on Nov 10, 2022 | 444 comments



The discussion on race conditions at the end is an important one, and IMO the bugfix is a bandage at best: the notion of anything accessing the “current” object after any kind of delay, especially in an event handler, when there is any chance the thing is not a singleton, is a recipe for disaster. In this case, dismissing the “current” security code screen was a supported API surface and that should set off all the red flags.

Of course it’s annoying to have to track the identity of “our screen” and bind that identity to event handlers, or make it accessible with context etc. But it’s necessary in anything remotely security-adjacent.

(And never assume anything is a singleton unless you add a breadcrumb comment for someone who might change that assumption on a different team!)


Agreed. The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds ("if we are dismissing something that doesn't seem to be right, ignore it").

Since they're rewriting code and changing method signatures anyway, I would prefer they got rid of the notion of "currently visible screen" and made sure that all dismiss() calls have a unique pointer or token pointing to what exactly is being dismissed. If this was my codebase, their approach would give me all sorts of bad vibes about additional problems lurking deeper.
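A minimal sketch of what that could look like (hypothetical names, not the actual AOSP code): each screen gets an opaque token when it is shown, and dismiss() refuses anything that isn't the token of the screen currently on top.

    // Hypothetical sketch, not the real AOSP API: dismiss() only works
    // for the caller holding the token of the currently visible screen.
    final class ScreenToken {}

    final class SecurityScreenController {
        private ScreenToken current;

        /** Shows a screen; the returned token is the only handle that can dismiss it. */
        ScreenToken show() {
            current = new ScreenToken();
            return current;
        }

        /** Dismisses only the screen the caller actually owns. */
        boolean dismiss(ScreenToken token) {
            if (token == null || token != current) {
                return false; // stale or foreign caller: refuse (and log it)
            }
            current = null;
            return true;
        }
    }

With that shape, the race in the original bug (a SIM-state handler dismissing whatever happens to be on top) degrades to a refused no-op instead of an unlock.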

The whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general.


So you'd go out and refactor a major security-sensitive component (which most likely predates your career) in the span of a single month, for an emergency security patch deadline?

That doesn't inspire a lot of confidence in your risk assessment and decision making.

I'd do what Google did: roll out a patch that addresses the immediate danger and then backlog proper refactors over time.


Their fix included a similarly large refactor, they just used the "security screen type" as a newly introduced parameter instead of something unique to the screen instance.

I do agree that in the real world, sometimes you have to settle for a less-than-ideal solution. I hope my post reads less like "those people are idiots", which was not my intent, but more like: this specific fix isn't ideal, and knowing this type of code is live in a device doesn't fill me with confidence, even if I can understand reasons for why it was done that way.


Right? This was absolutely the "right" level of refactor for a hotfix, as the full refactor would introduce much more state management that could itself introduce bugs. And especially if behind the scenes there was a detailed audit of what things can currently access the current security screen, it would be fine for now.

But I sincerely hope that in the postmortem, there would be a larger meta-discussion around code review practices and how something like this "global dismiss" became part of the API surface to begin with, and a sincere prioritization of a larger review within the backlog. Though with everyone on edge at this time in big tech, I doubt that ends up happening :(


>Their fix included a similarly large refactor

Their change is hardly a big refactor. This includes all the new code, all the parameter changes everywhere the function is used, and two additional test cases. This is a tiny change.

>12 changed files with 102 additions and 26 deletions. [1]

https://github.com/aosp-mirror/platform_frameworks_base/comm...


I don't think that is as much of an issue as the ridiculous process he had to go through.

Think about that first security researcher. You literally found a Screen Unlock bypass (should be Priority #1, right?) - and Google just went and put fixing it on the backburner.

If they will put something like that on the backburner, what else are they ignoring? It isn't confidence-inspiring.

Edit: Also, knowing Google, what are the odds of your full refactor? "Temporary" fixes become permanent fixes quickly.


> Edit: Also, knowing Google, what are the odds of your full refactor? "Temporary" fixes become permanent fixes quickly.

Hahah, I wish that was only Google :D


Could have been sold for $300k or more on the black market.


Maybe it was an already well-known exploit. After all, this was a duplicate and Google was sitting on it. Two people found it and reported it to Google. Why not a third one who sold it instead?


Hahah it can go both ways.

You can have two major rewrites over three years, or you can have a new temporary-became-permanent bug fix.


> The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds

This was my experience as a dev on a team at Google for a few years. I saw a LOT of exceedingly lax treatment of correctness in the face of concurrency. I've even seen multiple decisions to guess at how to fix a concurrency bug and just say "well, looks good to me, let's see if it does anything."

It's par for the course, and folks get (got? =P) paid handsomely for doing it.


This just sounds like you're prematurely optimizing for additional security screens getting added.

Maybe that's not on the table atm? Still odd that they took so long to change a couple method signatures and write a couple test cases


They already have multiple security screens, and a demonstrated critical bug with security screen confusion. Not sure how this is premature optimisation.


because if the number of screens is small and there are few tiers (only 2), passing an identifier around could be overkill

sounds to me like it's an optimization for introducing more tiers than there are


> the number of screens is small and there are few tiers (only 2)

Making this kind of assumption, when there are no such guards in the system itself, is exactly what leads to security issues.

If the system enforced two named singletons as security screens, so it was impossible to .dismiss() the wrong thing, then sure. But that's not how the system is, and assuming that "the number of screens is small" and "there are only 2 tiers" without enforcing that assumption with code is pretty much how the original bug was introduced.


Since they are dismissing the lock screen _type_ (SIM, PUK, PIN) and not the instance, a logical example of where this might go wrong is if you have dual SIM. Then again, worst case you dismiss the incorrect SIM lock screen. That will not give you a full unlock, and the ‘wrong’ SIM will still not work.


Yeah, an attacker may be able to use their own dual-SIM Pixel phone to bypass the SIM lock screen for a stolen SIM card whose PIN or PUK code they don't know, using a similar technique. But like you said, I'm almost certain that it wouldn't actually let them use it to send and receive texts (and if it does, then that's really an issue in the SIM card's OS, considering anyone could just modify AOSP to ignore the SIM lock screen and then put that on their own phone.)

Even still, being able to bypass the SIM lock screen would still be a bug, just not a vulnerability. Google doesn't pay bounties for non-security-related bugs to the best of my knowledge, but I can't help but feel this is still not an ideal way to design the system. It is likely fine today, but as strix_varius said, these kinds of assumptions are what led to this vulnerability in the first place. Vulnerabilities do pop up from time to time in otherwise well-designed systems, but this lock screen bypass never would have been present in the first place had better design practices been followed. As krajzeg said [1], "The whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general."

1. https://news.ycombinator.com/item?id=33545685


I would indeed expect something more robust like a security state machine where not all states can transition to any other state freely. The UI shouldn't even have a say in what transitions are allowed.
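Roughly like this (a hypothetical Java sketch with made-up states, not Android's actual keyguard model), where illegal transitions are rejected outright:

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: transitions are whitelisted up front, so a
    // stray dismiss() can never jump straight from SIM_PUK to UNLOCKED.
    enum LockState { SIM_PUK, SIM_PIN, DEVICE_PIN, UNLOCKED }

    final class LockStateMachine {
        private static final Map<LockState, Set<LockState>> ALLOWED =
                new EnumMap<>(LockState.class);
        static {
            ALLOWED.put(LockState.SIM_PUK, EnumSet.of(LockState.SIM_PIN));
            ALLOWED.put(LockState.SIM_PIN, EnumSet.of(LockState.DEVICE_PIN));
            ALLOWED.put(LockState.DEVICE_PIN, EnumSet.of(LockState.UNLOCKED));
            ALLOWED.put(LockState.UNLOCKED, EnumSet.of(LockState.DEVICE_PIN)); // relock
        }

        private LockState state = LockState.DEVICE_PIN;

        synchronized void transition(LockState next) {
            if (!ALLOWED.get(state).contains(next)) {
                throw new IllegalStateException(state + " -> " + next);
            }
            state = next;
        }
    }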


The other nice quality, if your code is in a state machine, is that it can be verified by a model checker.

You might like this post on statig, an HSM library for Rust

https://old.reddit.com/r/rust/comments/yqp2cq/announcing_sta...


The Rx model works nicely. Rather than trying to model state transitions, you model "scopes of work." So if you have something like an event listener, you would tie it to the active unlock session. When that session ends, you dispose of that "scope of work" and all work associated with it would immediately stop.
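In RxJava terms that might look like the following sketch (hypothetical names): every listener registered during an unlock session goes into a CompositeDisposable owned by the session, and ending the session disposes of all of it at once.

    import io.reactivex.rxjava3.disposables.CompositeDisposable;
    import io.reactivex.rxjava3.disposables.Disposable;

    // Hypothetical sketch of the "scope of work" idea: listeners live
    // exactly as long as the unlock session that registered them.
    final class UnlockSession {
        private final CompositeDisposable scope = new CompositeDisposable();

        void register(Disposable work) {
            scope.add(work); // e.g. a SIM-state event subscription
        }

        void end() {
            scope.dispose(); // all work tied to this session stops here
        }
    }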


Am I reading this right? This reads like the presence of a UI element holds the unlock state of the phone?


No, not exactly, but Android is old and gnarly enough that a lot of components don't have a clear Model/View separation you'd expect in modern codebases.


The good old "Model / View / Confuser" paradigm.


Android was invented before MVC?


Well it was designed around Java, so definitely before common sense was invented.


Well, the UI element’s dismissal is what signals the system that holds unlock state. And the problem was that multiple other systems would automate the dismissal of the current UI element… without checking whether it was the appropriate UI element they expected to be there!


And of course the fix includes a magic SecurityMode.Invalid value, which makes dismiss() behave like it did before.

I'd look very hard at places that use SecurityMode.Invalid.


Yeah, I suspect this fix isn't going to hold for very long at all now that everybody knows to look at it.

If Google isn't readying a more extensive fix right now, they're going to get pwned shortly.


Google invests a great amount of money in their Project Zero team, but does anyone know if they have a specific red team dedicated to Android?


As a former Google Information Security Engineer I can say: I don't know, because the Android security organization is completely separate from Google's security organization. It's something I always found odd, as it created some redundancy between the two organizations, a lack of homogeneous processes, etc.



I was under the impression that decrypting storage actually requires the passcode of the phone, but this bug makes it look like the device is able to decrypt itself without any external input. Does anybody know more context about this? What's the point of encryption if the device can just essentially backdoor decrypt itself?


It didn't work on a fresh reboot, so presumably it functioned like you're describing. But when he swapped the SIM live, without the reboot, the phone was already running with the key in memory.


On iPhone, keys are evicted from memory when the device is locked. Apps running behind the Lock Screen can only write files to special file inboxes (this is why the camera lets you take pictures while locked but doesn’t display earlier pictures, for example)

You’re telling me that android keeps keys in memory for its entire uptime?


That's not exactly true.

There is a data protection class like what you're describing, but it is not used super-widely; the most commonly used one is exactly what is being described here and makes data available after first unlock.

https://developer.apple.com/documentation/security/ksecattra...


Maybe, but this should be limited to application data scope.

What baffles me is that the lock screen is a system-wide critical application and should in no way rely on this method.

In theory, the iOS lock screen should respond only to cryptographic validation from the Secure Enclave.


oh huh! thanks for the correction!


That's not really true at all - you can of course unlock your iPhone without entering the PIN after every screen lock, which should give you a clue that keys for disk encryption generally aren't purged when the iPhone is locked.

Some keys are, but not the ones that are the issue here.

I've even seen conditions where iOS devices reboot and still retain keys.


If you unlock the screen using Face ID, the OS gets the keys from the Secure Enclave which, depending on the model, does the face recognition itself or uses the normal processor in some kind of secure way. Just like when you unlock the phone using the PIN code, the OS gets the key from the Secure Enclave, which makes sure it's not easy to brute force. The PIN code is not the key itself, of course.

The only key that sometimes gets retained at reboot is the SIM unlock.


Yes, and that's how Pixels work as well. The condition in question here is of course when the secure enclave releases the keys and mounts the storage.


You can have a look at this document to answer that: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

From what I gather, the more secure keys should be discarded 10 seconds after the lock screen event. Lower-security keys stay in memory to allow background activity.

Encryption on iOS, if I understand correctly, is on a per-file basis. There is thus no "mount" event to look for, and there would be no value in using a less secure key if you do not intend to run in the background, because decryption is supposed to happen on the fly.

PS: Also, if I remember correctly, pressing the emergency sequence (holding power + volume up) discards ALL keys instantly, and unlocking requires the passphrase as if you had just rebooted. The emergency call doesn't need to be issued, just initiated (you must hold for 10 seconds or confirm on screen to make the actual emergency call).


Can you elaborate on those conditions? It's my understanding that this shouldn't be the case


You probably were seeing a respring (restart of the UI processes) not a reboot.


Presumably not all keys?

If you receive a phone call while locked presumably the phone can still access the address book to display the contact name and photo?

And music playing apps can presumably access their database of music to play songs whilst the phone is locked?


> If you receive a phone call while locked presumably the phone can still access the address book to display the contact name and photo?

Just so you know, this is true on an iPhone, but NOT if the phone has NEVER been unlocked since reboot. If you get an SMS/call in this state, it will just show the number. It can't read the address book.


Could be reading a cached copy of the contact list since it’s not very big

The music playing is a different story


Text messages (iMessages) can be displayed on lock screens. Not sure how they do that with encryption but maybe the notification is separate.


I think for iMessage, the actual messages are sent using APNS, so the message is in the push notification itself. Thus while you can see the message itself without unlocking, any older messages that are behind the Secure Enclave are inaccessible without keys.


This is correct. For example, when I connect my iPhone to my work-provided wi-fi and I get a Tinder notification, I can partially see the message on the lock screen (once Face ID authenticates), but as Tinder is blocked on that wi-fi, if I want to read and respond in the app I have to pop over to cellular.


> You’re telling me that android keeps keys in memory for its entire uptime?

Yes. I've known that for quite some time, and yet I keep forgetting considering how stupid this feels [1] . Google provides "lockdown" button which is supposedly more secure (I think it's recommended for journalists?)... Well it doesn't evict keys either. Only eviction is to reboot.

[1] It feels stupid because there has been a LOT of work to move from FDE to FBE, to allow two states of data encryption, and to tell apps to support both of them. Doing all this work just to be able to store incoming SMS and to display the wallpaper on the first lockscreen...?


Do you have any more details about how that works on iPhone? It seems very hard to believe, given the complexity and diversity of background apps on iPhones, some of which access huge amounts of data that couldn't possibly all sit in system memory (e.g. offline GPS/navigation apps). For example, Google Photos can send the phone's full photo library, even if large, to the cloud while the device is locked.



Of course we won’t see analogous bug fixes on the Apple side so we can’t compare too closely. Unless you worked on this codebase :-)


This.

I actually find this incredible. I am familiar with iPhone security but not Android, and had naively assumed Google probably did a better job on the non-UX aspects.


Nonsense. If that were true then things like backups and cloud sync couldn't happen when the device is locked. But of course they do, meaning the keys are still sitting there, freely accessible to the CPU, along with all the data on disk.

Your camera example is not at all convincing of anything special going on, since that's also the camera behavior of other OS's (like Android) that don't purge the keys. That's far more easily implemented as just a basic app policy than some security system that yanks the file system out from underneath running processes.


What do Windows/Mac/Linux do?


Key is in memory at all times after boot on all of those.

Full disk encryption is only useful on a laptop if the device is powered down fully.


That sounds like a security issue. Why are disk encryption keys not evicted in sleep mode? Seems like no apps should be running in sleep mode?


On Linux this is addressed by systemd-homed, which encrypts at least your home partition in sleep mode. Attackers could still try to manipulate the rootfs and hope the user doesn't detect it before using the device again.


It is a major security issue, and one of the reasons it's insane that people run around with production access on their laptops.

It is hard to fix this too, because almost no background desktop processes behave well when they are suddenly unable to write to the disk.

Even if you solved that, your password manager has keys in memory, your browser has cookies in memory, etc etc.


Macs seem more secure in sleep:

"If your Mac has the T2 Security Chip (recent Intel-based Macs) or uses an Apple silicon chip (M1 family and future), security is significantly improved. The Secure Enclave in both systems uses encrypted memory, and has exclusive control over FileVault keys. The Intel CPU or the Application processor (M1) never sees the keys, and they are never stored in regular (unencrypted) RAM. Due to this, an attacker would only be able to extract encrypted keys (which can't be decrypted), and only if the system failed to prevent the DMA attack in the first place. These protections make it a lot safer to leave your Mac asleep."

From https://discussions.apple.com/thread/253568420


The most valuable information for an adversary is typically found in RAM: your password manager's master password, browser cookies, etc. RAM can be dumped easily with the right equipment.

The only safe encryption is on a powered down device.


Sleep mode could suspend all activity? You could encrypt all memory before sleep?

It doesn't seem unsolvable, as long as sleep (closing the lid) suspends all activity.

(Lock with background activity is different; let's discuss the sleep case.)


If you fully hibernate to disk, where the memory snapshot is encrypted with your FDE key, then you are good to go, but that is not locking; that is turning the computer off.


> Key is in memory at all times after boot on all of those.

I would think it would have to be while the device is mounted and OS locked, but surely if you dismount a secondary disk/container the key is purged?


As long as that secondary disk uses a different FDE key and you manually unmount it. This is easily done with LUKS on Linux but YMMV on other operating systems


It has to, if you want to be able to unlock the device with a fingerprint


The passcode is required to get access to anything the first time you start the phone, for the reason you mention, and after that the password is retained in the trusted execution environment. This way apps can continue to function in the background while the phone is locked and you can unlock with alternative methods like fingerprints or face recognition.


  It was a fresh boot, and instead of the usual lock icon, the fingerprint icon
  was showing. It accepted my finger, which should not happen, since after a
  reboot, you must enter the lock screen PIN or password at least once to decrypt
  the device.
I was surprised to read this part too. Assuming that the author's version of events is accurate here, my best guess is that the device had not fully powered down, and was in either a low-power/hibernate or find-my-phone mode where portions of the security subsystem were still powered, hence the device-unlock PIN was still cached. I don't otherwise see how a fingerprint alone would allow the device to be unlocked on a cold boot.

Of course this detail doesn't take away from the rest of the report - great find xdavidhu!


Doesn’t seem like a full unlock, see the next paragraph: “After accepting my finger, it got stuck on a weird “Pixel is starting…” message, and stayed there until I rebooted it again.”


It seems to me this bug appears when a phone is booted, unlocked (and decrypted) once, and then locked again, but the decryption key still stays in memory.


This is virtually always the case with these kinds of vulnerabilities on smartphones. Security researchers often say whether an attack or vulnerability is possible "before/after first unlock" in reference to the fact that the security is a totally different story if the phone has been unlocked/decrypted since last boot.


In the write-up search for the bit that says "and one time I forgot to reboot the phone".

tl;dr: It's not an encryption bypass, it bypasses the lock screen once the phone has been unlocked once.


Which is equally important. Most people have their phone in that state far more often than powered down.


I wonder if this can bypass the "Lockdown" mode. I always recommended people switch the phone fully off in lieu of using Lockdown.


I have an obsession with classifying software bugs into general categories, looking for the "root cause", or more constructively, for a way to avoid entire classes of bugs altogether. I've been doing that for more than 20 years now.

This bug, if you look into the fix, falls into my "state transition" category. You can (and should) model large parts of your software as a state machine, with explicit transitions and invariant checks. If you do not, you still end up with states, just implemented haphazardly, and sooner or later someone will forget to perform a task or set a variable when transitioning from state to state.

This category is my strong contender for #1 problem area in all devices that have a user interface.


I think the root issue is one of which state is the default one. In Android the logged-in state is the default one, and the logged-out state is constructed by taking the logged-in state and essentially hiding it behind a modal.

The issue with this is that systems have a tendency to return to their default state. If the modal is dismissed or has a bug that makes it crash or has a memory corruption, or any number of things then the system will return to the default state.

I would turn it upside down, and let the logged out state be the default one. The logged-in state is then a lockscreen that is hidden behind a session. If the session dies you are back at the lock screen. The lock screen can't be dismissed because it's always there. If the lockscreen crashes the phone locks up completely because there is nothing else there to show.

It's acceptable for failures and exceptions to decrease privilege (boot you out), but they must never increase it.

Edit: Ideally the lockscreen should also run with reduced privileges so that it literally can't start a session even if it wants to, except indirectly by supplying a password to some other system.
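As a sketch of that inversion (hypothetical types, Java-ish): privileged operations hang off a session object that only a successful credential check can create, so any failure path can only ever degrade to locked.

    import java.util.Arrays;
    import java.util.Optional;

    // Hypothetical sketch: locked is the default; the only way into the
    // privileged state is being handed a Session, so crashes and dismissed
    // dialogs can only ever drop you back to locked.
    final class Authenticator {
        private final char[] expectedPin; // stand-in for a secure-element check

        Authenticator(char[] expectedPin) {
            this.expectedPin = expectedPin;
        }

        Optional<Session> unlock(char[] pin) {
            return Arrays.equals(pin, expectedPin)
                    ? Optional.of(new Session())
                    : Optional.empty();
        }
    }

    final class Session {
        // privileged operations belong here, never behind a global
        // "unlocked" flag that UI code could flip
    }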


Anything related to security should fail safe.

Failure here is not a lack of rigour; it comes from a fundamentally flawed architecture.


This begs for the famous quote:

"When a fail-safe system fails, it fails by failing to fail-safe." (from the wonderful "Systemantics").

Yes, one should definitely try to fail safe. But managing your states and state transitions explicitly and carefully is a good way to avoid these kinds of bugs.


Do you have any writing I can read about your classification? This sounds extremely interesting and useful. (I have some related thoughts, but not 20 years' worth and largely not recorded.)


Hmm. Perhaps I should get my notes into shape and publish them… I'll think about it. I would need to force myself to post them to HN without looking at the discussion, though.


Reminds me of Orthogonal Defect Classification. Analyze defects for when they were introduced (during development, architectural design and so on) and what caused the introduction of the defect into the system in the first place.


I second this comment. It would be very interesting to see a rough sketch.


Another way to look at it: since the bug comes from a race condition, modeling your program in a functional-programming style would minimize these bugs.


I would like to subscribe to your newsletter.


How come the security model is so basic?

I even think they should dismiss the modal by ID instead of type.

As this is a highly sensitive part, I think stacking lock screens on top of the unlocked menu leaves the door open for many bugs that could unlock your device.

The unlocked menu should be locked at all times, and use a flag to monitor if it’s locked/unlocked, and only flip the flag when you unlock with biometrics or with password.

If the flag is locked, then the whole screen is black and can't have any interactivity via touch, mouse, keyboard, etc.

This way is more robust, so even if you manage to bypass the stack of lock screens, you end up with the main menu locked.
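A rough sketch of that belt-and-braces flag (hypothetical names): the home surface independently swallows all input until the credential verifier, not the UI, flips the flag.

    // Hypothetical sketch: even if every lock screen were dismissed, the
    // home surface refuses input until the credential verifier says so.
    final class HomeSurface {
        private volatile boolean unlocked = false;

        /** Called only by the credential verifier, never by UI navigation. */
        void onCredentialVerified() {
            unlocked = true;
        }

        void onLock() {
            unlocked = false;
        }

        /** Returns whether the event was handled; locked means render black. */
        boolean dispatchTouchEvent() {
            if (!unlocked) {
                return false; // swallow all touch/mouse/keyboard input
            }
            // ... normal input handling ...
            return true;
        }
    }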


I was also thinking they should only dismiss by ID instead of type.

The other question is, why would background tasks be permitted to call dismiss at all? I can imagine a scenario where you get a malware app installed using whatever method. Then when you get physical access to the phone, you send a notification to the malware app. The malware app in the background calls dismiss on every possible type several times to unlock any possible security screens.

There should be some sort of lock/flag/semaphore that is held by the current top level security screen. Dismiss should only be callable by whatever process has a hold of that. Dismiss calls from anyone else should not only be denied, but processes that make such calls should be blocked, quarantined, marked as suspicious, etc.


If a non-system app can call dismiss at all, it's already game over.


Oh, of course.


I was thinking that if I had to code this, this issue would cross my mind at least once: the question of "what happens when there are multiple screens stacked" and how it should be handled properly. This is what meetings are there for, to discuss such issues.

It almost sounds intentional, but at the very least like a very sluggish approach to security.


I think an even better approach would be to have the concept of fixed tiers of locking combined with evicting the decryption key for any Lock Screen above the basic PIN.

And you can only move down one tier of unlocking at a time. Unlocking SIM PIN moves you down one tier to phone PIN screen.
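Something like the following sketch (hypothetical tiers): a passed challenge can only step the state down by one, so skipping straight to UNLOCKED is impossible by construction.

    // Hypothetical sketch of fixed tiers: each success moves exactly one
    // tier down, and a dismiss for a non-current tier is ignored.
    enum Tier { SIM_PUK, SIM_PIN, DEVICE_PIN, UNLOCKED }

    final class TieredKeyguard {
        private Tier tier = Tier.SIM_PUK;

        synchronized void onTierPassed(Tier passed) {
            if (passed != tier) {
                return; // stale dismiss for a tier that isn't current
            }
            Tier[] all = Tier.values();
            tier = all[Math.min(tier.ordinal() + 1, all.length - 1)];
        }
    }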


I'd go for one screen with a queue of prioritized unlocking tasks which need to be completed successfully one after the other. These tasks could get the chance to hand over a fragment which the screen embeds and presents to the user, in order to properly modularize the tasks.


And this is something I came up with on the spot. Engineers should think about this like their lives depend on it; this is a major security flaw.

Even more robust would be to switch that flag off using the password, or a password derived from biometrics.


It is Android.


To be fair, we only know the source of the bug since it's open source. With iOS, we have no idea how bad the code is behind the scenes


#0016 VENDOR: GOOGLE STATUS: FIXED (NOVEMBER 2022 UPDATE) REPORTED: JUN 13, 2022 DISCLOSED: NOV 10, 2022 (150 DAYS)

Project Zero only gives vendors 7 or 90 days before disclosure...

The short version: Project Zero won't share technical details of a vulnerability for 30 days if a vendor patches it before the 90-day or 7-day deadline. The 30-day period is intended for user patch adoption.

https://googleprojectzero.blogspot.com/2021/04/policy-and-di...

Google should have given Mr. Schütz $200,000 alone for not revealing it.


This is also the point that stands out to me the most. It is hypocritical and pretty close to negligent if they set such high standards for the other companies they investigate but can't live up to them themselves.

I can only hope this is a singular case or else the argument "yeah we collect your data but we also keep it safe!" falls pretty quickly.


If you follow published reports of Android vulnerabilities you'll see that taking longer than 90 days for a fix is actually not that rare. I myself had a similar experience a couple of times.


I am surprised there is an assumption that Android fixes will get to users within 30 days.


Seems to me like this impacts not only Pixel devices but all Android devices?

Patch was to AOSP: https://github.com/aosp-mirror/platform_frameworks_base/comm...

I don't have a locked SIM handy, but can someone please test on their non-Pixel device and confirm?


Not testing it right now, but my understanding is that the issue technically exists for every device, but the specific condition (putting the lockscreen on top of the secure screen stack right before `.dismiss()`-ing) is a Pixel software bug.


One of the commenters on the blog post stated that the bypass did not work on their Samsung device.



Thing is, most phone manufacturers customize the lockscreen quite a bit, so it's possible (but not guaranteed!) that it affects others.


I don't think many phone OEMs will actually go to the effort of mucking around in the lock screen mechanisms.


That's pretty much the first thing every single one of them does to differentiate the phone.


Yeah, agreed. I wonder if it affects LineageOS, which I run.


It does.


Oh no. Do you have a source?


Tried


Every once in a blue moon when I pick up my locked iPhone (which auto-locks in just 30 seconds) and engage the home button just as the screen comes alive from the gyro sensing movement, it unlocks on its own. It just flashes the PIN dialog and slides right onto the home screen. I don't use Touch ID, and never stored my print with it even once to test the feature/hardware. It's been happening ever since iOS 11, with both my 1st gen. iPhone SE and my current iPhone 8.


Do you have an Apple Watch? My phone unlocks as long as I'm nearby, wearing the watch and have it unlocked.


No Apple watch, and it can happen without the phone being connected to anything Bluetooth/Wi-Fi.


You'd better record it and show it to people if possible.


But the Watch tells you it’s unlocking the phone.


And unlocking with the watch only works on Face ID iPhones to make it more convenient when you're wearing a mask.


And it doesn’t do a super great job at it - oftentimes the underside of a table or sofa cushions will trigger it.


Delete this comment and file a bug report; you could get $100k.


I reported it years ago but the report was ignored and closed, possibly because I could not provide a reliable/reproducible procedure for triggering it.


Yeah I’ve seen this too.


This sounds like a UI race condition and actually gives me more confidence in the iPhone (unlike the Pixel, the unlock state isn’t tied to UI elements).

Unless of course you can do this long after it locks…


The issue is that it happens after the phone has locked, not that the PIN dialog happens to briefly flash before being bypassed.


Very strange. I’ve always used Touch ID when available so I can’t say I’ve experienced the issue myself.


    > "Hopefully they treated the original reporter(s) fairly as well."
Perhaps they should have reconsidered a bounty payment of some sort for the first bug reporter as well. Perhaps that's where the other $30k of the $100k went.

This actually says something interesting about bug bounty programs in general:

Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.

What's the correct protocol to be fair to all of the original bug reporters in those situations? Split the bounty? Reward all of them with the full amount?


> Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.

This was not a case of repeated reports eventually being taken seriously. None of them were, until the author met people working at Google in person at some event, showed them the issue, and then persisted.

Less "Oops, we received a couple of reports, better look into it" and more "this guy won't stop bothering us about it, we should probably look into it".

Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue yourself from those, you can ask them to expand; it's in their interest for you to be able to understand them, since that's how they get paid.


> Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue yourself from those, you can ask them to expand; it's in their interest for you to be able to understand them, since that's how they get paid.

Fair point, but it's also in their interest to overestimate the impact of the bug they found. And even if the reports are well written, many reports that I've seen (mostly from new gray hats) were not actually exploitable, even with aggressive PoC code.


Having been on the other side of a bug bounty: this screams of not having enough resources to take care of the reports.

At Google's size, it's probably impossible to have a team large enough to deal with all the reports.


This is where I find out my otherwise completely functioning Pixel 3a no longer gets security updates, as of May.

I knew and accepted that it wouldn't get new features and major android versions, but to not even get security updates, after only three years?

So my options are: live with the piece of technology in my life that is both the most vulnerable to physical security issues and has the widest access to my critical information no longer getting active security updates; attempt to root it and install my own build (is CyanogenMod still a thing?); or throw a working piece of complex technology in the trash?

Amazing


There is LineageOS now, https://wiki.lineageos.org/devices/sargo/

The most recent Pixels get a few extra years of only security updates after regular updates run out. So at least they have improved the policy somewhat now.

It would be great if Google went ahead and fixed this problem in particular for more devices, though.


Dunno whether you'll read this, but as a fellow 3a owner I just discovered that there is a September update image on the google website: https://developers.google.com/android/ota#sargo -- I guess maybe this will appear as an OTA update eventually? On the other hand it still doesn't have a fix for this bug, so the situation isn't really any different :-(


Or install GrapheneOS? Pixel 3a support is being phased out slowly but surely but currently it's still getting all security patches.

EDIT: I stand corrected, they stopped support in September. :(


GrapheneOS has just patched this vulnerability for the Pixel 3a!


That's great news!

Can you tell me where to find release information for the Pixel 3/3a? I see that there's a new version from Nov 11 (only) for the 3 series, but the most recent changelog is from Nov 10 (https://grapheneos.org/releases#2022111000) and I don't see any information there regarding the lock screen bug.


So you migrate to Apple


That would be the "throw a working piece of complex technology in the trash" option, yes


Bottom line: have buddies at Google if you want anything to ever get fixed.


Well there just went my chances of getting anything fixed at Twitter or Facebook...


I doubt Twitter would be fixing much of anything even if you knew someone still there.


and Facebook... and every other company too cheap to pay for support.


Applies to YouTube too.


which is also Google


Technically Alphabet.


YouTube is a part of Google. It's not a separate "bet" like Waymo.


A fun and interesting read. But it is frustrating to hear that such a major security bug was ignored until a happenstance meeting with Google engineers.


Wow, this is very serious - it pretty much turns every "left my phone on the bus" incident from "oh well" into "all your data was compromised". I don't know how Google couldn't take this seriously. Even after the poster physically demonstrated it they took months to fix it. For sensitive data with legal disclosure requirements this is a game changer.

Very disappointed with Google here - even though I lost a lot of trust in them in other areas, I still rated their security stance as excellent, especially on their own phones.


This is pretty terrible.

* The security screen "system" works as a stack, and the individual handlers for them don't actually have a reference to their own security screen. That seems like a terrible design; this design caused this bug, and the fix for it feels like a band-aid that could have unintended consequences (and thus new security implications) in the future. The handler for the security screen should have a reference to the security screen itself (or an opaque token or something like that), so it can be sure it is only dismissing its own screen. Passing a "security screen type", as the new, "fixed" code does, is not specific enough for this kind of code, and still seems unsafe to me.

* I'm a bit confused as to how this could unlock a newly-rebooted phone. Isn't the user data partition unmounted and encrypted? How could this partition get decrypted if the user doesn't get the opportunity to enter their device PIN, which should be needed to gain access to the partition's encryption key? I guess maybe it isn't decrypted, and that's why the system got stuck on the "Pixel is Starting" screen. Still, pretty concerning.

Meanwhile, my Pixel 4 still only has the October 2022 security update, claims there is no update waiting, and is presumably still vulnerable.


It doesn't get decrypted. The data is still safe after a reboot. That's presumably why the phone hangs for the author after a reboot. Although some comments have said Android itself loads (and I guess also the launcher), it can't really do anything.


I went to buy a phone maybe two months ago. Before I had my current Google Pixel 6, I used a OnePlus 3T for six years, and even then I only stopped because I sat in a hot tub with it on. At the T-Mobile store, I announced to the salesman that I would be back to buy a Pixel 6 when they had it in stock, and a man pulled me aside and privately asked me why I wanted to buy a Pixel.

He explained to me that he was actually working in the hardware division at Google and that the team that he was managing was responsible for some parts of the Pixel's design. But he added that he had never actually talked with anyone out "in the wild" who owned a Pixel or made a positive attempt to buy one. He went on to explain that most of his team didn't use a Pixel either - they were all pretty much using iPhones, but some were even using Samsung devices.

I understand that this was someone from the hardware team and it doesn't necessarily reflect on the people who work on the Android OS, but I feel silly for not having taken what he said into consideration when I finally bought a phone. If the people working on a device don't even want to use it themselves and can't figure out a compelling reason for anyone else to use it, shouldn't that have been a strong signal to me that I shouldn't have selected it? But I did, and I've been regretting it since. Great camera though.


I'm... not sure that I would take a random person* in a T-Mobile store at their word when they claimed that they were "actually working in the hardware division at Google."


I recognize that I should've been more clear, but the person who pulled me aside was a random customer waiting in line, who did so when I told the T-Mobile guy that I was planning on getting a Pixel, which they didn't have in stock. I did ask a fair number of questions about what he did, to determine that it wasn't just someone older who was messing with me. Granted, this was some number of months ago, but if I recall correctly he was trying to figure out why people wanted Pixels, because on his team people would use iPhones because their family members used iPhones, or because it was easier from an enterprise security standpoint with BYOD. I'm not sure if I remember specifics beyond that.

It's kind of a post-hoc realization that I should've used his admission as a reason to second-guess the purchase of a device which, I've come to discover, has a stock messenger application that fails to sync message receipt times, gets very hot to the touch, and drops cell tower connections until rebooted. And, as the article we're replying to points out, had a lock screen bypass bug that wasn't fixed for months.


In the story above he says he wants to buy a pixel to the salesperson, but it is someone else that says they work at Google.


Thanks, I've updated "salesperson" to "person."


So basically, even sketchier.


No, I'm much more likely to believe that a customer at T-Mobile is a Googler than that the salesman is.


Especially if this is a store in mountain view!


Did this self-reported hardware engineer from Google tell you WHY his colleagues don't use a Pixel?

You could have just as likely been listening to hot air from a random individual. Perhaps an Apple store employee with an axe to grind.


I'm not the OP but I know a couple of Google SRE's and an Android Auto HCI person and they use iPhones...


Sigh, like Microsoft UI designers using MacBooks. How does someone in charge not demand that the developers dogfood the product?


Because then you get headlines like Meta got recently, where developers are being forced to use Horizons(sic?).

TBC I also agree that you should dogfood things you build, especially in the cruisy world of software development where if you really hate what you work on you can just go somewhere else. It is a bad look in the media though


Or maybe T-Mobile gave higher commission for apple sales that month.


The main reason for this is just fucking iMessage.

It's not even just that iMessage segregates non-iPhone messages by color. It screws up video and group texts.


I don't get the iMessage hate.

Just use another app if you want. They already provide compatibility with SMS. What more do you want?


Out of curiosity, why have you been regretting it? I've been using Pixels for quite a while now and generally been quite happy.


Not OP, but for me, the biggest annoyance with Pixel 6 compared to older devices was the fingerprint reader under the screen - so uncomfortable to use, and so much less precise than dedicated readers on the back like they had before (or on power button, like some other phones do).

A general frustration with the entire Pixel line is the lack of video output options. It's basically Chromecast or GTFO - no Miracast, no USB-C video. It's kind of sad when an Android phone has worse compatibility with generic hardware than an iPad! And the most annoying part is that Google deliberately removed all of these at some point in the past.


There are plenty of reasons people have given Google grief over the years, but none are actual red flags.

For example, I think people give Google grief over making it difficult to unlock the bootloader, but the same can be said of every other vendor.

In my experience, using the Pixel is good enough that I don't miss my Nokia 6.1 running LineageOS too much.


>For example, I think people give Google grief over making it difficult to unlock the bootloader, but the same can be said of every other vendor.

It's not so much that as it is buying an unlocked Pixel and RMA'ing it when hardware problems happen only to receive a locked phone in return. This is the sort of thing that makes people angry.


The fact they’ve had multiple critical bugs relating to emergency calls over the years is a pretty big red flag to me.


What don't you like about the Pixels? I just switched to an iPhone this year and really regret it (the UX is horrible) and miss my Pixel.


I've also struggled to install GrapheneOS and Calyx on my iPhone Pro Max. /s

(I actually do have both and definitely prefer the Pixel -- especially the cameras on the Pixel are amazing in low light, but it's a bit annoying how the official Gcam app seems to expect that Google photos is installed.)

How funny is it that the best way to de-Google is to buy a Google device, and that Apple is incredibly prescriptive about knowing exactly who you are, connecting to wifi, and getting your purchasing on file before you can even get through the initial setup on an iPhone.

The one thing I like about iPhone over Android is that the animations are a bit nicer and more polished, and the stock color scheme is pretty bright (but awful at night), and everything else about Android seems to be better.


Modern carriers are migrating to VoLTE, and LineageOS is unable to implement this outside of a few devices, meaning that many phones have been dropped from the latest release.

As (W)CDMA is shut down in favor of 5G and LTE, Pixels (model 3 and above) will be more desirable for users who wish to run Android without Google on modern cellular networks.

I am one such user.

Supposedly, two different implementations of VoLTE exist in AOSP, neither of which is used outside Pixels (if I understood previous discussions correctly).


I have a Pixel 6 because I bought it through Verizon when they offered a $5/month rebate for 36 months = $180 off. This was at the beginning of this year, so before the Pixel 7 released. I assume they offered the rebate because they had a whole lot in stock that nobody bought, but everyone in my family got a Pixel because of it.


I met a guy once that pulled me aside and told me he was an alien.


Your perception is that the probability of meeting someone who works at Google in the United States is as low as meeting an extraterrestrial capable of speaking English? What justification do you have for believing that I would be incapable of asking questions that could tell the difference between an actual Google engineer and a random person pulling my leg? Google employs more than a hundred thousand people in North America alone. Some people who are now employed there were in my graduating class! Do you have some reason to call me an idiot?


Mr. Edman. I have no doubts about your experience. I'm sorry if I've hurt your feelings.


What was the pragmatic purpose of saying "A person pulled me aside and told me he was an alien" if not to cast doubt on me talking to a Google employee? You didn't hurt my feelings but you were clearly trying to express doubt that I talked to a person that worked at Google. Where does that doubt come from?


Mr. Edman,

I have received and read your response regarding my apology to you. I'm concerned that I may have upset you more than I realised.

My grandfather once told me not to take random strangers' comments too seriously. I try to live by that advice, whether it's comments from internet strangers or, for that matter, customers at the phone shop.

This is of course not advice from me to you. But I thought I'd share it with you in case you might find it useful too.

DBNR,

R. Root


Roll to disbelieve; I think someone was pulling your leg.

Especially the bit about Samsung devices - I've had the misfortune of setting up a Samsung phone for a family member, and the amount of crap on those is just unbelievable.


Could it have been a sales technique? Perhaps to sell you an iPhone. Perhaps the iPhones they were trying to sell have a higher commission and margin than the Android phones.


I wish closing things as "this is a duplicate" essentially required disclosure of the original (dupe) report.

It may well be that it's a dupe, or it may be something that looks similar but isn't actually the same. And indeed, as in this case, it's only the follow-up report that got the bug fixed.

In this case it seems that contacts at google allowed them to escalate anyway and get it fixed.

But so often and especially with other programs almost everything gets closed as "dupe" which is just dispiriting.

In any case, if something this serious is a duplicate then there's the suspicion it went unfixed for long enough to be independently discovered and reported which is worrying.


Everyone who reports an undisclosed bug should get a share of the bounty; this incentivizes them to stick to the embargo.

If too many people are reporting the bug before you fix it then you have other problems.

I also start to feel that at Google's scale bounties this serious should start doubling every month.


Do you mean each new person should get a new bounty, or all reporters should split the bounty? The latter does not really incentivize much, but the former incentivizes reporters to collude with other reporters (i.e. you find a bug, tell your 40 friends to report the same bug, you get a kickback from all your friends who also reported it. $$$$).


The latter does incentivize everyone who stumbles across the bug to not disclose it. At the same time, it's sad for the original researcher whose bounty gets smaller with every new person stumbling across it.


It does imply that finding it was easier than in cases where you are the only reporter, partially justifying lower rewards.


No. A bug that can be trivially found is higher likelihood of being exploited, and thus higher impact.


Higher impact; but if it is just luck that you are the first of many to find it, and you did not invest a lot of work in its discovery, it is reasonable to pay less. Under the "closed as dup" system, you probably get nothing for reporting trivially found bugs, whilst you are still providing valuable information (that lots of people can find them).


Well, I see where you are coming from: the point of bug bounties is to reduce risk to the company, not necessarily to reward the effort of the researcher. There is a sense that a bug you need NSA-level skill to find is less likely to be exploited than a bug that every script kiddie stumbles upon.


> Everyone who reports an undisclosed bug should get a share of the bounty; this incentivizes them to stick to the embargo.

Having worked with bug bounty programs, I can guarantee this would be abused to no end. Reporters would enlist their friends and family to submit as many duplicate reports as possible.

There are a lot of good security researchers out there doing work in public, but bug bounty programs also get flooded by people looking to game the system in any way possible.


I mean you all share the fixed bounty amount. You could only game the system if you expected other people had already found the bug. However, this would be risky, as it is fairly easy to detect and penalize. The common case is still that you only get one reporter per bug.


Yeah for purposes of the reward it should only be allowed to be considered a dupe if it duplicates a disclosed bug.


I agree, but just to play devil's advocate: if I discover a bug, report it, then tell all my friends to also file a report before it is disclosed, they'd have to honor multiple bounties.

I, too, am frustrated that I've read far too many stories about someone reporting a devastating critical exploit and all they get is "this is a dupe" back without further explanation. Makes one paranoid that employees are working with someone externally, back dating their bug reports, and splitting the bounty.


You'd probably violate the agreement so you and everyone else technically wouldn't qualify and would be committing fraud. That said there are other options, such as splitting the reward for a vulnerability amongst those who report it (even the dupes). This would incentivize people not to disclose the vulnerabilities while keeping the payouts static.


I suppose the risk is people could 'game' the system.

Person A finds the issue, reports it.

Then Person A secretly tells Person B about it (with no apparent connection), and Person B reports the same issue a few weeks later, but with an apparently different code/description to look ever so slightly different.


Split the reward between everyone who reported it. It's even still kind of fair: The more people find it the easier it was to find.


Of course, then when A and B independently find a bug, B can enlist C, D and E, thus taking 80% instead of 50% of the bounty.

No system is perfect.


> I wish closing things as "this is a duplicate" essentially required disclosure of the original (dupe) report.

Only if it has been fixed and is allowed to be talked about, else malicious actors will submit speculative bugs to see if they catch anything.


Speculative bug reports are irrelevant, since they don't have a repro/proof of concept.


I’ve run into this with other vendors and really wished it’d get you CCed on updates so you didn’t have to ask for status periodically. It definitely doesn’t give a good impression when things drag out for aeons.


What's crazy is that it's 100% in the vendor's interest to keep this person happy, who they know can cause massive damage to their system, completely legally. The only leverage they have is the reporter's greed to get a bounty.


It's not greed to hold a company accountable to its promises of compensation.

Even so, surprisingly many researchers disclose a bug after setting a reasonable fix deadline, at the risk of forfeiting compensation. Kudos to them!


Surely in this case, the second report must have added some details, since they weren't fixing the original report, and I assume Android doesn't just sit on lock bypasses.

Seems to me that if you report something that significantly recontextualizes a previous report (e.g. make it go from a low to a high severity), then your report shouldn't be considered a dupe.


I've reported some bugs to programs on Hackerone before that were flagged as dupe and the triager did reference the original report. Chrome team does this too.


I wonder how many LEO agencies are now digging Androids out of the evidence closet.


Sounds like this only affects phones that have been unlocked since the last restart, so unless they have kept them plugged it is unlikely that this attack would be successful.


This is why Graphene OS has an auto-reboot feature. So that a device cannot be kept plugged in until an exploit like this is discovered.


Ah, yep. I wonder how sophisticated (or not) a typical police department is with these kinds of procedures.


Some discussion elsethread[0] suggests that that may only be the case for devices that are encrypted, as the passcode in that case would be part of the key for unlocking the contents.

If that's the case, it's possible that this attack may still work from a fresh boot for unencrypted devices.

[0] https://news.ycombinator.com/item?id=33550327


LEO already have access to locked phones via stuff like GrayKey.

https://www.grayshift.com/graykey/


I am always skeptical of these "lawtech" companies that sell magic unlocking devices. Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?

I find it rather convenient that the "detailed support matrix" is available to current customers only; seems to me like the actual set of supported devices/operating systems would be limited to things such as outdated Samsung Galaxy phones and similar.


It's complicated, but yes, there are a lot of ways to unlock devices, some of which include straight up exploiting the device. Keep in mind, btw, that a lot of the sorts of criminals local LE is going after with these devices are not usually running fully patched iPhones or Pixels.


>Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?

If you are at all familiar with the histories of jailbreaking, previous exploits, and the gray unlock market, it's unreasonable not to consider this the default case.


It works. It's basically a software brute force that works great for 4 digit pins, takes longer for longer passcodes. Other offerings are a keylogger for the pin/passwords after they "return" the device to the suspect.


How would you install a keylogger on an encrypted device without rooting it or deleting user data?


I guess you could replace the screen with one that logs taps.


maybe you could sniff the data coming from the touchscreen with something you physically install into the phone.


> It's basically a software brute force that works great for 4 digit pins, takes longer for longer passcodes

Since the PIN/password isn't actually the encryption key, and is instead just the code that is provided to the security module/TPM on the device, I fail to see how this can be brute-forced. Unless there is also a magic hardware backdoor in Android phones, but in that case why would there need to be private companies, and how would they even have access to it?
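To illustrate my point, here's a rough sketch of the flow (illustrative only, with made-up names; not actual AOSP or secure-element code):

    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Toy model: the disk key is derived from the PIN *plus* a device secret
    // that never leaves the secure element, which also throttles guesses.
    class SecureElementSketch(
        private val deviceSecret: ByteArray, // fused into hardware, never exported
        private val correctPin: String
    ) {
        private var failedAttempts = 0

        fun tryUnlock(pin: String): ByteArray? {
            // Hardware-enforced backoff: each failure lengthens the wait.
            if (failedAttempts > 0) Thread.sleep(100L shl minOf(failedAttempts, 10))
            if (pin != correctPin) { failedAttempts++; return null }
            failedAttempts = 0
            // Key release: HMAC(deviceSecret, pin). Without deviceSecret, a short
            // PIN cannot be brute-forced offline against the ciphertext.
            val mac = Mac.getInstance("HmacSHA256")
            mac.init(SecretKeySpec(deviceSecret, "HmacSHA256"))
            return mac.doFinal(pin.toByteArray())
        }
    }

So a "software brute force" product would have to defeat the throttling logic or the software feeding it, not just guess quickly.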


Confiscated phones often have been confiscated for months and are therefore on a relatively old patch level. If at any point old vulnerabilities come out, these can be used. Keeping the phones on and connected to a charger in the evidence lockers doesn't seem like too much work.


> Keeping the phones on and connected to a charger in the evidence lockers doesn't seem like too much work.

There's no way that's a standard procedure.


Why? Seems pretty intuitive to me in a time where everything is encrypted.


If the phone is set up for automatic updates, it'll restart within a month (most of the phones I've had do monthly security patches) and you'll be in a fresh boot state. You can't turn off the updates without first unlocking the phone, giving you a rather limited window to attempt to exploit the device.


Will it reboot if it's not on network?


Updates need network access. If the phone isn't on a network, then it won't reboot.

Police can't really pop out eSIMs, so that means the police need to keep the phone in an RF-proof bag or work in an RF-proof room.


Which, again, wouldn't be too much work. Also, my Android phone does not update and reboot automatically.


Android disabling USB data by default has been a thorn.


I think it has problems in some cases, such as PIN codes longer than 6 digits.


Why don't Google and Apple buy this product, then proceed to analyze and close all the holes?


If I had to guess: not everyone can buy this software and A/G are not wanted by the sellers. Even the usual customers (law enforcement) are not very likely to pass exploits to them, because their work would become more difficult.


Very weird implementation with UI stacks and dismiss. The way we designed a multi-step flow for a web app was basically a sort of state machine/flow which says what the possible transitions are,

say password > mfa1 > mfa2 > done

and, as each step completes, what the next security step is for this particular user's configuration, and we simply allow just that transition. Once we are at the done state, the authentication is marked as successful.

Not storing auth state in the UI (regardless of any MVC concern) and allowing only a very narrow set of state transitions seems like a trivial design choice; see the sketch below. I assume Google has no shortage of people for security-focused design.

The UI stack being created together and dismissed, rather than created/called on demand as state transitions happen, also seems a very weird design. Perhaps I don't understand the reason, because I'm not an Android programmer.
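A minimal sketch of that transition-table idea (hypothetical names, nothing from a real codebase):

    enum class AuthState { PASSWORD, MFA1, MFA2, DONE }

    // Only the single expected transition is allowed; everything else is a no-op.
    class AuthFlow(private val steps: List<AuthState>) {
        private var index = 0
        val current: AuthState get() = steps[index]
        val authenticated: Boolean get() = current == AuthState.DONE

        fun complete(step: AuthState): Boolean {
            if (authenticated || step != current) return false // out of order: rejected
            index++
            return true
        }
    }

    // Usage, for a user configured with password + one MFA factor:
    // val flow = AuthFlow(listOf(AuthState.PASSWORD, AuthState.MFA1, AuthState.DONE))
    // flow.complete(AuthState.MFA1)     // false: cannot skip the password step
    // flow.complete(AuthState.PASSWORD) // true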


> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.

> We typically do not reward duplicate reports; however, because your report resulted in us taking action to fix this issue, we are happy to reward you the full amount of $70,000 USD for this LockScreen Bypass exploit!

Lots of mixed feelings just reading this, but at least in the end it seems like a positive outcome for everyone.


Ah, that's a nice hack to avoid having to pay your bounties! First report: "can't reproduce, sorry." Subsequent reports: "duplicate, sorry." Then fix on whatever schedule you feel isn't too blatant.


And they stiffed him $30K


Appalling handling on Google's end here. The duplicate issue part I can understand, but why should it take two reports of a critical vulnerability to take action? Surely when the first one comes through it's something you jump on, fix, and push out ASAP, not delay to the point where a second user can come along, find the bug, and report it.

The refactor that's mentioned towards the end of the article is great, but would you not just get a fix out there as soon as possible, then work on a good fix after that? For a company that claims to lead the way in bug bounty programs, this is a pretty disappointing story.


You can read in the conversation that Google was not able to reproduce it the first time the bug was submitted:

> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.

I wonder if it really was the same bug, or whether they just made some mistake in trying to reproduce it.


Agreed. If the first bug was

> I did something weird after putting in a new PIN, and I was able to access my home screen without my password, but I'm not sure of the exact steps I did

then that's not really a duplicate. If the original bug report doesn't have enough information to recreate the steps, the second one is the only real bug report.


Yes. The first one is more like a user complaint than an actual reproducible bug report.


Then if that’s the case, the author should have been paid a full payout, not a “thanks for making us fix this” payment.


Just trying to rationalize, but if the "external researcher" was hired by Google to find security issues, Google might have a requirement to fix the bug at its own pace.

I would personally be highly suspicious of a security flaw being called a duplicate, though. It can be a very convenient excuse not to pay the bounty.


Reporting and investigation matter. Perhaps the initial report only covered the lock screen bypass but only ran into the still-encrypted phone state, so it was dismissed as not being exploitable (see other comments), whilst the second report actually got inside an active phone (and was also written up in a simple, concise and reproducible way).


[flagged]



They ended up rewarding him with $70,000 tho


    > "Due to this, they decided to make an exception"
Sounds like they weren't going to at first, though, because it appeared to be a duplicate, but this was the better bug report that prompted an action.

(To be fair: my hat's off to Google for even having one, and it's still shocking to me that AWS doesn't have one at all.)


AWS has a bug bounty; it's just hosted by the fine black hat community instead of Amazon.


took me a moment to catch! nice!


Yeah, because he made a fuss about it. Guess how many bugs get reported where they just tell you it was already submitted and never talk to you again.


Yeah I agree with that one. They set up a call and he stood by his decision to disclose it on oct 15th. Then 3 days before the disclosure deadline they rewarded him.


So basically Google wanted to give this guy nothing. Then he set a hard deadline for disclosure, and Google managed to buy him off for $70k so they could stick with their own deadline.


It would not surprise me if in some cases, google runs the exploit up the tree to the NSA to see if they're actively using this for matters of national security, then slow-walk the patch to release. Given how easy the exploit is (no software needed, no special equipment beyond a paper-clip), would not surprise me if this has been in wide use for several years now by various groups.



I agree that it appears to have been the disclosure threat that resulted in the bounty, but I don't agree (if I'm reading you correctly) that the OP acted unethically. It sounds credible to me that he was just doing everything he could to get the bug fixed.


According to the article, the reporter had already decided before the bounty had been set that they would wait for the fix:

> I also decided (even before the bounty) that I am too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway.


According to the bug thread transcript, Google did not yet know he wasn't going to disclose in October when they offered the $70k.

https://feed.bugs.xdavidhu.me/bugs/0016


Or, more charitably: by the terms of the program he wasn't eligible for anything, but they gave him seventy thousand dollars out of goodwill and the spirit of the program.


Then they would have done this before the sorta-threat of his own disclosure date.


I don't know anything about Android or iOS coding but...

I'm actually very surprised it's coded the way it is, as a stack of screens that are dismissed over the top of each other. I would expect one very specific boolean/flag bottleneck that says whether the device is locked or unlocked, then a list of actions that can be taken when locked (such as SIM PINs, emergency calls etc.) and when unlocked (such as everything else). And that flag could only be flipped by unlocking the phone. This set of screens on top of the phone that can be dismissed is very surprising to me; a sketch of what I mean is below.
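Purely illustrative (made-up names, obviously nothing like the real implementation):

    // One flag, one whitelist, one code path that can clear the flag.
    object LockState {
        @Volatile private var locked = true

        // Actions permitted while locked are whitelisted explicitly.
        private val allowedWhileLocked = setOf("EMERGENCY_CALL", "SIM_PIN", "SIM_PUK")

        fun isAllowed(action: String) = !locked || action in allowedWhileLocked

        // The only way to flip the flag: a verified credential.
        fun onCredentialVerified() { locked = false }
        fun onLock() { locked = true }
    }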

Does anyone know how the iOS lock screen works? Is it the same way?


Just tried it myself: works on Android 12, doesn't work on Android 11. Both LineageOS, so the bug was introduced somewhere in AOSP 12.


Given the bug was already reported (and even more ignored) it seems like the $70,000 was really a “you made us do our jobs” fee.


It read like the payment came only when disclosure was imminent. Google basically extorted themselves into paying it to encourage pushing disclosure out a couple months.


Yeah, they tried to set up a call to dissuade him but he stood by his decision. Then only 3 days before the disclosure deadline they decided to pay out.


Security through ̶o̶b̶s̶c̶u̶r̶i̶t̶y̶ bribery

*Edit: ̶H̶o̶w̶ ̶t̶o̶ ̶s̶t̶r̶i̶k̶e̶t̶h̶r̶o̶u̶g̶h̶?̶ - cheers


As far as I'm aware you can't, maybe this works though https://unicode-table.com/en/tools/strikethrough-text/

̶t̶e̶s̶t̶


Not sure if they have that https://news.ycombinator.com/formatdoc


If it was really a duplicate report, then where’s the HN article “I just got $100k for a security bug I reported a year ago!?”


> During the life of this bug, since the official bug ticket was not too responsive, I sometimes got some semi-official information from Googlers. I actually prefer to only get updates on the official channel, which is the bug ticket and which I can disclose, but since I was talking with some employees, I picked up on bits and pieces.

This is going to be one of the uncounted casualties of a downturn in tech and layoffs. When the organization is in turmoil and the tenured folks have left out the backdoor, security flaws are going to remain open for a lot longer.


Turmoil isn't a good description of Google right now, though. It's far from healthy, but it has avoided mass layoffs so far, and the bug was filed in June.


> of Google right now, though.

I agree, my commentary is more on tech as a whole, which is in turmoil rn.


THIS IS ABSOLUTELY CRAZY! I have personally tested this on my non-Pixel Android 12 device and it works. My findings:

- The exploit works even on the first password input screen after boot. However, the filesystem is still encrypted and cannot be accessed by any means (ADB/MTP). The launcher does not load fully, but Settings and other things accessible from the notification panel can be launched (BT/hotspot etc.). You can get the list of installed apps and much other sensitive information that is not stored in /data/media/0/.

- ADB can be connected and a shell can be launched, but the data partition is not accessible.

- MTP initializes but does not load.

Although one cannot access the user data, the ability to access/control other parts of the system potentially exposes a huge attack surface.


Are you able to set a new pin or fingerprint from that state?


What specific device do you have?


The device I tested this on runs a moderately modified AOSP-based OS. I cannot specify the device model etc. I have also tested this on another LineageOS device, and that is also affected. So I suppose any AOSP-based ROM that isn't heavily modified (like Samsung/MIUI) is affected.


So, did someone else get the full $100k for reporting the vuln already or was that BS?


It's entirely possible it came from a channel that wasn't eligible, or from a Google employee, or one of their own security testers, etc.


Google could not reproduce it the first time, so presumably they didn't pay up.


> I mentally noted that this was weird and that this might have some security implications so I should look at it later.

If I had experienced the same situation I'm sure I wouldn't have noticed that something was wrong. Kudos for noticing that and thank you for documenting it for everyone to understand :)


>Two weeks after our call, I got a new message that confirmed the original info I had. They said that even though my report was a duplicate, it was only because of my report that they started working on the fix.

Google engineers don't seem to care much. Or am I being too harsh here?


If you are that makes two of us, my partner just asked me why I shouted "what the hell?!" when I read that.


But for sure the instruction manual says that the sim can only be inserted/removed while the device is off? Security is ensured!


I was surprised that hotswapping SIMs works; I thought it was not supported.

Many phones used to have the SIM under the battery (back when it was commonly removable), ensuring you couldn't remove it without powering the device off first.


Why do you think so?


Think? Ok, I checked. And it IS in the manual. From the google pixel help [0]:

> "Insert a SIM card > With your phone off:"

[0] https://support.google.com/pixelphone/answer/7086887?hl=en


I think I triggered this a few times; I use an alarm clock that's dismissed via a math equation, which I frequently get mixed up with the lock screen when waking up. I have GrapheneOS set to auto-reboot every 8 hours, so then there's also the SIM unlock screen. In the state of just waking up, it's easy to PUK-lock the SIM. I guess I just assumed the "Pixel is starting" message was some unrelated bug and manually restarted. I think you can still continue to a normal boot if you manually lock or wait for the lock screen PIN timeout to expire.


Lol this reminds me of those windows login bypasses by navigating some convoluted menus


I wonder if reluctance to fix this was because this "backdoor" was being used by security services. They now have to figure out the new one...


Maybe the new one was encoded in the fix.


Can we expect a fix for Pixels outside of the official service window? So 4 or older?


No. "No security updates after X" means no security updates after X. Of course you can install an up-to-date OS, e.g. LineageOS works on the Pixel 4.


"Guaranteed security updates until at least: X" is their actual phrasing.

https://support.google.com/nexus/answer/4457705?hl=en#zippy=...


It's odd that dismiss() can even be called like that from a lock screen. I did not expect the Android lock screen to be just another dismissible screen.


And what do we learn from this?

Keep the pressure on with a disclosure date until the bug bounty arrives, then relent.


The last scheduled security update for the Pixel 4 was the October 2022 one. So this might stay unfixed on those phones.

https://support.google.com/pixelphone/answer/4457705?hl=en#z...


That's pretty frustrating; I have a Pixel 4 that I quite like.

This is one of the reasons I recommend the most recent lowest-priced iOS device you can afford to family and friends who don't upgrade often.

My grandfather's iPhone 6s is just now going EOL after 7 years. Apple is a little inconsistent with updates for prior iOS versions, but it still received the iOS 15 security update.

I wonder if iPhones end up being cheaper because of the extended support?


Very interesting. To summarise, I think the issue is that the phone gets itself into a state of waiting for a locked SIM to be released before it unlocks the phone. The problem is that the attacker can hotswap in their own pre-locked SIM, for which they of course know the code, and this erroneously also unlocks the phone; see the toy model below.
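A toy model of the flawed "security screen stack" behavior, as I read the writeup (made-up names, heavily simplified):

    class SecurityScreenStack {
        private val stack = ArrayDeque<String>()
        fun push(screen: String) = stack.addLast(screen)
        // The flawed API: callers never say *which* screen they mean to dismiss.
        fun dismissCurrent() { stack.removeLastOrNull() }
        val current: String? get() = stack.lastOrNull()
    }

    fun main() {
        val screens = SecurityScreenStack()
        screens.push("keyguard")   // the real lock screen
        screens.push("sim_puk")    // attacker's hot-swapped, PUK-locked SIM
        screens.dismissCurrent()   // PUK reset succeeds: sim_puk goes away...
        screens.dismissCurrent()   // ...a racing second dismiss hits the keyguard
        println(screens.current)   // null: nothing left, device is unlocked
    }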


Add to that the fact that the Pixel 6 left audiophiles SOL for almost a year, with no 3.5mm jack and broken USB-C DAC compatibility. On top of that, DisplayPort Alt Mode is still disabled on every Pixel for no good reason, despite many Pixel owners reaching out to them, leaving us SOL for an alternative to Samsung DeX, or compatibility with devices like the Nreal Air AR glasses.

Google's hardware support IME is a shit show. They need to stop spending all their time playing with ML tricks that nobody uses outside of ads and keynotes (aside from normal camera stuff), and start listening to customers and addressing bugs and missing features that even the mid-tier Chinese phones have rolled out.

I'm buying a OnePlus next time around and never looking back. IDC if Google's new chip gives me 2x battery life at the same price, it's not worth the frustration.


If you want a stable, bug-free phone, OnePlus is not the brand for you. Ever since they merged OxygenOS and ColorOS, OnePlus phones have been laden with bugs.

I was planning to switch from OnePlus to Pixel for this reason.


OnePlus used to be good, but the experience and the OS are not the same. If you want DeX, why not Samsung? I have a Pixel 6 (which I regret trading my OnePlus 6T for), and my next phone will probably be a Samsung. The new ones are pretty stable and not as bloated as they used to be.


Was a loyal OnePlus customer, but jumped to a used Samsung Note 10+. The Note has an amazing screen and much better camera. Be sure to use the Android Universal Debloater[1] and your favorite launcher, and it will feel like stock Android, just like OnePlus used to deliver.

[1] https://github.com/0x192/universal-android-debloater


Buy an iPhone. I'm switching from iPhone to Pixel because the iPhone bans some apps which are essential for me. Android is just so bad. The day Apple allows sideloading, I'll switch back.


I wouldn't worry about missing out on battery life. Every Google phone has had pretty sad runtime since the very first Nexus. It would seem they don't care.


This is very worrying. I have two old Pixels (2 and 4) with plenty of personal data, and none have been updated for years now. I am sure other people too keep their old phones around and won't be getting updated either.


This only applies to phones that remain continuously powered on and still retain the decryption key in memory. It's possible you turn on your old, unused phone, unlock it (putting the key in memory), and leave it plugged in for years, but the number of phones in this state is probably vanishingly small.


It would be interesting to know whether it is also reproducible on other brands' phones.


My daughter wears earrings, so she never needs a SIM ejection tool. But I keep one on my keyring - amazing how handy it is. (I don't carry a PIN-locked SIM card!)


Why do you need to pop out your SIM so often? Is that an Android thing, or are you actually swapping your SIM all the time?


No, big family, frequent SIMs moving between phones. But it comes in handy for other things that need a similar pointy end too.


I also keep one on my keyring and I get poked in the finger/thigh by it all the time; it’s super annoying! Still keep it though since I regularly have to pop my SIM in and out..


Kind of related: My new Samsung S8 c. 2017 has unlocked a few times without my finger or password. The first couple times I figured user error. By the 5th time, I'm pretty sure it's a software/hardware issue. Now my version of Android doesn't even get security updates any more. Maybe time for a new phone.


Nicely written. I have found my Samsung phone unlocked for no reason more times than I can remember. I am sure there is some way to use the emergency call or the ICE feature to bypass the lock screen. It seems like these features randomly get activated while the phone is in the pocket as well.


I came across this so-called bug. I thought it was a feature. I never realized how it could be used maliciously.

Security is still such a weird concept. Sometimes it feels like a paralyzing, debilitating effort. Similar to how parents yell at you for things you should not be doing, even though it is exciting and useful.


> I decided to stick with my October deadline.

[...]

> I also decided (even before the bounty) that I am too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway. I decided to wait for the fix.

I have gone through similar trepidation.

What were you scared of?


Security researchers (like the one here) don't want harm to come to people from their actions. If they had announced the live bug before it was patched, a lot of people and organizations might have been adversely affected before the fix was applied.


Right.

I should have asked "at what point did you decide that the wait outweighs the risk?"

That is, at what point does the wait become too long, and worse than disclosing?


> We didn’t have a SIM ejection tool.

I didn't have one yesterday night. Things that work: the classic paper clip, a small staple for paper sheets, the inner metallic wire of twist ties (or whatever they are named). I discovered the latter yesterday.


Glad he got rewarded. Feels like this could have played out differently, if it had hit his disclosure deadline we might have been reading about him going to prison, such is the febrile nature of the legal situation around vulnerabilities.


I tell companies 90 days. When they ignore me I go public at 90 days, consequences be damned.

No jail time for simply telling the truth about a discovery I made on my own time.

https://www.vice.com/en/article/3kxy4k/high-tech-japanese-ho...


Are older Pixel phones or other unpatched Android devices vulnerable to this?


Yes; that is exactly why this is so nasty.


It's quite amazing how poorly designed Android really is. Every single part of that OS is poorly architected and has horrible APIs. What a shame.


Considering it's probably the world's most used OS, it's laughable. They churn out new recommendations constantly, only to deprecate them again and think up some new tedious, convoluted approach a short time later. I think it's a combination of poor judgment and Google's toxic internal incentives to design crazy new stuff.


Linux?


Do you think there was a previous report for which this was a duplicate, or were they just trying to get away without paying?


This may be a stretch, but I could see the original report coming from an intelligence service.

The report might be accompanied by a request to hold off on patching it due to active use.

This would explain the desire to wait on G's side, and why they would not explain the prior report.


And also why they patched it only when faced with exposure.


The code quality is a bit concerning, but then again, we have no idea what iOS looks like, so it's hard to judge fairly.


That architecture is scarily coupled with UI and it doesn't inspire a lot of confidence in Android.

Ship it mentality much?


Pretty alarming, and there's a ton of other complex security implications arising from this.


Have you tested this on other brands?


Wait, this bug affects unsupported Pixels now, right? A Pixel 4 won't have this fixed?


On my Pixel 4a there's a "critical security" update available for download. (Current patch level is one month old.)

I will try the exploit before and after installing it.


The 4A is still supported.


if your patch level is less than 2022-11-05 you are affected :D


Luckily, if you use a custom ROM that keeps getting updated after the official EOL, this should get fixed.


This kind of response makes it real tempting to just sell it on the market instead...


Reminds me of how my daughter can unlock my Asus phone. My code is 8 digits long, policy set by work, and a fingerprint reader in the screen. She definitely doesn't know my code but somehow can still get into my phone if I leave it on the couch.


Btw, I would love it if Samsung commented on whether this also affects Knox.


Incredibly good writeup, author seems like a great bloke.


How do you declare 70k bounty on your taxes?


"Miscellaneous income".


iPhone fanboy here. Here is another reason why I use iPhone. I feel like I left this comment way too many times.


Very well written, thank you.


Hei


Any other Android versions vulnerable?


jwz approves


I can't believe this is not a "drop everything and get it fixed ASAP" bug. This makes me think there's probably tons of other similar bugs out there being exploited right now even with disclosure.


The security researcher's only mistake was letting Google fart around for so long.

You give them 90 days, then you go public. That is the policy Google Project Zero holds other companies to, so it is only fair to hold Google to the same standard.

People using their device for high risk applications need to be informed in a timely manner, and Google needs to pay a reputational price for their negligence.


70,000 reasons to think long and hard about that approach though :-D


An alternative would be to go show a bunch of journalists that you can unlock their phone and have this all over the news. You get your name /really/ out there for holding Google accountable for security negligence and ignoring a very reasonable 90 day window. The exposure could lead to millions in security consulting contract work over time if played right.

Disclosing on time is a way to force companies to fix the bugs, and to get a major social capital boost that can be used to get a return on the time investment.

Personally I love when companies try to call my bluff. Great chance to educate the public on why they should not be trusted.


I suspect if he started the conversation with a 90 day disclosure window they would have offered him $100k immediately to extend the deadline. Of course, you'd have to consult a lawyer to make sure you don't technically cross the line into blackmail.


If you use a Pixel for high risk applications you are a bit at fault here


What a weird argument. So if the law enforcement of your country uses this technique to unlock your phone without your permission (or, you know, some criminal does that), that's your fault for using a Pixel phone? You should have known better than to, you know, buy a phone from one of the largest software houses on the planet?

I smell a fair hint of victim blaming here.


> I smell a fair hint of victim blaming here.

Why is that a bad thing? You should absolutely blame and hold the victim responsible and accountable for their part.


So let me rephrase my question - what part of the blame should be assigned to the victim here, if their "fault" was buying a phone made and marketed by one of the largest and most well known software developers on the planet?

Also, this is an interesting discussion in general. If someone forgets to lock their door and a thief gets in and robs them, do you think it's fair to "blame" the person who forgot to lock their door? Or do you think that maybe we should recognize that 100% of the blame should be on you know, the person doing the robbing?


I agree that there's not any significantly better phone options, but no I would not place 100% of the blame on the robber. When we're talking about possessions, theft is a reasonably foreseeable consequence and not an outrageous action, so the owner can get a small slice of blame.


> If someone forgets to lock their door and a thief gets in and robs them, do you think it's fair to "blame" the person who forgot to lock their door?

No, but let's say they've bought from a manufacturer that is not well known for its lock mechanisms; wouldn't it be the user's responsibility to find a better alternative? You're to be held accountable for your part.

You're making the assumption that the average person thinks Google employs the "most well known software developers on the planet"; that's your subjective take, not anything close to common knowledge.


I disagree with this. There isn't a consumer-level alternative to the security provided by a Pixel if you want to use a cell phone right now. I guess you can argue that the iPhone is better, but without a specific threat model to discuss, it's like arguing Mountain Dew is not healthy so you should drink Dr Pepper.


iOS has had many flaws this bad or worse, so what would you have people use?

I agree current-gen smartphones should not be trusted for high risk uses, but the reality is, they are. There are staggering numbers of people using their phones for banking, crypto trading, or to transmit sensitive information that could collapse markets or start wars.

Also consider not all journalists or dissidents get a choice in what phone they can afford.

Security issues like this can be life or death, and security researchers must sometimes -force- companies to treat them as such.


> iOS has had many flaws this bad or worse

Has iOS had a Lock Screen bypass in recent history?


There have been MANY such attacks against the iPhone (and every other device), most of them against the biometrics mechanisms, which tend to be pretty weak as a matter of first principles. Add to that the persistent hints/rumors/claims of gray market unlock/rooting kits available to large entities. Phones just aren't that secure, though they're much more so than they were a decade ago. Security vs. physical access is an extremely hard nut to crack, it's only been in the last few years that we genuinely thought it was even possible.


Okay, but fooling a biometrics sensor is not exactly a Lock Screen bypass. Has iOS had a Lock Screen bypass?


Fooling a biometric sensor is precisely a lock screen bypass, that's what the biometrics are for. By that logic the linked bug was "fooling the SIM security layer" and not a "lock screen bypass". Don't play that game, it's bad logic and bad security practice.


But it’s a fundamentally different type of security bug: these biometrics bypasses require knowing something about the user (lift a fingerprint, picture of a face, etc).

I see this as a different class: I can grab an unknown person’s Pixel they left in a coffee shop and get into it.


Cellebrite sits on a pile of unlock exploits for Apple devices and sells unlocking services to law enforcement, or presumably anyone with money.

https://cellebrite.com/en/cas-sales-inquiry/

Zerodium brokers sales of iOS FCP Zero Click for $2m. I expect they sell to people like Cellebrite who can make a profit selling expensive unlocks and keeping the vuln secret.

https://www.zerodium.com/program.html

All phones are security shit shows. It is just a game of how well known this month's exploits are and how much someone has to gain by targeting you.


It has had multiple zero-click remote code execution exploits, so it's actually worse?


If you use any always-listening (see rooting exploits over wifi beacons) general purpose computer for high risk applications, it's a bit your fault.


This was kind of my experience with reporting a bug to Google as well. Some years ago I managed to upload a SWF file to "google.com" which allowed me to do an XSS and access anyone's gmail, contacts, etc. I reported it and they just initially never responded and I had to constantly follow up. It was seemingly a simple bug to fix but it took them a couple months and they eventually only paid $500. Being able to exfiltrate data out of someone's gmail account always seemed high priority to me but I guess not lol.


Do you mind sharing a write-up about that bug?


I forget which Pixel generation.

For one generation Google I believe never shipped the ability to unlock your phone with your face. Despite having all the hardware on the phone, it just didn't have the feature.

This was a serious feature deficit vis-à-vis the relevant iPhone at the time.

The gossip was, the feature was finished, completely.

Had to be ripped out after external pen-testing bypassed it with Facebook photos.

They have many, big, problems.


Android introduced face unlocking in 2011[0]. It used the regular front camera and hence had no depth information, which makes it vulnerable to photos[1]. It was removed in Android 10, when a new face authentication interface[2] was added. Face unlocking without specialized hardware such as what iPhones have is not secure.

[0] https://www.androidauthority.com/face-unlock-android-4-0-ice...

[1] https://www.androidauthority.com/android-jelly-bean-face-unl...

[2] https://source.android.com/docs/security/features/biometric/...


Pixel is on generation 7. Only two supported face unlock: 4 and 7.

6 was rumored to have it, but it was never delivered.

6 and 7 are equivalent hardware-wise for face unlock: neither has the sensors to do it in a highly secure manner. 7’s face unlock therefore doesn’t give you access to the most sensitive stuff, like bank accounts, requiring supplemental, secure authentication, such as fingerprint.


I'm not really sure what you're talking about: the only generation that had LIDAR was the Pixel 4/4XL, and those shipped with face unlock.

There WAS a rumor about the Pixel 6, but it doesn't have any special face unlocking camera. The Pixel 7 does support face unlock without special hardware, with the caveat that it's less secure.


> This was a serious feature deficit viz a viz the relevant iPhone at the time.

IIRC, the iPhone uses not just a photo from the selfie cam, but adds infrared to construct a sort-of-3d-ish depth map of your face as well - that is what defeats a simple attempt at unlocking with photos.

Now, the really interesting thing to research is if a silicone molded face mask could be used to fool the iPhone into unlocking. Photos or videos of the subject in multiple angles should be enough to create a decent enough 3D face copy.


Won’t work -

https://9to5mac.com/2019/12/16/3d-mask/amp/

Muscle movement is also now necessary so it’s pretty difficult to circumvent


That just sounds like you need a proper mask that can be worn. And it doesn't sound like it even needs to fit.


Betcha someone will make a silicone mask that twitches.


A video rotating around a subject plus NeRFs could maybe get you the 3D face copy pretty easily.


> Despite having all the hardware on the phone

Did Pixel phones really have a frontal lidar?


Pixel 4 had dedicated hardware (project Soli)


Soli != hardware for face unlock.

It had 2x IR cameras, a flood illuminator, and a dot projector for that purpose. Soli was a gimmick on top of that, enabling that hardware when you were reaching for the phone with your hand.

In my case it was a gimmick because I don't see much difference in face unlock times when I reach for the phone, and the most useful feature for me (swiping to change music) also triggered when my windshield wipers were running.

I dream of a Pixel with normal face unlock (like in the Pixel 4, not the crippled one in the Pixel 7) but without Soli.

I can't believe that they ditched it after just one generation; now I'm stuck. And the only reason to upgrade would be a Pixel that takes photos >12MP (not just the sensor).


Incidentally, I said to myself I would buy a Pixel 5 if it had Soli as well because it would show that Google was becoming serious about supporting features for more than 1 generation.

Predictably, I never bought a Pixel 5 or 6 or 7.


There's exactly one drop everything and get it fixed ASAP bug at Google - something broke the ad platform.


If it's not a "Bank error in your favor: collect $200" kind of error that favors Google, that is.


I've seen multiple instances of Google failing to correctly triage critical security issues. I can only conclude from these organizational failures that Google leadership doesn't really take security seriously.

Here's another example of a critical vulnerability in GCP that Google sat on for 9 months: https://github.com/irsl/gcp-dhcp-takeover-code-exec


Yeah, right? After the article mentioned that he waited 2 months I was already shocked, then he mentioned 3 months, and so on. Sometimes it's just annoying to report something really important and still not get enough attention.


Hahaha I can just imagine finally getting in front of the Google security team in the office, then realize that you don't have a sim ejector. You try a few things like a mechanical pencil, dental pick, jumbo paperclip, etc. that don't quite work. Then asking around, no one has one but someone fortunately has a sewing kit.

Meanwhile, the Google engineers, who were skeptical to begin with, show some signs of impatience, which you become acutely aware of, and get even more nervous. While you're shaking and pressing too hard, you ultimately stab through your hand and now there is blood everywhere. You try to hide it at first and play it off but blood is getting all over everything and a few drops hit the floor. You try with the needle some more but the blood is too slippery and you accidentally wipe some across your forehead to top it off.

It's been 25 minutes at this point and 2 of them decide it's not worth their time any more and leave, which you also notice, and begin to realize your chance is slipping away and you're spiraling internally.

Eventually, someone produces a bandaid, but your hands are shaking too much and they have to pitifully put it on for you. While contemplating if you are actually a grownup or just a large child, you realize you started sweating a lot and you forgot to put on deodorant because you're traveling and left it at home. You smell your own awful fear creeping up through the neck of your hoodie, hoping that the guy who is fixing you up doesn't notice.

Crazy intrusive thoughts start to cross your mind as you pick up the needle again, but a woman snaps you out of it with "would this work?" holding up an earring. You kick yourself for not thinking of it earlier when you actually noticed her earrings during earlier chit chat. "I'll try not to get blood on it," you chuckle, but no one really laughs beyond a murmur.

In seconds, you pop the sim out and swap it, quickly demo the vulnerability speaking faster than Eminem spitting Rap God. Everyone is quiet for an eternity (1.5) seconds as pressure builds, until the engineer who handed you the earring says "holy shit" and runs off without her earring. Those in the small crowd turn to each other to discuss, taking the pressure off you as you go totally cold from the sweat that you now realize has trickled all the way down your leg.

The rest of the day you are high as ever, like the feeling of headiness after eating an extremely spicy order of hot wings or curry.


Please publish a blog full of these based-on-a-true-story behind-the scenes tech-drama fiction stories!


This is the exact vibe I got reading that part.


Given how much engineers make at Google after a long interview process that supposedly selects only the best people, how significant the login system is to security, and how "industry standard" the Google process is, this is not a bug that should ever have made it live. The bug fix shows that the issue was clearly a case of a set of people not communicating well, code reviews being lax, and a general lack of understanding of how Android works.

It's also possible that the code is too complex to understand fully, which is a requirement for correct operation. Bugs happen, but I've seen way too many cases where complexity and lack of understanding led to surprisingly bad outcomes.

The login process should have the highest amount of scrutiny.


I have spent a lot of time in the Android codebase building security/privacy focused ROMs. It was a very dark rabbit hole and in the end I realized the 240GB of messy blobs and source code can never be understood or audited by anyone.

Even if you did somehow get that much code regularly externally audited, there are piles of random binary blobs with root access supplied by cell carriers and chip vendors Google blindly includes in the vendor partition and a backdoor or bug in any one of them can burn it all down.

I abandoned the project, and stopped using smartphones entirely.

The only sane engineering effort that gives me hope for a trustworthy mobile device at this point is Betrusted. https://betrusted.io/


It looks like (I'm not an expert) they did not use a state machine there. Those kinds of behaviors are better caught with one. But I am just thinking out loud.


I can't tell you how often I still see operating system level rotation bugs from iOS on my iPhone/iPad. Complexity kills.


> The bug just got fixed in the November 5, 2022 security update.

Lovely. My Pixel 4 got its last update in October.


Don't worry, your phone already has other vulns.


Could try checking out https://grapheneos.org/; it looks like they are supporting the Pixel 4 for a little longer.

In this case, though, you would hope Google releases an extra patch for the Pixel 4: they knew this bug was there and a fix was in the pipeline.


Same. Absolutely unbelievable that they sat on this to avoid having to fix it.


I think what really gets me about this is how differently Google treats its own security issues than the ones it finds in Project Zero. They have absolutely no problem enforcing deadlines around disclosures for the vendors they find vulnerabilities for, but when it comes to their own systems they seem to have no sense of urgency while also expecting security researchers to sit on their bugs for a much longer period of time than Project Zero does.

For context Project Zero used to have a strict 90 day disclosure policy, but updated it to "90+30" a year or so ago to give people more time to patch. Google took at least five months to resolve this issue, and it's possible they took longer than that because we don't know when the first report was actually made.


Terrible response by Google to this.

Basically it seems like their bureaucracy is structured so that no one has an incentive to actually address this. Everyone at the company acted like they would rather this issue not even exist than address a critical flaw that makes locking essentially not work on the Pixel.

This gives us a look under the cover of what's important at Google, and it seems like security is a clear subdivision of marketing in this case.


Re: the title of this post, $70k is the bounty paid to the researcher, and "accidental" refers to the fact he came across the bug during personal use of his Pixel phone, not during testing.


It's somewhat interesting that there was never a major public pressure campaign by the FBI to force Android phones to be backdoored, as there was with Apple. Maybe this was the tactic used by law enforcement (and others most likely) to unlock Android phones? Maybe Google knew about it and that accounts for their stalling on providing a fix?

Yes, that's how you start thinking after reading Yasha Levine's Surveillance Valley.

https://yashalevine.com/surveillance-valley


For what it's worth, all the famous either unlocking or remote hacking sagas (like Pegasus) have mostly been around iPhones. Which either indicates that Androids are so trivially hacked that nobody is even talking about it (sounds a bit doubtful, IMO, hopefully), or that the majority of "targets" have been using iPhones. It has certainly been the case with all the hacked journalists I remember.

Also the Android landscape is much more fragmented, so maybe a vanilla Android exploit might not work on MIUI, so hackers don't bother when it's "easier" to develop for iOS, which has the majority of juicy targets anyways?


I suspect pressure to backdoor something, or similar requests, in the US are often (not always) one-off adventures that collectively look like something larger, rather than, say, a policy where someone goes to companies and makes general requests continuously like some regulator going about his business.

The effect may end up being widespread, but the actual access and details are more uneven and look strange to us because of how spotty it is at times.

At least in the US, I suspect that law enforcement and the typical surveillance organizations may even try to cast a wide net at times, but I think they're more transactional in their intent: they look for what they need for a given person, people, or case, and less so to keep an eye on where a random citizen comes and goes.

That's not a justification for any of it, but I think it might explain why folks don't always find the 1984 they're looking for / expect in the end result.


That actually sounds plausible. It’s a pretty simple explanation that would explain all the issues in the post.


Good reason to not disclose to Google.

Instead, you should sell the exploit on the exploit dealers sites. This is easily worth $300-500k

But not now. And you have the 'privilege' of being dicked around with people googling you.


Maybe having morals is worth $230k to the author.


You can say that, but he was going to get $0 if he didn't already have internal connections at Google.

If these companies try to cheap people out of the bounties they offer, then they need to be reminded that they're not the only game in town that'll pay for exploits.


This is the correct takeaway. It's damaging to their reputation to not admit the error and cheap out like this. I would hope they at least split the bounty between the two researchers, the one who initially raised it (but didn't complete their report?) and this one that had a fully documented chain.


When choosing a phone, there should be a security metric reflecting how much time and money has been spent on security, research, and bug bounty programs. This error looks so trivial! As a long-time Google Pixel user, I honestly don't feel the company did a good job protecting me.


I dislike Google as much as the next guy. And this is a problem of monumental proportions.

But if you came here to piss on Android and praise Apple's security, let me remind you of this:

https://www.howtogeek.com/334611/huge-macos-bug-allows-root-...


More important not to come here trying to goad other users into some pointless flamewar.


Surely you can find something more recent than five years old.


Like Apple not patching macOS security holes on older versions: https://arstechnica.com/gadgets/2021/11/psa-apple-isnt-actua... ?

(This happened again with Ventura).


Exactly. And I suggest this is a much better example of egregious security practices than that 5yo article.


Software is written by people and organizations made of people.

Security issues arise precisely because of that.

Rather than focusing on bug we ought to focus on how it was handled. And, in this case it was obviously poorly addressed.

So let's focus on that, not on "iOS/MacOS is superior" because it is not (it is not free from flaws).


I don't see how the timing is relevant here.


Let’s take it to the extreme. Suppose this becomes the last bug Google exhibits in the next fifty years. Forty nine years in the future Apple makes a major security faux pas.

Do we need to remind everyone that Google made a bug fifty years ago, too?


Or we can be realistic and agree that neither Apple nor Google are immune to future bugs.


My next party trick


It almost sounds like Google had an incentive to leave the bug in.


What is up with the Pixel-specific bugs lately? One would think Google did more QA on its own products than on stock Android, but the opposite seems to be the case.


Why? Pixel is stock Android plus customization, so it needs attention to both.


As it says in the blog post (and is confirmed somewhere in the comments here) this is not pixel specific.


I wonder if this bug exists on FireOS. It's obvious that bugs like this will happen in a product from a spyware company like Google.


> When the SIM PUK was reset successfully, a .dismiss() function was called by the PUK resetting component on the “security screen stack”, causing the device to dismiss the current one and show the security screen that was “under” it in the stack

Oh, the exceptional safety of object-oriented programming!


There's nothing OOP-specific about this bug. The bug is in too-wide variable scoping, insufficient OO really.


It calls .dismiss() expecting the PUK screen, but it's a different object instead. This is the kind of thing that OOP relies on.


It has nothing to do with OOP. The design we're talking about is blind firing events at the in-focus window, and the in-focus window is being changed unexpectedly. The problem is loose coupling between the parent<->child. If the child window (PUK entry/pin reset window) knew the parent's ID and fired the dismiss event at that ID this wouldn't be possible.

Even the fix they implemented is poor: The child is STILL blind-firing a dismiss event, but they just told the lock screen to ignore dismiss events from the PUK entry window. Instead, the fix should have been to stop blind-firing events at whatever happens to be in-focus. They've added more complexity rather than fixing the poor design.
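A sketch of that targeted-dismiss idea (hypothetical API, not the actual AOSP change):

    // Each dismiss names the exact screen instance it intends to dismiss;
    // stale or misdirected events become no-ops instead of unlocking anything.
    class SecurityScreen(val token: Long)

    class ScreenStack {
        private val stack = ArrayDeque<SecurityScreen>()
        fun push(s: SecurityScreen) = stack.addLast(s)

        fun dismiss(target: SecurityScreen): Boolean {
            val top = stack.lastOrNull() ?: return false
            if (top.token != target.token) return false // stale event: ignored
            stack.removeLast()
            return true
        }
    }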


Maybe I'm misunderstanding what you refer to as blind-firing, but the change is to require a target ID from dismiss calls. That's regardless of focus then right?


Sure, but it’s a race condition that could happen in a functional language, too. FP has an analogue to a class called ADTs, and you could have the same bug using those


Again, not really: somewhere, something sends a signal, and the outcome of that signal depends on UI state. The problem is a lack of qualified signals and validated state changes. Whatever the programming model used, if the logic is «remove topmost screen without any context check», you'll end up with this issue.


This is a great example of why you should use iOS. Most android devices do not receive security updates long enough to get this update.

Since the author effectively tells you how to do it, all you need to do is find a pixel 4 or older and you’re golden.


Who knows how many bugs live in iOS as well. Security through obscurity (iOS is closed source) isn't usually considered that great a strategy.

Besides the whole "can't install user software" issue.


The number of long-running bugs which have been found in popular open source projects suggests that “many eyes make all bugs shallow” should be remembered as an amusing bit of 90s trivia like Swatch Internet Time.

What seems to matter more is how many auditors are actually digging in and how aggressively secure coding practices are applied. It certainly doesn’t seem like there’s a big difference between the two in terms of security but Android has more people using old software because their manufacturer didn’t want to ship an update.


If something isn't being actively attacked, penetrated, scoured over, delved into, fuzzed, and poked at by MULTIPLE EXPERTS IN THE FIELD, you should assume it has several complete-bypass security vulnerabilities.

“Many eyes make all bugs shallow” should always have been seen as horse shit. It has the same level of evidence as other linuxy "truisms" like "worse is better" and "everything as text or a file is best".

Heartbleed and shellshock sat right in public eye for quite some time, but it turns out nobody was watching.


The number of bugs is not the issue. The issue is that Apple supports its devices longer than all Android vendors do.

Bugs are inevitable, so the difference is support duration and speed.


Really, how long?


Bugs happen in iOS too.

> all you need to do is find a pixel 4 or older and you’re golden

It's worse: it works on most Android phones without the latest security patch, not just Google phones.


Besides my opinion that iOS is just simply better built and more secure, the biggest difference for me comes down to the UI. Maybe my mind is just wired more for iOS, but subjectively I would say that it's by far the superior user interface. Snappy as hell too.


I've used both for about as much for quite some years, both OS phones are always with me.

I'd say iPhones used to be snappier maybe 5+ years ago, but nowadays I grab an Android phone if I want something to be done fast, say perform a web search. Two exceptions: 1) Android phones are stuttery disasters for some time after booting up. No big deal, since I rarely power my phones off. 2) iPhone is usually faster to snap a photo than Android.

For anything security related, like banking etc. I use iPhone.

Of course your mileage may vary.


Love that. Mileage may vary. How long have we computer people said this?


It's also a great example of why not to use iOS. If you find a hardware flaw in an iPhone and it can't be patched, then literally everyone is affected. Even worse is if Apple decides you can no longer use a feature/app; then it's gone.

Fragmentation has its issues, but centralisation is way worse.


Everything you described is also true of android. Fact remains that even flagship android vendors support updates for much less time than apple.


Except Apple takes bug fixes and security 100x more seriously than Google does.

I remember the Android nightmares of the Camera1, Camera2, and CameraX APIs, and then the buggy Bluetooth implementation, with years passing by and no decent solution in place.

I don't remember a single big bug like that in iOS.


From what I have heard, they may fix things quickly, but their bug bounty program is not well liked and they skimp on the higher payouts.

Much more lucrative to sell your exploit on the black market.


This blog post doesn't make Google sound much better in that regard.


You've invented a quantitative measurement (100x) for something that is qualitative. This unravels discussion and turns away those who may otherwise support your assertion.



