It's a self-created problem of locked-down, user-hostile devices. Operating system and software upgrades are not locked to the hardware manufacturer in the laptop world (not yet, for the most part). It is a pipe dream of mine that someday these attacks will be used as an excuse by some government, perhaps the EU, to force manufacturers to open up devices for installation of other operating systems, and maybe to open-source their firmware.
The problem is that the "self" in that statement is not the consumer. They have no choice, yet they bear the burden. The people who made these choices benefit from it.
The manufacturers do not have the problem. They created a problem for many users as a side effect of forced obsolescence, making consumers in richer countries buy new hardware every few years.
The original sin of Android was that it was made for phone manufacturers, not end-users. And phone manufacturers were in turn subservient to cellular companies. Back then cellular companies demanded network-specific devices, with network-specific software. Android was made to fit right into this relationship.
The continuing problem with Android, and a display of Google's lost interest in Android, is that they still stick to this paradigm in 2024. Nowadays phone manufacturers ship stock Android with add-ons, and cellular companies no longer have the power to demand anything. If Apple can ship iOS 18 worldwide next Monday, why can't Google?
Google certainly contributed to it. But whether the issue for consumers was caused by hardware makers or by OS makers, it is not a self-imposed wound for the consumers: they had nothing to do with it.
Android devices are less locked down than iOS devices. The problem is that Google used to lack the ability to require long-term support from device OEMs, and now that it has the market power to demand it, it seems reluctant to prioritize that in license agreements.
Then there are markets, like China, where Google Mobile Services is not on many phones, so Google has no leverage at all regarding updates.
There is nothing in principle requiring Android phones be locked down, but it's a de facto reality that the manufacturers of almost all phones have locked them down, and you have to do research beforehand to even know whether you can do something as small as unlock the bootloader or root the device. Why are the OS and the phone model so tightly integrated and locked together, when in the PC world you can generally install any OS on any computer? That's what I was saying.
Which is to say, I think you are speaking in terms of what you can do within Android's playground itself; it's certainly much more open than iOS in that regard. But when it comes to changing the playground itself, i.e. installing whatever OS you want on the hardware or upgrading the OS on the phone, then suddenly most phones aren't any better than iPhones. Some are slightly better in that you can at least unlock the bootloader or root them, but I don't know whether much progress has been made beyond that in reverse engineering them far enough to install anything you want. That was the point I was speaking to.
> There is nothing in principle requiring Android phones be locked down.
There is; it's called the Android Compatibility Commitment. If a phone manufacturer wants the Google Play Store, their phones must meet specific security requirements. That includes measures to lock down the phone to prevent piracy of the store, which for most manufacturers is achieved through a locked bootloader.
> Why are the OS and the phone model so tightly integrated and locked together, when in the PC world you can generally install any OS on any computer?
Qualcomm is the reason. Qualcomm wants OEMs to buy new chips every year, and for that to happen, consumers have to buy new phones every year. To upgrade Android (across kernel versions), OEMs need Qualcomm to provide updated drivers, which Qualcomm has been reluctant to do, because their sales would be undercut by chips they sold years ago. Android phones are closer to Macs than PCs, as there's one hardware maker who determines which models become obsolete, and when, based on the software they choose to update (or not).
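To make that concrete with a rough sketch (the module name, kernel versions, and messages below are made up for illustration): a binary driver carries a vermagic string tied to the exact kernel it was built against, and a newer kernel will simply refuse to load it, so every kernel bump needs the vendor to rebuild and re-ship the blob.

    # hypothetical vendor GPU blob, built against the phone's original kernel
    $ modinfo vendor_gpu.ko | grep vermagic
    vermagic:       4.14.190-perf SMP preempt mod_unload aarch64

    # after an OS upgrade that moves to a newer kernel...
    $ uname -r
    5.4.233-android12

    # ...the old binary module is rejected outright
    $ insmod vendor_gpu.ko
    insmod: ERROR: could not insert module vendor_gpu.ko: Invalid module format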
That's something that I've found really notable about Linux, compared to Windows. Windows drivers seem to work across versions, whereas binary Linux drivers are typically specific to a kernel minor release.
Does Microsoft just maintain broad kernel backward compatibility, or are driver authors doing more work to support more Windows versions? Is it a fundamental architectural difference in how each OS implements its kernel APIs?
And that's my point: you should not need OEM support for a pure software issue like operating system upgrades. I do not ask Dell or Lenovo before running 'apt update && apt upgrade' on my PC. I do not need to ask Dell or Lenovo before plugging in a flash drive with the ISO of my favorite OS, or before upgrading to its next version, and so on. These are things that, if the device weren't locked down, shouldn't have required the OEM's direct involvement in the first place.
Laptops work because the BIOS provides a universal abstraction layer and because support is upstreamed in the Linux kernel. Supporting phones for longer would require them to also upstream support.
It's true that a lack of standardization on phones makes things a bit harder, but as you said, the key point is that they need to open up the firmware. If manufacturers want to keep it all locked down, then they deserve and should expect these types of attacks all the time. If they don't want these to happen, they should make their firmware available and upstreamed, or at least not put up roadblocks to reverse engineering the device by means of cryptographic locks or otherwise.
It is the reason why I have Windows tablets, not Android ones. I know there are OS updates. And once they are dead, I have options for Linux tablets now.
Fragmentation is not the problem. The problem is the inability to change the OS or firmware. If control over upgrading or changing the OS didn't rest solely with the maker, there wouldn't have been an issue in the first place. I bet, for example, that many of these bugs are due to the much older Linux kernels in use on phones. Again, something easily solved by making the OS easily changeable, and by not putting up cryptographic and other roadblocks to reverse engineering, if they don't even want to open-source the firmware.

At the end of the day these are software bugs, not hardware bugs, so the solution, as with any software, is to be able to push fixes. It does not matter if there are a million different Android phone platforms; a fix to some bug in the Linux kernel, for example, should work the same on all of them. More importantly, as a piece of pure software, this should be something anyone working on Linux or Android can fix, as happens in the saner world of laptop, server, etc. hardware. If we weren't locked to the manufacturer for all operating system and software support, this wouldn't be a problem in the first place. Imagine how ridiculous it would be if we had a Dell XPS 13 OS, a Lenovo ThinkPad T14 OS, etc. that went out of support the moment the model was discontinued, instead of the sane and normal situation where your Debian or whatever OS continues receiving software updates as long as Debian wants to support it.
By "alternate" I don't necessarily mean some obscure os, but any os in general , in fact I rather specifically had in mind Android or Linux. By alternate here I meant the fact of being able to install any you want instead of being stuck with whatever ancient Android version and Linux kernel the original came with. Ie if your phone came with an ancient Android, you should be able to without OEM support install a newer Android or a recent Linux or anything else you'd like.
And honestly, I doubt 99.9% of these attacks are anything hardware-specific; they're more likely generic software bugs like buffer overflows in the kernel, so hardware is in any case a moot point.
Updates for Android continue to be a huge problem.
Android OEMs have historically been terrible at releasing timely updates (it can take months), at releasing updates at all (cheap Android phones in particular get EOLed long before an admittedly far more expensive iPhone would), and even when updates exist, they aren't typically pushed to users in the same way.
One big problem with Android is the way driver updates work. It's a nontrivial process to add hardware support for a later Android version because of Android's Linux roots and the lack of a stable driver ABI.
I think back to how Windows has changed over the last two decades. Windows upgrades were once all paid, and people would hang on to old versions of Windows forever. Microsoft eventually realized this was a problem for them, so paid upgrades aren't what they once were.
Additionally, driver instability used to be a massive problem on Windows. Microsoft to their credit spent a lot of effort improving their driver situation to isolate drivers and have a stable ABI. Windows is way more stable than it was 20 years ago.
Even macOS has adopted user-space drivers (DriverKit).
By the way, the above reasons are why some at Google have pushed Fuchsia as essentially an Android ecosystem and architecture reset. It's been close to 10 years now. I don't see this going anywhere, despite billions being poured into it at this point.
> Windows is way more stable than it was 20 years ago.
The problem with Windows today is more that it wants to opaquely update itself all the time, and the user gets undesired mandatory "updates" such as ads in menus and sending "telemetry" home (i.e. user activity data for MS's machine-learning ambitions). But of course the most important thing is to keep fucking web browsers up to date with laughable and undesired CSS or WASM features (including subsequent exploit/fingerprinting fixes) all the time, even more often than actual content, or what's left of it anyway on the extant web.
I am using an Android phone which gets timely updates. However, the vendor does not seem to do enough testing for major versions and has twice ended up with at least one significant bug at release. I could do with less timely updates that come with fewer bugs.
> Updates for Android continue to be a huge problem.
As is being pointed out elsewhere, this isn't a vulnerability in an "Android" product. These are TVs running vendor-maintained AOSP builds. They get updates when and how the vendor decides to do it. It's not related to the (fairly reasonable, though often spun) arguments about updates in the phone licensee ecosystem.
This is a distinction without a difference. It's not like you can take your Android hardware and move to a different software supplier. You're stuck with whatever your OEM chooses to do or not do.
It's also something you don't necessarily have knowledge of when you purchase a phone.
It's one reason why people (myself included) prefer Apple. You know you're going to get 4-5+ years of updates.
No? For a licensed Android phone there's a clean split between OEM and OS updates, a clear declaration of the support period, and OS-managed tracking of updates with clear visibility for the customer. I get that there are constant platform flame wars about whether this is acceptable or inferior to Apple's offering, yada yada yada.
But the point is that none of that is relevant here, because these aren't phones and aren't running a licensed "Android" variant. They're just TVs running vendor-custom firmware that happens to be based on AOSP, and clearly can't be expected to conform to any update regime except whatever the integrator put together.
Basically, the story here is saying "I hacked product A" and you're trying to have an argument like "That's because this unrelated product B is bad". It's a non-sequitur.
I'm not sure I understand how having a second OS would hedge against people not wanting to use their first OS, but Fuchsia being some sort of backup plan would at least explain why it never really got used. My guess was always that it just never had full support as a replacement due to turf wars with the people in charge of Android-related things.
> such devices often run on outdated Android versions, which have unpatched vulnerabilities and are no longer supported with updates.
Many of them NEVER received a single update ever. There are so many shady companies producing TV boxes with no plan to ever provide any updates.
Unless one of the larger brands makes such a device, I don't see any reason to recommend anything but the Chromecast (or whatever Google calls it now), or a Roku or an Apple TV, if you swing that way.
99% of these devices are purchased to view pirated content. You think the guy selling boxes off Craigslist should be providing support? The company who manufactured the computer?
Not really; Google specifically doesn't want you to run Chromecast receivers on commodity hardware. There are some hacks and reverse-engineering attempts, but they are not very reliable.
Quite a few of them actually end up configured to prefer SD boot over internal flash and/or have easily accessible buttons or shortable pads to trigger bootrom recovery modes.
Which at least stops them from being automatically consigned to e-waste.
Although customising a LibreELEC image for the dozens of different models of TV box isn't great. It typically involves sorting out the dts for the device and remapping the remote.
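Roughly, the workflow looks like this (all paths and names below are made up; the real effort is working out what to change for your particular box): decompile the vendor device tree, patch it, and load a custom keymap for the IR remote.

    # decompile the vendor dtb so it can be edited, then rebuild it
    dtc -I dtb -O dts -o mybox.dts mybox.dtb
    #   ...edit mybox.dts: memory size, ethernet PHY, IR receiver node, etc...
    dtc -I dts -O dtb -o mybox.dtb mybox.dts

    # remap the remote: watch the raw scancodes, then load a custom keymap
    ir-keytable -t
    ir-keytable -c -w /storage/.config/rc_keymaps/mybox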
Some of my hard requirements for a media device are that it must not share any of my personal information with any third party and it must fully cache the full-resolution and complete media content prior to beginning playback. If it's going to be connected to the Internet it must receive regular security updates for anything that's not written in a memory- and type-safe language like Go or Rust.
While Go and Rust aren't necessarily magic pixie-dust that can account for all types of security vulnerabilities, if I'm going to be faced with the possibility of some project being abandoned at some point for the next new shiny thing that everyone would rather work on, I'd at least like to give it a fighting chance of remaining secure for some time after abandonment without any updates. Ideally it would be a Rust userspace media management package running on Debian Stable getting unattended upgrades every night.
Since nothing like that exists I've recently decided to give CoreELEC/Kodi a try on an ODROID-N2+, albeit disconnected from any network. I was surprised at how seamless and integrated everything was.
The remote control for my television "just worked" with it out of the box thanks to HDMI CEC support. Arrow buttons, play/pause, back, etc. all did just what I expected them to do. It's a marked improvement from the last time I built a custom media box, which I had running MythTV on Gentoo, when I needed to jump through hoops to set up an IR blaster. And you can't argue with a 12v/2a power supply.
For now I'm keeping it off my home network and am "sneaker-netting" content on a USB drive between my trusted devices and the ODROID. When I get tired of doing that I might add some firewall rules to my router to only allow it to talk to a locked-down VM doing nothing but hosting a read-only file share. But some day I hope to look forward to building a similar form-factor box that has all the media gadgets and gizmos with a Rust userspace that respects my privacy and auto-updated Debian Stable so I can actually connect it to the Internet.
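For the firewall-rules option, something like this on the router would probably do it (a minimal nftables sketch; the addresses and port are hypothetical): let the box reach the file-share VM on the SMB port, allow the replies back, and drop everything else it originates or receives.

    # media box at 192.168.1.50, read-only file-share VM at 192.168.1.60 (made-up addresses)
    nft add table inet media
    nft add chain inet media fwd '{ type filter hook forward priority 0 ; policy accept ; }'
    nft add rule inet media fwd ip saddr 192.168.1.50 ip daddr 192.168.1.60 tcp dport 445 accept
    nft add rule inet media fwd ip daddr 192.168.1.50 ct state established,related accept
    nft add rule inet media fwd ip saddr 192.168.1.50 drop
    nft add rule inet media fwd ip daddr 192.168.1.50 drop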
I run my own custom-built Debian-based router in a VM, and I once architected an encapsulated networking infrastructure at a FAANG, so I know perfectly well how to get the device securely connected to the Internet while isolating it from the rest of my network. While I may trust the CoreELEC image I downloaded and audited at one point in time, I won't trust, in perpetuity, the people and/or org that control the servers it connects to and pulls updates from. So long as it is stable and has all the features I want, it won't ever be talking to anything on the Internet and will be left alone. If a feature or bug ever comes along that I feel I absolutely must have an update for, I'll pull an updated image, audit it, and flash it to the eMMC device from another trusted host. But so far I don't imagine that will ever be necessary.
There’s always one thread where we are discussing how everything needs to auto-update for security/stability forever, and another thread (currently crowdstrike) where that approach has caused the problem we wanted to avoid. Would be nice to see more discussion of this basic tension in the abstract since $current_issue is often just a distraction.
Auto updates also have a reputation for harming the user at least as often as helping (removing features, adding ads, whatever) and so trust in that is declining while the need for decent security (smart cars/homes) is increasing. Not sure what to conclude from this except that we need more focus on secure-by-design systems and maybe immutability guarantees rather than autoupdates, app stores, and plugin/extension frameworks but these things are sometimes impractical fundamentally and sometimes just inconvenient for surveillance capitalism.
Sure, but again the specifics are a distraction. The problem with pushing any release onto users who have no ability to opt out is that those users never have any guarantee that vendors tested things, or that the vendor is even hoping to help rather than hurt users.
It's pretty safe to assume that most companies spend money trying to make money, which usually involves exfiltrating my data, turning off things I need but they don't want to support, general rent-seeking, and ad injection.
Trusting any small manufacturer of anything to spend time/money on fixing problems with security or quality control is a hilariously naive idea these days, when crowdstrike and Boeing are showing that even big companies don’t care. We all know the security update is enhanced spyware, planned obsolescence / a slow push to force me to buy a new device, or something else that’s going to make things worse.
What's going to be even more fun is when cars get hacked, given that there are 100+ (200+?) car makers, especially with EVs (the WSJ claimed 140+ makers in China; Bloomberg claimed 500+). I'm not dissing Chinese makers. I'm only sure that, as with everything, there's an exponential curve in how seriously companies take security. I'm guessing that, of the car makers out there, Tesla and Rivian are near the top, since they are new and have people with security experience? I'd expect traditional car makers (Ford, Chevy, Chrysler, Toyota, Honda, Nissan) to all be pretty mediocre. And then I'd expect all the tiny companies to be no different than the tiny companies that made the TVs above.
There's actually a strange tradeoff with automotive companies. Security requirements are a big forcing function driving vehicle electronics architectures to modernize and adopt similar security practices to consumer electronics, away from the traditionally developed and profoundly insecure embedded systems of yesteryear. That brings a lot of security features with secure communications, secure boot, signed updates, etc. It also means that all the exploit development that's been done on consumer electronics becomes applicable to vehicle systems.
The teams maintaining all of this usually have a LOC/headcount ratio approaching 1 million too. It's a very different threat model than the old days when every manufacturer had their own systems and trying to craft an exploit required individualized work for all of them.
Even if the companies took security seriously (which they don't since it rarely affects their bottom line, even in the event of a breach), it's one of the hardest topics in IT, and there are very few people actually competent in it.
With the explosion in the number of those boxes, we added more attack surface, but the number of good security people didn't increase.
That makes me think: will we have car theft in the future where someone hacks your self-driving car and it drives off on its own in the night? The next day you wake up to find that your car has left you for someone else?
I think this is one of the reasons (among many) why self-driving cars will largely be robotaxis.
I'd trust Google to maintain security in a fleet of robotaxis, where cars can be brought in, modified, or replaced relatively quickly, over Tesla trying to convince individual owners to do the same...
Existing people are, but in a world where your teenager has been taking self-driving robotaxis to their friends' houses since the age of 13, I'm not sure they'll magically want a car when they reach the legal driving age; hell, there are people now who Uber everywhere rather than learning to drive.
Beyond that, it seems like the nature of self-driving might be one of gradual, incremental improvement, where in order to actually develop it you more or less have to run a fleet of robotaxis in different places with different climates and different road conditions.
Finally, the people most likely to deliver it are Google (Waymo), and I think they have zero desire to sell cars to consumers but are literally running a taxi service as we speak.
Self-driving cars aren't fully autonomous yet. They occasionally require human intervention to make decisions, which I would guess makes it difficult to steal these cars without being detected. They're constantly cloud-connected, so the operator knows where the car is at all times. I imagine these factors may deter theft of self-driving cars for a while longer.
Repossessing a self driving car, on the other hand...
The plan, when having a self-driving car help you steal it, would not be to have it self-drive from the owner's home in, say, San Francisco, to a chop shop in Tijuana where your gang awaits, thus requiring it to make a long trip without human intervention.
It would be to have it self-drive from the owner's home to someplace nearby not associated with the thieves, to get it out of sight of the owner's security cameras and whatever other cameras in the neighborhood might be watching, where the gang can disable the cloud connection and then transport it by truck, or drive it in manual mode, to their chop shop.
Yeah, but if something's so glitchy that people have seen the speedometer go blank on camera, how is that supposed to translate to code quality or software security?
That's not the impression I've gotten from Tesla's software. From the flash wear issue to the numerous vehicle opening issues, to the entire mess of their UDS implementation, there's quite a lot to complain about compared to a hypothetical "best practices" manufacturer, as opposed to the dumpster fire software of other OEMs.
I don't know anything about their practices, but I once bumped into a Reddit post by someone claiming to be ex-Tesla and describing exceptionally bad practices. There's no way to verify the claims; they could just as well be total misinformation. But I found the link to that Reddit post:
None of the issues I mentioned were discovered via API. The flash wear issue was discovered by bricked cars. Security researchers are pretty good at publishing about remote unlock mechanisms for all manufacturers, though Tesla's bounty program has helped. The UDS implementation issues are not user-facing and were discovered by reverse engineering. Ford, as an interesting point of comparison, published a competent (if incomplete) open-source UDS server.
I wanted to say thank you, searching for that phrase[1] produced a ton of fascinating reading, including https://mpese.com/publication/uds_fuzzing/SAE_UDS_Paper.pdf "Comparing Open-Source UDS Implementations Through Fuzz Testing" from Apr 2024 which itself linked to a lot of GH repos of the tools they used
I didn't find the Ford one specifically, but I also didn't go surfing through the many pages of results
Every company, car or otherwise, has security grossly inadequate for the current and especially the near future threat landscape. Everybody knows everything is easily hacked and companies people tout as exemplars like Apple and Microsoft just keep piling on the abject failures to prove the common wisdom true like it is going out of style. Decades of claimed success followed shortly after by failure after failure should really pound in the lesson: "Fool me once, shame on you. Fool me ten thousand times, shame on me."
You need systems secure against teams of commercially motivated attackers with $10M+ budgets and tens of full-time professionals working for years. Nobody in commercial IT would even dare claim they could stop such attacks, even though they are regular occurrences these days. If they do not even dare to say they can do it, why on Earth would anybody believe those vendors have, what, accidentally made things better than they think?
Ranking companies by security is like ranking the relative resistance of individual sheets of toilet paper to bullets. Sure, maybe the single ply toilet paper is not as good as the two ply, but neither of them provide objectively useful degrees of protection. And that is just talking about the bare minimum to protect against current threats.
If you can hack every Honda at rush hour to turn off the brakes, slam the accelerator, and drive slightly into oncoming traffic, how many people do you think would die in the next minute before anything could be done or people could be told to stop driving their cars? 1K? 10K? 100K? 1M? Does Honda survive killing more people than died in most wars? If a criminal organization demonstrates that it can and will do it, how much would Honda pay in extortion to avoid being put out of existence and having its executives jailed? How much security is adequate to avoid the deaths of thousands to millions? Certainly orders of magnitude more than the $10M attacks that steamroll the "best commercial IT security" available today. Any reasonable number is vastly in excess of what these companies can secure today.
20 years ago, the hackers were 18 year olds demanding 300 dollars from grandmas. 10 years ago, the hackers were 28 year olds founding businesses demanding 10 thousand dollars from small businesses. Now they are 38 year olds managing teams demanding millions from billion dollar companies. Soon we will see them demanding billions if the trends continue for 5-10 more years. The software security doomsayers were right, just early. Even Mark Zuckerberg took a decade with huge piles of VC funds to get to a multi-billion dollar valuation; you have to forgive the hacking teenagers who had to bootstrap their criminal enterprises for taking so long.
Given the nature of some (most?) of the generic TV boxes running random AOSP builds, I would not be at all surprised if these shipped with some basic C&C malware already installed.
This was apparently found by someone noticing some changed files, so they didn't ship with void, but it wouldn't have been hard to push it out to pre-compromised boxes.
Not sure why this is downvoted. It is well documented that most cheap Android TV boxes come with more or less the same trojan preinstalled. Is it due to malice on the manufacturer's part, or to most of them seemingly taking the same pre-infected Android base? Who knows.
Linus Tech Tips made a full video on the subject. They seem to lean to the second option.
So distressing to see this so far down. In fact "Android TV" is an OS product, and not related to this exploit. These are televisions that simply run "Android" as in an open source AOSP build unrelated to a Google product certification.
These aren't monitors; they're little sticks with just an HDMI plug and micro-USB power that you buy at the bodega for $20. Android TV is an operating system name (often fake: some very outdated AOSP build, preinstalled with bootleg soccer streaming apps and some skins/hacks/tweaks to make it look like the newest Android TV OS).
What makes you assume the vector is an incoming connection? Routers and firewalls won't protect a device from a user lured into installing malware with promises of "free Netflix, completely legal" running off some pirate mirror, or simply wrapping stolen credentials (note how people trying to break into your computers cannot always be trusted for legal advice). Nor will they protect against simply luring users onto some malicious website (or just getting them to see a malicious ad), in case there's the juicy combination of a media decoder vulnerability with some privilege escalation reachable from the decoder.
The sheer number of compromised devices does. Sure, you can trap users with malicious streams and malicious apps. But I find it surprising that those actions alone can catch so many boxes.
I'm not aware of even a single consumer-marketed router that ships without a firewall (often ip/nftables) configured to drop all unsolicited incoming packets by default. If an attacker can create outbound connections from inside the network then they can get around this of course, but you have to already be inside.
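For the curious, that stock policy usually amounts to just a couple of rules along these lines (illustrative only, not taken from any particular firmware; 'wan0' is a made-up interface name): replies to connections opened from inside are allowed back in, and everything else arriving on the WAN side is dropped.

    # allow return traffic for connections started from the LAN, drop unsolicited packets
    iptables -A FORWARD -i wan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i wan0 -j DROP
    iptables -A INPUT   -i wan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT   -i wan0 -j DROP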
Ah, the new economic divide.
Most "real people" also have phones which aren't receiving updates for a few years by now.
In South America, the median Android version is 8.
And phones are not optional, as most countries have already jumped into both digital government and digital money transfers.