CopperheadOS: A hardened open-source operating system based on Android (copperhead.co)
190 points by mbakke on April 14, 2016 | 103 comments



I'd like to see it let the user log, and optionally block, connection attempts based on IP addresses or DNS names, with both whitelist and blacklist support.

And track/log all of them per APK.


Something like Little Snitch for mobile? I've been running Firewall IP[0] on my jailbroken iOS device for a while and it works great. It's incredible to see the amount of needless connections apps try to spawn.

[0] http://r-rill.net/FirewalliP7/FiPDepiction.html


Whitelists and blacklists are useless security theatre. Any non-blacklisted IP could proxy to a blacklisted IP, and whitelisting just means you have to jump through hoops to get your work done, which users will always do.


> Any non-blacklisted IP could proxy to a blacklisted IP

There always are ways to defeat any security; the goal is to make it more difficult and costly for the attacker, and blacklists do that.

> whitelisting just means you have to jump through hoops to get your work done, which users will *always* do.

I agree that's true for most end-users, but the HN crowd and other power users could make good use of it.


> And track/log all of them per APK.

Don't run programs you can't trust.


You can't trust any program. And even if you say you do, trust-but-verify is a much better strategy.


No point in a smartphone if you were to follow that mindset


good luck with THAT mindset.

edit: I've never touched iOS (except for my employer, but it's their data), and my Android phones all have my own kernel and pf tables limiting every app's network access, especially to the local network!


Copperhead seems designed to protect against malicious attackers, but does it protect confidentiality against commercial tracking (another kind of attack)?

I'll add: I haven't come across another fork of Android that focuses on security so I'm rooting for these guys.


Privacy enhancements are definitely within the scope of the project. Most of the current features are exploit mitigations, though. If you look through https://copperhead.co/android/docs/technical_overview you'll see that there are a few privacy features already, and many more in progress. They won't be listed there until they're actually completed.


Are you connected to the project somehow?


Yes, I'm the (lead) developer of the OS.


Great! Thanks for your hard work, and thanks for participating in the discussion. I've been looking forward to Copperhead's first release for a while.


Partly in answer to my own question, they don't plan to disable Android's connections to Google.

https://github.com/copperhead/bugtracker/issues/184

https://github.com/copperhead/bugtracker/issues/194

EDIT: To avoid any possible confusion, Google Apps / services aren't included in Copperhead; I'm talking about other connections to Google.


That's not what the response to #184 means. #194 wouldn't be open if removing that connection wasn't planned.

CopperheadOS is not going to outright prevent connections to Google. That doesn't mean the OS is going to have Google services.

The only known case where AOSP connects to Google is an HTTP GET to test if internet access is available. It could just as easily use something like example.com but Google's domain is known to have effectively 100% uptime. All that switching it would accomplish is pinging a CloudFlare/OpenShift server instead of Google. And CloudFlare might break it for users behind a VPN... so in the end, what would that really accomplish?


> CopperheadOS is not going to outright prevent connections to Google. That doesn't mean the OS is going to have Google services.

I didn't mean to imply that; I'll clarify.

> The only known case where AOSP connects to Google is an HTTP GET to test if internet access is available. ... in the end, what would that really accomplish?

The GET tells Google and the user's ISP, and probably a few others, that someone at that IP address is using Android, plus whatever is in the GET headers, and that they just booted/woke their phone (which also locates the user at home or elsewhere, in some cases). It shouldn't be hard to identify people by IP. EDIT: In these days of one mass surveillance overreach after another, by government and business, it doesn't seem rational to assume these companies aren't monitoring users and collecting all the data they can.

> The only known case where AOSP connects to Google ...

When I read this I think, 'Copperhead's priority isn't investigating whether there are other cases.' That is Copperhead's choice; I have no criticism of them.

Personally, I very much would like just one OS that prioritizes privacy (which is attacked far more often than via the exploits Copperhead focuses on) and gives me full end-user control over the information my device sends to others. EDIT: Again, I appreciate Copperhead's efforts and free OS; also, I realize they can't implement everything at once.


It does this check when you connect to a new network and it repeats every so often to make sure the connection still works. It's how Android notifies you if there's a network connection but no internet access.

It's not a browser making the request so there's not much information in these requests. It's just a GET request with an unused result. It only checks to see if it succeeds. Every Android device does this, so it barely leaks any information. If it was changed to a CopperheadOS-specific URL, it would actually be leaking more information to networks.
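
For the curious, here's a rough sketch in C of the kind of check being described: a plain HTTP GET whose body is thrown away, where only the status code matters. The clients3.google.com/generate_204 endpoint is the one AOSP used around this time, and a 204 response means unrestricted access; treat the exact URL and code as illustrative, not as Copperhead's implementation.

    /* connectivity_check.c -- rough sketch of the check described above: an HTTP
     * GET whose response body is ignored; only the status code matters.
     * Build: cc -o connectivity_check connectivity_check.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("clients3.google.com", "80", &hints, &res) != 0)
            return 1;                       /* DNS failed: treat as offline */

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
            return 1;                       /* no route: treat as offline */

        const char *req = "GET /generate_204 HTTP/1.1\r\n"
                          "Host: clients3.google.com\r\n"
                          "Connection: close\r\n\r\n";
        send(fd, req, strlen(req), 0);

        char buf[64] = {0};
        recv(fd, buf, sizeof(buf) - 1, 0);  /* enough for the status line */
        close(fd);
        freeaddrinfo(res);

        /* 204 means unrestricted access; anything else suggests a captive portal. */
        puts(strstr(buf, " 204 ") ? "online" : "captive portal or offline");
        return 0;
    }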

I don't think there are other connections to Google in the base system but that doesn't extend to the user-facing apps like Chromium. I know for a fact that it doesn't make any other connections in normal usage, but there are a lot of edge cases.

We could make the internet access checks optional, but what about update checks? Those are leaking strictly more information (a phone connecting to builds.copperhead.co runs CopperheadOS and that can be seen without access to the HTTPS data) and it tells us which device is being used since it has to ask for the available updates. We don't really know how many people use CopperheadOS, but it would be possible to make a solid estimate from the update checks. Most people won't change the default of 1 check per day, so the number of checks per day in total is the approximate number of users, and the server has the IPs they connected from. It doesn't log anything itself but CloudFlare could.

F-Droid also does update checks itself, so any F-Droid repositories that are enabled end up with similar information.

I am not really sure how this could be improved. You can use Tor... but in some ways that makes the situation worse.

An option to disable the internet access checks is possible, but I don't know what it would really accomplish. I don't want to make changes without a clear threat model in mind. So I'm not inclined to touch stuff like the internet connectivity check unless there are clear benefits rather than it just feeling right to people.


Thanks for addressing my concerns.

EDIT:

Those are all excellent points. Where there are tradeoffs, perhaps you could put some settings in your security slider UI.

An optional software firewall that requires outgoing connections to be whitelisted would be great for my purposes, but everyone knows how painful those can be.

Regarding connections, I've come across the following potential issues; I haven't looked into them but they give me the impression that locking down Android's network activity is too complex even for technical users, and that only a carefully secured OS will solve the problem (a big reason I've looked forward to Copperhead):

* Some connections are made during bootup so an effective firewall somehow has to load early, or at least first in the network stack.

* The address of the DNS server, Google's, is hard-coded in an in-kernel DNS resolver. Among other issues, it makes it hard to choose a different DNS server or to identify the application doing the lookup.

* Some other kernel connection activity is hard to stop even with a firewall [1][2].

> I don't want to make changes without a clear threat model in mind.

Confidentiality is part of security, and exploits of confidentiality by businesses are almost certainly the most common security exploits.

People tend to overlook them because usually they are technically legal and currently they are a sort of technological norm -- though remember that lead and asbestos were once norms. Certainly users should have the option; they should control their data.

----

[1] http://forum.xda-developers.com/showpost.php?s=12c116f17804f...

[2] http://forum.xda-developers.com/showpost.php?s=a5c6cb3da0cb6...


"Protection from zero-days" -- how can you make a claim like this?


I'm not affiliated with Copperhead at all, but I am familiar with the sorts of techniques they are using. Exploit mitigations, such as Address Space Layout Randomization (ASLR), Control-Flow Integrity, fine-grained randomization, etc., provide a layer of hardening that makes exploitation of a source-code vulnerability harder, or even impossible, on the protected device. The bug (zero-day) still exists; it's just not as exploitable to do bad stuff.
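
To make the ASLR part concrete, here's a minimal sketch (not Copperhead code; the build flags are just typical gcc/clang options for a position-independent executable). Run it twice: with ASLR enabled, the printed addresses differ between runs, so an exploit can't rely on hard-coded code, stack, or heap addresses.

    /* aslr_demo.c -- illustrative only.
     * Build as a position-independent executable: cc -fPIE -pie -o aslr_demo aslr_demo.c
     * With ASLR enabled, the addresses below change on every run. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int stack_var = 0;
        void *heap_ptr = malloc(16);

        printf("code  (main):      %p\n", (void *)main);
        printf("stack (local var): %p\n", (void *)&stack_var);
        printf("heap  (malloc):    %p\n", heap_ptr);

        free(heap_ptr);
        return 0;
    }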



https://copperhead.co/android/docs/technical_overview covers much more and is mostly up-to-date.


ASLR is already a part of pretty much every current operating system (save FreeBSD-RELEASE).


Not all ASLR implementations are equal, e.g. PaX's ASLR vs. standard Linux KASLR.


Or Android's almost useless 32-bit ASLR (even on 64-bit platforms) for that matter:

https://googleprojectzero.blogspot.com/2015/09/stagefrighten...

https://copperhead.co/blog/2015/05/11/aslr-android-zygote


I've heard rumors that the new ASLR in Android N is actually worse than the current implementation. I don't have anything online to link to, unfortunately.


ASLR is a band-aid. If you need it, your system is already insecure. It's just that the attacker may need to crash your system a few times before they get in.


64 bit ASLR is not a bandaid. There are definitely ASLR approaches that don't have enough entropy, but that doesn't mean ASLR as a whole is unworkable.


All systems need it. All systems are already insecure. All desktop systems already implement it. This has been the situation for years now.


No, they need tech that either contains the attack in its own partition or prevents it entirely by language/compiler-level action on the target. Both exist in academia and commercial sector with varying capabilities, prices, maturity levels, and so on. Most such things are rejected in favor of band-aids like ASLR.

And the systems continue to get hacked through the very holes covered in bandaids. As he said, if you're using a bandaid, you're covering up something inherently broken.


That's not a practical solution. Sure - you could write super-secure (Ada-style?) code in a verified environment (?), running on a verified kernel (seL4?), on secure hardware (got any ideas how to solve Rowhammer?). Realistically though - nobody does that (in a product we can buy). Producing any application in that kind of environment would be too expensive and not possible for most companies. We don't even have secure hardware available. Academia will experiment with that. Some industries will care enough to apply it.

But in mass-produced software/hardware? Realistically, my choice for a productive desktop is OSX/Win/Lin. We can talk about cool, perfect solutions for a very long time. In the meantime I'm making sure my apps are running with ASLR. I hope you're not actually advising people not to use it just because some ideal solution might be possible on the horizon, one that doesn't run any apps they need?


Whoa there. There's an entire spectrum of options in between "ASLR" and "formally verified everything" that defend against memory safety related RCE. Such as, for instance, writing in a memory-safe, high-level language where reasonable (which is in fact not only practical, it's what Android does).

(That's not to say ASLR isn't great as a way to harden the C and C++ code at the core levels of the system, of course. Daniel Micay's work here is very solid.)


Yeah, Android pushes memory safety quite hard. Most code in the ecosystem is written in memory-safe languages (Java and friends). That still leaves the entire kernel and lots of performance-critical or legacy code. Languages like Rust could reduce the amount of memory-unsafe code on the platform, but there's still going to be a lot left over even if it's mostly contained in a language runtime and the low-level libraries.

Despite Android's usage of Java, most vulnerabilities are memory corruption bugs. It makes sense to focus on those since it's low-hanging fruit. High-level security/privacy changes involve much more subjective changes and usually have a perceptible impact on users. Hardening the base system is invisible, and that's a good thing.

Android already does an amazing job at the access control level via very locked down SELinux policies. There's a lot of work to do there, but it involves making changes that are going to make some Android developers/users unhappy. For example, `hidepid=2` made it into Android N from CopperheadOS and there's going to be fallout from that: https://code.google.com/p/android/issues/detail?id=205565. I think Google will end up shipping it, but it's not a sure thing.
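
For anyone unfamiliar with hidepid=2: it's a procfs mount option that stops processes from seeing other users' /proc/<pid> entries. A minimal sketch of the mechanism, assuming root/CAP_SYS_ADMIN on a regular Linux box (equivalent to `mount -o remount,hidepid=2 /proc`; this is illustrative, not how Android wires it up):

    /* hidepid_demo.c -- sketch of what the hidepid=2 option does: remount /proc
     * so a process can only see /proc/<pid> entries belonging to its own user.
     * Build: cc -o hidepid_demo hidepid_demo.c (run as root) */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* Equivalent to: mount -o remount,hidepid=2 /proc */
        if (mount("proc", "/proc", "proc", MS_REMOUNT, "hidepid=2") != 0) {
            perror("remount /proc");
            return 1;
        }
        puts("/proc remounted with hidepid=2; other users' processes are now hidden");
        return 0;
    }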


I was referring to "prevents it entirely by language/compiler-level action on the target". I understand there's a whole spectrum in prevention and mitigation. But "prevents it entirely" is an extreme, just as "formally verified everything" is an extreme.

I'm just ticked off by people lately repeating that ASLR is a bandaid, like it's a bad thing. It's a bandaid, but it can still crash-instead-of-own your app/system with 99.XX% probability. Why complain about it being accepted rather than say: "great, we're nowhere near secure, but at least we have something that works most of the time, now we can work on better protection". Safe runtimes can fail too (CVE-2015-3837 / serialization bug).

Basically, if anyone reads threads like this and thinks "it's a bandaid, it's not needed / it doesn't protect me", then we're all worse off.


It's a bandaid because it covers up the root problems instead of fixing them. Getting something through SoftBound+CETS will stop almost all the memory errors because it tries to fix the cause. Same with pcwalton's Rust. Then there are solutions that leave all the problems in place while trying to counter the results of an exploit in a "maybe it will work" way, and those are often bypassed. World of difference.

Note that using bandaids is A Good Thing if you have something broken already. It's just best to avoid what causes the breaks where possible and look for prevention measures. Our industry loves bandaids while systematically ignoring stuff that negates the need for them. So, I call out that problem, but that doesn't mean someone shouldn't use ASLR if it's the best bandaid they have.


I'm talking about things as simple as Code-Pointer Integrity, common tools recoded in a safer language, or app-level sandboxing with or without microkernels. People rarely use strong stuff even if it's a straightforward download, recompile, or configuration. Hell, most won't use protected messaging when it's as easy as Signal. It's a demand-driven problem, largely about convenience and access to insecure apps.

Btw, solutions like OKL4 exist already and are fielded with Android and other OS support. Android hardening tech also exists. Cryptophones also exist. This isn't perfect future tech so much as existing tech that companies and FOSS developers mostly ignore. The exception is BlackBerry, which tried something decent by integrating QNX, with stellar results.


So where does a technical end-user find a relatively secure solution for a phone/small tablet, even if they have to pay a little for it - even $500 extra?

> The exception is BlackBerry, which tried something decent by integrating QNX, with stellar results.

Are you saying Blackberry 10 is significantly more secure than Android and iOS?


I'm not sure you can get a secure solution at that price. The more secure systems simultaneously have high development costs and almost no buyers. This means they're usually OEM licenses for custom work instead of mass market. So the trick would be a smart group of people licensing OKL4 or something, then putting it and a hardened Android on a specific phone.

As far as BlackBerry goes, no, I'm not saying it's more secure. I'm saying that using the QNX OS made it more secure, reliable, and responsive than it was. That's because of QNX's great design.


Indeed, I was trying to give well-known examples. Some of the more interesting, not widely-deployed PaX mitigations are more accurate here.


If a zero-day is found in standard Android (ala Stagefright) it's possible it won't be exploitable on Copperhead because of the hardened malloc, overflow protections, bounds sanitizing etc.


The complete text is:

"Protection from zero-days: Prevents many vulnerabilities and makes exploits harder"

So they don't claim to provide immunity from zero days, but


"Zero-day protection" is marketing-speak for what security engineers call "exploit mitigations." Of course they don't prevent exploits; they mitigate them. Pretty typical that the marketing term is an exaggeration of the more accurate engineering one.


Exploitation can certainly be outright prevented. For example, automatic integer overflow checking reduces any integer overflow vulnerabilities to at most a denial of service attack (clean abort). _FORTIFY_SOURCE (including the more dynamic implementation in CopperheadOS) does the same thing for a large subset of buffer overflows, as does -fsanitize=bounds which is globally enabled.
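
As a rough illustration of the difference this makes (generic gcc/clang flags, not Copperhead's exact build configuration): each of the bugs below is reduced to a clean abort at run time instead of silently corrupting memory.

    /* mitigation_demo.c -- illustrative bugs that the mitigations above turn into
     * clean aborts. Example build (flags are generic, not Copperhead's setup):
     *   clang -O2 -D_FORTIFY_SOURCE=2 -fsanitize=bounds,signed-integer-overflow \
     *         -fsanitize-undefined-trap-on-error -o mitigation_demo mitigation_demo.c */
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int add(int a, int b) {
        return a + b;          /* signed overflow: trapped instead of wrapping */
    }

    int main(int argc, char **argv) {
        (void)argv;

        int buf[4];
        volatile int idx = argc + 10;
        buf[idx] = 1;          /* out-of-bounds write: caught by -fsanitize=bounds */

        char dst[8];
        strcpy(dst, "this source is much longer than dst");  /* caught by _FORTIFY_SOURCE */

        printf("%d\n", add(INT_MAX, argc));                  /* overflows for any argc >= 1 */
        return 0;
    }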



From what little I know about Android vulnerabilities, most 0-days on Android have to do with interfacing directly with C/C++ and optimizations.


It does seem like a ridiculous claim


It's how you interpret "protection." They didn't say you'd be "protected from all" zero-days.


It looks like CopperheadOS has managed to upstream quite a number of mitigations! Bravo to them. This makes them a sort of OpenBSD-style research OS for the Android world, and everyone benefits from their work.

https://copperhead.co/android/docs/technical_overview


"It will not support devices outside of the Nexus and Pixel lines."

This is really sad to me. :/ As far as we've come, everything mobile is still irritatingly device-specific.


Blame the OEMs for that. The driver situation on ARM devices is quite dire.


Is this a side effect of mobile devices having extremely tight requirements for power usage and packaging? I could see how that, and a huge number of functions being packaged into a single SoC, would make each board design far less generalizable, from either one generation to the next or one form factor to the next. On the other hand, that doesn't preclude drivers for the relevant chipsets being more easily available. Do phone manufacturers write their own drivers for all these chips as a matter of course, or do the chip providers ship something with what they put out?


I think in reality it's simply a matter of everyone having their own build of the kernel and needing their own set of kernel modules. Google could have solved this in some way with Android by going for something closer to the way drivers work on Windows: a single long-term-support kernel that everyone builds their drivers against.

But instead they told everyone to just build it themselves... resulting in the current situation. They could have also solved the updating problem in the same way.

No big deal though, someone will make this build on other platforms if there's interest. It's all open source and I'm sure some 14 year olds on XDA are already racing to make it build on their phone from 1982. Then I'll probably flash it. Because that's apparently who I trust to write my phone ROM.


Google's certainly at some fault here, e.g. by choosing to long-term fork Linux instead of trying to upstream their patches.

But my understanding is that they couldn't "just" do something like what Windows does, they don't get to boss the OEMs around in that capacity, some of the big ones effectively have their own Android forks (Samsung) or have already forked (Amazon), and if Google starts bossing them around they're just as likely to fully fork it as be brought into the fold.


I would point out that OEMs are unable to even release an Android device without Google's permission if they want any of their devices to have access to the Play Store. Google has an extensive compatibility suite that devices must pass to even qualify to request permission to release, and Google regularly changes the requirements to enforce what they feel is a good platform. They also mandate over 20 preinstalled applications, define default search settings, and the placement of app icons on the home screen.

They ABSOLUTELY do get to boss OEMs around in that capacity.


Amazon doesn't do this, and ships their own store. What do you think would happen if Samsung forked Android and didn't ship the Play Store? App developers would then just upload their apps to both stores, and Google's power over Android as a whole would be entirely eroded.

So no, they definitely don't get to boss the OEMs around. Unlike Microsoft with Windows they don't get to just say "you can't ship Windows^HAndroid anymore".

They do have a bit of hold over the OEMs in the form of it being a PITA to fork, access to Google's own apps etc. So I'm not saying they have no leverage, but it's a lot less than what Microsoft has, and definitely not enough to say "my way or the highway".


This is why Google cut a deal with Samsung to bring them more in line. (Terms are secret, as usual.) Samsung seemed like it was likely to fork before, and it's probably the only manufacturer large enough to draw app developers with it. So Google did some under the table things. Possibly actually a patent lawsuit threat, given that the deal they announced included a patent sharing arrangement.


Not at all. Vendors just fork the kernel at the fixed release each Android version ships with (e.g. 3.4, 3.10, 3.18), merge all their proprietary, GPL-violating nonsense into that tree, ship (and publish) that kernel source tree, but never merge back into Linus' tree.

As a result, the device is supported only by the kernel they provide, rather than by generic Linux. And their kernel never gets updates.


The PC has PCI enumeration, which allows the kernel to ask each device on the bus for an ID code. This in turn allows the kernel to load the appropriate drivers.

ARM does not really have this. Unless you know exactly what device is on what address range, etc., you risk sending the wrong signals and filling its firmware with garbage or something.

This means you can't really cook up a generic kernel package and apply it across the product range as you can on a PC.


U-Boot and Linux have the Flattened Device Tree (FDT)[1], which allows the kernel to load and configure the appropriate drivers.

Your statement is true, however: someone knowledgeable about the actual hardware has to create the correct FDT. This is slowly getting better and easier.

The more intractable problem, as others observed, is that ARM hardware vendors tend to throw together a custom kernel for a given ARM processor and board and then abandon it.

[1] https://en.wikipedia.org/wiki/Device_tree
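
To illustrate the mechanism: a Linux platform driver declares which device-tree nodes it can handle via "compatible" strings, which play roughly the role PCI vendor/device IDs play on the PC. A hedged sketch of a kernel-module fragment follows; the "acme,example-uart" string and the driver name are made-up examples.

    /* Sketch of how a Linux platform driver binds to a device described in the
     * device tree via "compatible" strings (the FDT analogue of PCI IDs). */
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        /* Called when a DT node with a matching "compatible" string is found. */
        dev_info(&pdev->dev, "bound via device tree match\n");
        return 0;
    }

    static const struct of_device_id example_of_match[] = {
        { .compatible = "acme,example-uart" },
        { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, example_of_match);

    static struct platform_driver example_driver = {
        .probe  = example_probe,
        .driver = {
            .name           = "example-uart",
            .of_match_table = example_of_match,
        },
    };
    module_platform_driver(example_driver);

    MODULE_LICENSE("GPL");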


This seems like a fatal flaw in the design of ARM.


It is because ARM has never really been standardized in embedded devices.


It is very dire and painful :-(


"Devices will be supported until Google drops support from the Android Open Source Project. Google guarantees major version updates for at least two years after launch. Security updates are guaranteed for three years after launch along with 1.5 years after the last device is sold."

As someone that is still using a phone from 2012, this is problematic since I have no intention of getting a new phone that often. Is there no stable, secure, and open combination of OS and smartphone out there?


CyanogenMod still supports security updates for the Galaxy S, a model released in 2010.[0] Is it still worth using a six-year-old phone? Maybe not, but if your device is lucky enough to have support, it can last you a long time.

[0]https://download.cyanogenmod.org/?type=nightly&device=galaxy...


I recently lost my Note 3 and subsequently bricked my HTC "Pico" Explorer - bought as a dev phone and GPS device due to the notorious GPS issues on my first Android, a Galaxy S. So now I'm back (typing this in Firefox) on my ancient Galaxy S, running a recent CyanogenMod build [Ed: 11 nightly, based on Android 4.4.4 KitKat. I believe I tried 12 - but it failed to install].

It kinda works. I had to force a move from Dalvik to ART, and force HW rendering - there are quite a few stalls. I haven't tried encrypting the device; it's already slow enough.

Ironically(?) Firefox works better than Chrome. Signal seems to work OK (only for SMS so far due to the missing network effect; I don't message anyone with Signal installed).

I'm considering just getting a new battery (replaceable battery, yay!) - as it is cheaper than getting an LG G3, Nexus 5 (no memory card slot, bleh) or a Sony Xperia Z3 (waterproof). I wouldn't really say it's usable - but a G2 or G3 might be OK. [Ed: The low RAM on the early devices appears to me to be the worst issue. I wouldn't recommend buying a device with less than a gig of RAM; the Galaxy S has ~384 MB.]


CyanogenMod doesn't have all of the source code or the keys to sign low-level firmware, so it's not possible for them to provide full security updates after vendors drop support. They only provide security updates for the Android Open Source Project components. CopperheadOS is security-oriented so it's not going to keep devices alive when it becomes impossible to provide proper security updates.


Those are nightly builds, which CM discourages the use of.

The three most recent stable "snapshots" are from 2015-09-01 00:25:00, 2015-06-26 07:37:01, and 2014-11-12 08:14:51. Given Google has been pushing monthly security updates for a long time, I'd have massive doubts about the status of security updates for it.


No, there isn't. Microsoft supports Windows Phone/Mobile for three years, which is I think the longest of anyone, but obviously it's not open.


This is also the biggest concern when putting Android into non-phone/non-tablet (i.e. non-consumer) embedded products, which can run for years.



Interesting development. Good to see another project trying to improve the mobile situation for Android. Getting us off iOS or Android without losing all the good apps probably isn't happening due to lock-in effects and patent issues. At the least, projects that try to allow safer use of Android apps will benefit a lot of people.


You had me interested until "..based on Android."

What we need is more original codebases in the mobile ecosystem, not endless modifications on top of the same old shaky foundation.


That shaky foundation also has a large ecosystem of useful software. I guess it doesn't need to be "based on Android" to run Android apps, though.

I'm not too familiar with security on Android (much more familiar with iOS) – what are the weakest links?


Android 0-days at this point are so numerous that I find they are relatively worthless compared to time invested elsewhere. Other people seem to have the same experience (I've seen offers of double that amount for iOS remotes): http://blogs-images.forbes.com/andygreenberg/files/2012/11/e...


Android vulnerabilities aren't more numerous than iOS vulnerabilities. The key difference is that 97% of Android devices do not get security updates. Attackers generally have no need for 0-day vulnerabilities. Few users have Nexus devices.


Do they have a comparison table for how it fares compared to CyanogenMod? I'm interested in whether this is a good OS if I want my personal data completely isolated from any other app, regardless of their initial permissions.


How does this compare to CyanogenMod? Security is definitely important but how much should I trust this OS?

Both CyanogenMod and CopperheadOS should be able to run smoothly without Google-specific apps, I believe, which is nice for some.


CyanogenMod's priority is not security; it prioritizes things like stability, compatibility, non-technical end-user experience, and relationships with developers. For example:[1]

> Rule #1 of CM is "Don't break apps".

> We recognize that there are nefarious apps out there, and many of our users would actually understand how to use permission controls (and the implications of using them). With our huge userbase, we have a responsibility to ensure that applications aren't running in a hostile environment, and work as designed.

> We are all privacy advocates at CM, but I am not willing to compromise our good relationship with application developers in order to implement features like this.

Much more here:

https://plus.google.com/+SteveKondik/posts/iLrvqH8tbce

[1] See Kondik's 29 May 2013 message in https://jira.cyanogenmod.org/browse/CYAN-28?page=com.atlassi...


And unlike Debian[1], RedHat[2], OpenSUSE[3], and FreeBSD[4], they don't even have a documented list of security advisories and which update fixed them. Given the haphazard nature of CyanogenMod's stable builds (including some devices receiving none for months when there are high-profile security issues, yet no statement anywhere that those devices have any less security support than any other), I have little faith in the security processes of CyanogenMod. And that's before you even start to touch on the privacy aspects, which his post is mostly about, as far as I can tell!

[1] https://www.debian.org/security/ [2] https://access.redhat.com/security/security-updates/#/securi... [3] https://www.suse.com/support/update/ [4] https://www.freebsd.org/security/advisories.html


The problem is that the vast majority of applications actually require Google services to run on Android devices. Running an Android device without GApps is pretty much pointless unless you are really using it for a very specific purpose.


F-Droid apps disagree.


The vast majority of apps on the Play Store do not actually require Play Services. There are also plenty of open-source apps on F-Droid without any such dependency, and other mostly proprietary alternatives like the Amazon app store.


It would be (relatively) easy to put together another suite of utilities offering the same API as the standard GApps, in order to allow 3rd party apps that depend on that API to function. Rumor has it Samsung has just such a project in the works, in case they need to punch the eject button on their relationship with Google: http://www.digitaltrends.com/mobile/samsungs-secret-mission-...


It's in progress.

https://microg.org/


> Running an Android device without GApps is pretty much pointless unless you are really using it for a very very specific purpose.

This is an exaggeration; you can find plenty of solutions that don't require Gapps, AFAIK. However, I don't know that the typical end-user would be happy solving that problem or using imperfect workarounds. For one thing, you need some sort of GApps solution to access the Play store, AFAIK.

> the vast majority of applications actually require Google services to run

There are plenty of GApps substitutes for people who want them; I've done some homework on it, but haven't gotten around to trying them, so all of the following is "AFAIK"; it's just based on a bunch of reading.

----

These appear to be the two leading substitutes:

* TKApps: 6 editions containing varying subsets of Google Apps

http://forum.xda-developers.com/android/software/tk-gapps-t3...

http://forum.xda-developers.com/android/help/qa-tk-gapps-hel...

* MicroG Project: My impression is that this is the most carefully engineered option. In addition to its full suite, I think it gives you the option of installing only one component, the stripped-down GMSCore, which provides substitutes for several Google Play Services APIs.

https://github.com/microg

http://forum.xda-developers.com/showthread.php?t=1715375

http://forum.xda-developers.com/android/apps-games/app-micro...

----

Also of interest:

* Blankstore: For minimal Play Store access, or maybe just the API to keep other apps happy.

https://github.com/mar-v-in/BlankStore

http://forum.xda-developers.com/showpost.php?p=29115263&...

* Fakestore: (I don't have a link, but it's the same concept as Blankstore)

* BeansTown106's Gapps: (I don't have a link, but your search engine should find it), "very complete and work quite well" per a dev of OmniROM, a leading Android fork

* GApps Browser: Google Apps sandboxed, so you can log in there without being logged in within your web browser, for confidentiality.

https://f-droid.org/repository/browse/?fdfilter=browser&fdca...


Besides USB Armory, are there any other open-source hardened hardware solutions?


The YubiKey NEO can be programmed with JavaCard. There's a handful of applets on their GitHub.


The YubiKey NEO hardware is not open source though, right?


I couldn't find which Android version it's based on.


6.0.1_r20 for the Nexus 5 and Nexus 9, and 6.0.1_r24 for the Nexus 5X. You can see the versions on the downloads page (it uses AOSP_TAG.COPPERHEADOS_TIMESTAMP). It's the same as stock. It will move to 7.0 shortly after it's released.


Google needs to step it up


Built by drug dealers for drug dealers.


This is a honeypot for the NSA.


That's called a baseband processor.

But no, in all seriousness, Copperhead (and AOSP itself) are open source. Go audit it for NSA backdoors yourself if you're worried about that.


Do you mean, "this is a honeypot put out by the NSA, to see who wants this"? Or do you mean, "this is an attempt by the developers to see how the NSA tries to subvert, sabotage, or otherwise compromise their project"?


I know a guy who works on this. It's definitely not (not that you have any reason to trust me).


What makes you say that?


This project seems interesting but largely impractical until a truly independent FOSS app store exists with a wide selection + security track record as good as Google Play or iTunes.

I don't see how it gets there with such a narrow hardware selection.



This makes the project a lot more interesting to me.


https://f-droid.org/: Open source; the apps they list include the following note: "This version is built and signed by F-Droid, and guaranteed to correspond to the source tarball below."


> security track record as good as Google Play or iTunes

Do they have great security track records? I know a lot of the integrations like games into their systems are terribly insecure.


No. That is why I'm using them as the minimum standard. :p



